• MQ


    MQ installation on Linux
    Installation package: MQ_V9.0.3_TRIAL_CDR_LNX_ON_X86_64.tar.gz
    Create the mqm user and group:
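    A typical sequence on Linux looks like the following (a sketch; adjust group membership and home directory to local policy):
    # groupadd mqm
    # useradd -g mqm -m mqm
    Then extract the installation package: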
    tar -xf MQ_V9.0.3_TRIAL_CDR_LNX_ON_X86_64.tar.gz
    Change to the extracted directory:
    cd MQServer
    Accept the license:
    ./mqlicense.sh -text_only
    Then choose 1 to accept the license.
    Install the RPM packages (the file names below are from older MQ releases; use the package files actually present in the extracted MQServer directory):
    rpm -ivh MQSeriesRuntime-6.0.0-0.i386.rpm
    rpm -ivh MQSeriesSDK-6.0.0-0.i386.rpm
    rpm -ivh MQSeriesJava-6.0.0-0.i386.rpm
    rpm -ivh MQSeriesClient-6.0.0-0.i386.rpm
    rpm -ivh MQSeriesSamples-6.0.0-0.i386.rpm
    rpm -ivh MQSeriesServer-6.0.0-0.i386.rpm
    rpm -ivh MQSeriesMan-7.5.0-2.x86_64.rpm
    rpm -ivh MQSeriesMsg_Zh_CN-7.5.0-2.x86_64.rpm
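    As a quick sanity check of the installation (assuming the default /opt/mqm installation path), display the installed version and build level:
    # /opt/mqm/bin/dspmqver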
    --------------------------------------------------
    Install WebSphere MQ V7.0.1 on both the server and the client machine.
    Both machines should have mqm and mqtest users that belong to the mqm group.
    The user IDs and group IDs of mqm and mqtest must be the same on both machines; example creation commands follow the listing below.
    Machine 1:
    id mqm: uid=301(mqm), gid=301(mqm)
    id mqtest: uid=501(mqtest), gid=301(mqm)
    Machine 2:
    id mqm: uid=301(mqm), gid=301(mqm)
    id mqtest: uid=501(mqtest), gid=301(mqm)
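    If the users do not yet exist, they can be created with explicit IDs so that both machines match (a sketch reusing the IDs from the listing above):
    # groupadd -g 301 mqm
    # useradd -u 301 -g mqm mqm
    # useradd -u 501 -g mqm mqtest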
    Setting up NFS on HP-UX
    In this example, the NFS server is hpate1, the exported path is /HA, and the NFS client is hostile.
    NFS server configuration on HP-UX
    1) Log in to the server as root and perform the configuration.
    2) Edit /etc/rc.config.d/nfsconf and set NFS_SERVER and START_MOUNTD to 1:
    # more /etc/rc.config.d/nfsconf
    NFS_SERVER=1
    START_MOUNTD=1
    3) Run the nfs.server start script:
    /sbin/init.d/nfs.server start
    4) Edit /etc/exports and add an entry for each directory to be exported:
    # more /etc/exports
    /HA
    5) Force the NFS daemon (nfsd) to reread /etc/exports:
    # /usr/sbin/exportfs -a
    6) Verify the NFS export with showmount -e:
    # showmount -e
    export list for hpate1:
    /HA (everyone)
    #
    NFS client setup on HP-UX
    1) Log in as root.
    2) Check that the directory you are going to mount on the NFS client is empty or does not exist.
    3) If the directory does not exist, create it:
    # mkdir /HA
    4) Add an entry to /etc/fstab so that the file system is mounted automatically at boot:
    nfs_server:/nfs_server_dir /client_dir nfs defaults 0 0
    # more /etc/fstab
    hpate1:/HA /HA nfs defaults 0 0
    5) Mount the remote file system:
    # /usr/sbin/mount -a
    6) Verify the NFS mount:
    # mount -v
    hpate1:/HA on /HA type nfs rsize=32768,wsize=32768,NFSv4,dev=4000004 on Tue Aug 3 14:15:18 2010
    -----------------------------------
    Run amqmfsck to verify that the file system complies with the POSIX standard
    In this example, Server1 = stallion.in.ibm.com and Server2 = saigon.in.ibm.com.
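    The tests below use the shared directory /HA/mqdata. If it does not exist yet, create it on the exported file system and make it writable by the mqm group (a sketch; adjust permissions to local policy):
    # mkdir /HA/mqdata
    # chown mqm:mqm /HA/mqdata
    # chmod 775 /HA/mqdata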
    1) Run amqmfsck with no options to check basic locking:
    su - mqtest
    export PATH=/opt/mqm/bin:$PATH
    On Server1:
    $ amqmfsck /HA/mqdata
    The tests on the directory completed successfully.
    On Server2:
    $ amqmfsck /HA/mqdata
    The tests on the directory completed successfully.
    2) Run amqmfsck with the -c option to test concurrent writes to the directory:
    On Server1:
    $ amqmfsck -c /HA/mqdata
    Start a second copy of this program with the same parameters on another server.
    Writing to test file.
    This will normally complete within about 60 seconds.
    .................
    The tests on the directory completed successfully.
    On Server2:
    $ amqmfsck -c /HA/mqdata
    Start a second copy of this program with the same parameters on another server.
    Writing to test file.
    This will normally complete within about 60 seconds.
    .................
    The tests on the directory completed successfully.
    3) Run amqmfsck with the -w option on both machines at the same time, to test waiting for and releasing a lock on the directory:
    On Server1:
    $ amqmfsck -wv /HA/mqdata
    System call: stat("/HA/mqdata",&statbuf)
    System call: statvfs("/HA/mqdata")
    System call: fd = open("/HA/mqdata/amqmfsck.lkw",O_CREAT|O_RDWR,0666)
    System call: fchmod(fd,0666)
    System call: fstat(fd,&statbuf)
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Start a second copy of this program with the same parameters on another server.
    File lock acquired.
    Press Enter or terminate this process to release the lock.
    On Server2:
    $ amqmfsck -wv /HA/mqdata
    System call: stat("/HA/mqdata",&statbuf)
    System call: statvfs("/HA/mqdata")
    System call: fd = open("/HA/mqdata/amqmfsck.lkw",O_CREAT|O_RDWR,0666)
    System call: fchmod(fd,0666)
    System call: fstat(fd,&statbuf)
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    File lock acquired.
    Press Enter or terminate this process to release the lock.
    System call: close(fd)
    File lock released.
    The tests on the directory completed successfully.
    -------------------------------
    Setting up a multi-instance queue manager
    In this example, Server1 = stallion.in.ibm.com and Server2 = saigon.in.ibm.com.
    Server 1
    1) Create the logs and qmgrs directories in the shared file system:
    # mkdir /HA/logs
    # mkdir /HA/qmgrs
    # chown -R mqm:mqm /HA
    # chmod -R ug+rwx /HA
    2) Create the queue manager:
    # crtmqm -ld /HA/logs -md /HA/qmgrs -q QM1
    WebSphere MQ queue manager created.
    Directory '/HA/qmgrs/QM1' created.
    Creating or replacing default objects for QM1.
    Default objects statistics : 65 created. 0 replaced. 0 failed.
    Completing setup.
    Setup completed.
    #
    3) Display the queue manager configuration details on Server1 so they can be copied:
    # dspmqinf -o command QM1
    4) Copy the output of the command above into Notepad. The output looks like this:
    addmqinf -s QueueManager -v Name=QM1 -v Directory=QM1 -v Prefix=/var/mqm -v
    DataPath=/HA/qmgrs/QM1
    Server 2
    1) Paste and run the command saved to Notepad in step 4:
    # addmqinf -s QueueManager -v Name=QM1 -v Directory=QM1 -v Prefix=/var
    /mqm -v DataPath=/HA/qmgrs/QM1
    WebSphere MQ configuration information added.
    #
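    For reference, addmqinf adds a QueueManager stanza to /var/mqm/mqs.ini on Server 2; with the values above it should look roughly like this (exact layout may differ between releases):
    QueueManager:
       Name=QM1
       Prefix=/var/mqm
       Directory=QM1
       DataPath=/HA/qmgrs/QM1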
    2) Start the active instance of the queue manager on Server 1:
    # strmqm -x QM1
    WebSphere MQ queue manager 'QM1' starting.
    5 log records accessed on queue manager 'QM1' during the log replay phase.
    Log replay for queue manager 'QM1' complete.
    Transaction manager state recovered for queue manager 'QM1'.
    WebSphere MQ queue manager 'QM1' started.
    #
    3) Start the standby instance of the queue manager on Server 2:
    # strmqm -x QM1
    WebSphere MQ queue manager QM1 starting.
    A standby instance of queue manager QM1 has been started.
    The active instance is running elsewhere.
    #
    4) Verify the setup with dspmq -x:
    On Server1 (stallion)
    # dspmq -x
    QMNAME(QM1) STATUS(Running)
    INSTANCE(stallion) MODE(Active)
    INSTANCE(saigon) MODE(Standby)
    #
    On Server2 (saigon)
    # dspmq -x
    QMNAME(QM1) STATUS(Running as standby)
    INSTANCE(stallion) MODE(Active)
    INSTANCE(saigon) MODE(Standby)
    ------------------------------------
    Setting up automatic client reconnection
    In this example, Server1 = lins.in.ibm.com and Server2 = gtstress42.in.ibm.com. On Server 1:
    1) Create a local queue named Q with defpsist(yes).
    2) Create a SVRCONN channel named CHL.
    3) Start a listener on port 9898:
    # runmqsc QM1
    5724-H72
    (C) Copyright IBM Corp. 1994, 2009. ALL RIGHTS RESERVED.
    Starting MQSC for queue manager QM1.
    def ql(Q) defpsist(yes)
    1 : def ql(Q) defpsist(yes)
    AMQ8006: WebSphere MQ queue created.
    define channel(CHL) chltype(SVRCONN) trptype(tcp) MCAUSER('mqm') replace
    2 : define channel(CHL) chltype(SVRCONN) trptype(tcp) MCAUSER('mqm') replace
    AMQ8014: WebSphere MQ channel created.
    end
    # runmqlsr -m QM1 -t tcp -p 9898 &
    [1] 26866
    ]# 5724-H72
    (C) Copyright IBM Corp. 1994, 2009. ALL RIGHTS RESERVED.
    4) On Server 1, set the MQSERVER environment variable to a connection list that names both servers:
    export MQSERVER=<channel name>/tcp/'<server1>(<port>),<server2>(<port>)'
    For example: export MQSERVER=CHL/TCP/'9.122.163.105(9898),9.122.163.77(9898)'
    5) On Server 2, start a listener on port 9898:
    # runmqlsr -m QM1 -t tcp -p 9898 &
    [1] 24467
    [root@gtstress42 ~]# 5724-H72
    (C) Copyright IBM Corp. 1994, 2009. ALL RIGHTS RESERVED.
    -------------------------------------------
    Running the client auto-reconnect samples
    Server 1
    1) Run the amqsphac sample program:
    # amqsphac Q QM1
    Sample AMQSPHAC start
    target queue is Q
    message < Message 1 >
    message < Message 2 >
    message < Message 3 >
    message < Message 4 >
    message < Message 5 >
    message < Message 6 >
    message < Message 7 >
    message < Message 8 >
    message < Message 9 >
    message < Message 10 >
    2) In another window on Server 1, end the queue manager with the -is option so that it switches over to the standby instance:
    Server 1 (new session):
    # endmqm -is QM1
    WebSphere MQ queue manager 'QM1' ending.
    WebSphere MQ queue manager 'QM1' ended, permitting switchover to a standby instance.
    3) Verify that a switchover has occurred:
    On Server2:
    # dspmq -x -o standby
    QMNAME(QM1) STANDBY(Permitted)
    INSTANCE(gtstress42.in.ibm.com) MODE(Active)
    4) The client connection is broken and then reconnects to the standby (now active) queue manager:
    On Server 1
    16:12:28 : EVENT : Connection Reconnecting (Delay: 57ms)
    10/06/2010 04:12:35 PM AMQ9508: Program cannot connect to the queue manager.
    10/06/2010 04:12:35 PM AMQ9999: Channel program ended abnormally.
    16:12:37 : EVENT : Connection Reconnecting (Delay: 0ms)
    10/06/2010 04:12:37 PM AMQ9508: Program cannot connect to the queue manager.
    10/06/2010 04:12:37 PM AMQ9999: Channel program ended abnormally.
    16:12:37 : EVENT : Connection Reconnected
    16:12:38 : EVENT : Connection Broken
    message < Message 11 >
    message < Message 12 >
    message < Message 13 >
    message < Message 14 >
    message < Message 15 >
    message < Message 16 >
    message < Message 17 >
    message < Message 18 >
    message < Message 19 >
    message < Message 20 >
    message < Message 21 >
    message < Message 22 >
    5) On Server 1, run the amqsghac sample program; it retrieves the messages:
    # amqsghac Q QM1
    Sample AMQSGHAC start
    10/06/2010 04:14:33 PM AMQ9508: Program cannot connect to the queue manager.
    10/06/2010 04:14:33 PM AMQ9999: Channel program ended abnormally.
    message < Message 1 >
    message < Message 2 >
    message < Message 3 >
    message < Message 4 >
    message < Message 5 >
    message < Message 6 >
    message < Message 7 >
    message < Message 8 >
    message < Message 9 >
    message < Message 10 >
    message < Message 11 >
    message < Message 12 >
    message < Message 13 >
    message < Message 14 >
    message < Message 15 >
    message < Message 16 >
    message < Message 17 >
    message < Message 18 >
    message < Message 19 >
    message < Message 20 >
    message < Message 21 >
    message < Message 22 >
• Original article: https://www.cnblogs.com/skyzy/p/9226845.html