• Linux Environment Setup


    1、rz/sz: command not found
      yum -y install lrzsz

     nc: yum -y install nc

     Switching the yum repo: https://www.cnblogs.com/zero-gg/p/8477809.html, https://www.cnblogs.com/hzdwwzz/p/9623942.html

     Installing an rpm manually: rpm -ivh xinetd-2.3.14-34.el6.x86_64.rpm

     Installing gcc manually: check the kernel/release version first, then download the matching rpm packages from http://vault.centos.org/7.4.1708/os/x86_64/Packages/

    cat /etc/redhat-release 
    CentOS Linux release 7.4.1708 (Core)

       Checking the firewall state on CentOS 7: firewall-cmd --state

     Mounting a disk: https://www.cnblogs.com/cqwo/p/7920730.html?tdsourcetag=s_pctim_aiomsg

    1. fdisk -l to inspect the disks
    2. mkfs.ext3 /dev/vdb to format the disk
    3. Create a new directory or mount onto an existing one: mount /dev/vdb /newroot/
    4. Edit fstab so the disk mounts automatically at boot: vi /etc/fstab and add the line /dev/vdb /wwwroot ext3 defaults 0 0
    5. sync to flush caches to disk, then init 6 to reboot the server (a consolidated sketch follows this list)
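
    Putting the steps together, a minimal sketch, assuming the new disk is /dev/vdb and /wwwroot is the mount point (mkfs destroys any data already on the device):

    fdisk -l                                                  # confirm the new device is visible
    mkfs.ext3 /dev/vdb                                        # format; wipes existing data on /dev/vdb
    mkdir -p /wwwroot                                         # create the mount point
    mount /dev/vdb /wwwroot                                   # mount for the current session
    echo '/dev/vdb /wwwroot ext3 defaults 0 0' >> /etc/fstab  # persist across reboots
    df -h /wwwroot                                            # verify the mount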

    2、Installing the JDK

    1. Use java -version, ps -ef | grep java, or whereis java to check whether a JDK is already installed
    2. Downloading a JDK below 1.8 from the official site currently requires a login; here is a shared account for convenience. Account: 2696671285@qq.com Password: Oracle123
    3. Or download directly on Linux: wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u141-b15/336fa29ff2bb4ef291e347e091f7f4a7/jdk-8u221-linux-x64.tar.gz"
    4. Create /usr/local/lib/java, then extract with tar -zxvf jdk-8u221-linux-x64.tar.gz -C /usr/local/lib/java/
    5. vim /etc/profile, append the following at the end of the file, then run source /etc/profile (a verification sketch follows)
    export JAVA_HOME=/usr/local/lib/java/jdk1.8.0_221  
    export JRE_HOME=${JAVA_HOME}/jre  
    export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib  
    export PATH=${JAVA_HOME}/bin:$PATH
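
    A quick sanity check after sourcing the profile (paths as configured above):

    source /etc/profile
    java -version      # should report version 1.8.0_221
    echo $JAVA_HOME    # should print /usr/local/lib/java/jdk1.8.0_221
    which java         # should resolve to $JAVA_HOME/bin/java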

     3、Installing Tomcat

      Extract to /usr/local/lib/tomcat and start with ./startup.sh. Startup may fail with "Neither the JAVA_HOME nor the JRE_HOME environment variable is defined. At least one of these environment variables is needed to run this program".

    Add the following at the top of setclasspath.sh, then start again (a quick check follows the exports).

    export JAVA_HOME=/usr/local/lib/java/jdk1.8.0_221
    export JRE_HOME=${JAVA_HOME}/jre
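
    With the variables defined, startup should succeed; a quick check, assuming Tomcat's default HTTP port 8080:

    cd /usr/local/lib/tomcat/bin
    ./startup.sh
    tail -n 20 ../logs/catalina.out   # look for the "Server startup in ... ms" line
    curl -I http://localhost:8080/    # the default page should answer with HTTP 200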

     4、Installing Memcached

    1. Install from the CentOS repos: yum install memcached
    2. Start command: ./memcached -d -m 256 -u root -p 11211 -c 1024 -P /opt/memcached/memcached.pid
    3. -d runs memcached as a daemon.
      -m is the amount of memory memcached may use, in MB (default 64 MB).
      -M returns an error when memory is exhausted rather than evicting items.
      -u is the user to run memcached as (root here); when starting as root this flag is required.
      -p is the TCP port memcached listens on, preferably above 1024.
      -c is the maximum number of concurrent connections (default 1024).
      -P is the path of the memcached pid file.

      4. Wrap the start command in a shell script: cd /usr/bin/, then run mem_start.sh (a liveness check follows the script)

    #!/bin/sh
    # Start memcached as a daemon: 256 MB cache, run as root, TCP port 11211,
    # up to 1024 concurrent connections, pid file under /opt/memcached
    # (the /opt/memcached directory must already exist).
    /usr/bin/memcached -d -m 256 -u root -p 11211 -c 1024 -P /opt/memcached/memcached.pid
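
    To confirm the daemon is answering, the text protocol's stats command works over nc (installed in section 1):

    printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | head -n 5   # should print STAT lines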

     5、Installing ZooKeeper

    1. Extract to /usr/local/lib/zookeeper-3.4.10
    2. Copy zoo_sample.cfg and rename the copy to zoo.cfg
    3. Edit zoo.cfg and add
      dataDir=/usr/local/lib/zookeeper-3.4.10  
      dataLogDir=/opt/zklog

      4. Edit the profile: vim /etc/profile, add the configuration below, then reload with source /etc/profile

    export ZOOKEEPER_HOME=/usr/local/lib/zookeeper-3.4.10
    export PATH=$PATH:$ZOOKEEPER_HOME/bin

      5. Start it with ./zkServer.sh start (status check below)
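
    With ZOOKEEPER_HOME/bin on the PATH, the server can be checked from anywhere:

    zkServer.sh status               # a standalone install should report "Mode: standalone"
    zkCli.sh -server 127.0.0.1:2181  # interactive shell; "ls /" lists the root znodes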

     6、Installing Kafka
      Extract to /usr/local/lib/kafka_2.11-2.2.0, go into config, and edit server.properties
    # Licensed to the Apache Software Foundation (ASF) under one or more
    # contributor license agreements.  See the NOTICE file distributed with
    # this work for additional information regarding copyright ownership.
    # The ASF licenses this file to You under the Apache License, Version 2.0
    # (the "License"); you may not use this file except in compliance with
    # the License.  You may obtain a copy of the License at
    #
    #    http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    # see kafka.server.KafkaConfig for additional details and defaults
    
    ############################# Server Basics #############################
    
    # The id of the broker. This must be set to a unique integer for each broker.
    broker.id=0
    
    ############################# Socket Server Settings #############################
    
    # The address the socket server listens on. It will get the value returned from 
    # java.net.InetAddress.getCanonicalHostName() if not configured.
    #   FORMAT:
    #     listeners = listener_name://host_name:port
    #   EXAMPLE:
    #     listeners = PLAINTEXT://your.host.name:9092
    listeners=PLAINTEXT://:9092
    
    # Hostname and port the broker will advertise to producers and consumers. If not set, 
    # it uses the value for "listeners" if configured.  Otherwise, it will use the value
    # returned from java.net.InetAddress.getCanonicalHostName().
    advertised.listeners=PLAINTEXT://172.18.105.142:9092
    
    # Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
    #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
    
    # The number of threads that the server uses for receiving requests from the network and sending responses to the network
    num.network.threads=3
    
    # The number of threads that the server uses for processing requests, which may include disk I/O
    num.io.threads=8
    
    # The send buffer (SO_SNDBUF) used by the socket server
    socket.send.buffer.bytes=102400
    
    # The receive buffer (SO_RCVBUF) used by the socket server
    socket.receive.buffer.bytes=102400
    
    # The maximum size of a request that the socket server will accept (protection against OOM)
    socket.request.max.bytes=104857600
    
    
    ############################# Log Basics #############################
    
    # A comma separated list of directories under which to store log files
    log.dirs=/opt/kafkalog
    
    # The default number of log partitions per topic. More partitions allow greater
    # parallelism for consumption, but this will also result in more files across
    # the brokers.
    num.partitions=1
    
    # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
    # This value is recommended to be increased for installations with data dirs located in RAID array.
    num.recovery.threads.per.data.dir=1
    
    ############################# Internal Topic Settings  #############################
    # The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
    # For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
    offsets.topic.replication.factor=1
    transaction.state.log.replication.factor=1
    transaction.state.log.min.isr=1
    
    ############################# Log Flush Policy #############################
    
    # Messages are immediately written to the filesystem but by default we only fsync() to sync
    # the OS cache lazily. The following configurations control the flush of data to disk.
    # There are a few important trade-offs here:
    #    1. Durability: Unflushed data may be lost if you are not using replication.
    #    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
    #    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
    # The settings below allow one to configure the flush policy to flush data after a period of time or
    # every N messages (or both). This can be done globally and overridden on a per-topic basis.
    
    # The number of messages to accept before forcing a flush of data to disk
    #log.flush.interval.messages=10000
    
    # The maximum amount of time a message can sit in a log before we force a flush
    #log.flush.interval.ms=1000
    
    ############################# Log Retention Policy #############################
    
    # The following configurations control the disposal of log segments. The policy can
    # be set to delete segments after a period of time, or after a given size has accumulated.
    # A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
    # from the end of the log.
    
    # The minimum age of a log file to be eligible for deletion due to age
    log.retention.hours=168
    
    # A size-based retention policy for logs. Segments are pruned from the log unless the remaining
    # segments drop below log.retention.bytes. Functions independently of log.retention.hours.
    #log.retention.bytes=1073741824
    
    # The maximum size of a log segment file. When this size is reached a new log segment will be created.
    log.segment.bytes=1073741824
    
    # The interval at which log segments are checked to see if they can be deleted according
    # to the retention policies
    log.retention.check.interval.ms=300000
    
    ############################# Zookeeper #############################
    
    # Zookeeper connection string (see zookeeper docs for details).
    # This is a comma separated host:port pairs, each corresponding to a zk
    # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
    # You can also append an optional chroot string to the urls to specify the
    # root directory for all kafka znodes.
    zookeeper.connect=172.18.105.142:2181
    
    # Timeout in ms for connecting to zookeeper
    zookeeper.connection.timeout.ms=6000
    
    
    ############################# Group Coordinator Settings #############################
    
    # The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
    # The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
    # The default value for this is 3 seconds.
    # We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
    # However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
    group.initial.rebalance.delay.ms=0

     Kafka operations

    # Start Kafka in the background
    bin/kafka-server-start.sh config/server.properties &
    After it starts, run jps to confirm the Kafka process is listed
     
    Create a topic: bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test01
    Possible error: Error while executing topic command : Replication factor: 2 larger than available brokers: 0
    List topics:
    bin/kafka-topics.sh -list -zookeeper localhost:2181
    Start a producer: bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test01
    Start a consumer: bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test01 --from-beginning
     
    Cause of the error: no zookeeper directory with a myid file had been created under the kafka directory
    Fix:
    cd kafka_2.11-1.1.0
      mkdir zookeeper
      cd zookeeper
      touch myid
      echo 0 > myid
    Restart kafka and it works (a quick verification follows)
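
    Once the broker is up, describing the topic confirms it registered correctly (same --zookeeper flag style as the commands above, valid for Kafka 2.x):

    bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test01
    # expect a line like: Topic:test01  PartitionCount:1  ReplicationFactor:1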

     7、Installing Redis

     8、Installing MongoDB

      Windows: register mongod as a service

    D:\UCAP\mongodb\bin\mongod.exe --port 27017 --logpath D:\UCAP\mongodb\logs\mongodb.log --dbpath D:\UCAP\mongodb\data --directoryperdb --config "D:\UCAP\mongodb\mongod.cfg" --install --serviceName "MongoDB"
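
    Once registered under the service name MongoDB, the service is driven with the standard Windows commands:

    net start MongoDB    # start the service
    net stop MongoDB     # stop the service
    sc delete MongoDB    # remove the service registration if no longer needed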

     9、Installing MySQL

    https://www.jianshu.com/p/276d59cbc529
