• Redis Series - Configuration File Summary


    Redis can start without a configuration file, in which case it uses its built-in defaults. In production, however, Redis is usually configured through a configuration file, typically named redis.conf.

    The format of redis.conf is as follows:

    keyword argument1 argument2 ... argumentN
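
    For instance, these directives all follow the keyword-plus-arguments format (the values shown are common defaults, used purely as illustration):

    port 6379
    timeout 300
    save 900 1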


    redis.conf configuration parameters:

    1)daemonize yes|no

    By default Redis does not run as a daemon; set this option to yes to enable daemon mode.

    2)pidfile /var/run/redis_6379.pid

    When Redis runs as a daemon, it writes its pid to /var/run/redis.pid by default; pidfile lets you specify a different pid file. What is it for? It simply records the process id (for example, so that scripts can find and stop the server).

    3)port 6379

    Redis listens on port 6379 by default; port specifies the port to listen on.

    4)bind 127.0.0.1

    The host address to bind to.

    5)unixsocket /tmp/redis.sock

    Specifies the path of the Unix socket Redis listens on.

    6)timeout 300

    Close the connection after a client has been idle for this many seconds.

    7)loglevel debug(0)|verbose(1)|notice(2)|warning(3)  (warning is the highest level)

    Specifies the log level; the default is verbose.

    8)logfile /var/log/redis_6379.log

    Log file. The default is standard output (stdout); note that if Redis runs as a daemon and logfile is set to stdout, logs are sent to /dev/null.

    9)syslog-enabled no|yes

    When set to yes, logs are also sent to the system log; the default is no.

    10)syslog-ident redis

    Specifies the syslog identity.

    11)syslog-facility local0

    Specifies the syslog facility; must be user or one of local0 through local7.

    12)databases 16

    Sets the number of databases. The default database is DB 0; use select <dbid> to switch, where dbid ranges from 0 to databases - 1 (here 16 - 1 = 15).
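
    A quick illustration with redis-cli (the key name foo is hypothetical; the prompt shows the selected database in brackets):

    redis-cli
    127.0.0.1:6379> SELECT 1
    OK
    127.0.0.1:6379[1]> SET foo bar
    OK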

    13)save <seconds> <changes>

    Save a snapshot to the database file if at least the given number of write operations occurred within the given number of seconds; several conditions can be combined. The defaults are:

    save 900 1      # after 900 seconds if at least 1 key changed
    save 300 10     # after 300 seconds if at least 10 keys changed
    save 60 10000   # after 60 seconds if at least 10000 keys changed

    Note: to disable persistence entirely (never write the dataset to disk), simply comment out all the save lines.

    14)rdbcompression yes|no

      Whether to compress string objects with LZF when dumping the dataset to the data file; the default is yes. Setting it to no saves CPU, but the data file will be larger than the LZF-compressed one.

    15)dbfilename dump.rdb

    Specifies the database file name; the default is dump.rdb.

    16)dir /var/lib/redis/6379

    Specifies the directory where the database file is stored.

    17)slaveof <masterip> <masterport>

      When this server is a slave, sets the IP address and port of its master.

      The SLAVEOF command turns the current server into a slave of the specified server. If the current server is already a slave of some master, executing SLAVEOF host port makes it stop synchronizing with the old master, discard the old dataset, and start synchronizing with the new one.

      In addition, executing SLAVEOF NO ONE on a slave turns replication off and promotes it back to a master; the dataset it has already synchronized is not discarded.

    Because SLAVEOF NO ONE keeps the synchronized dataset, a slave can be promoted to be the new master when the old master fails, allowing uninterrupted operation.
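
    A minimal failover sketch with redis-cli (the hosts and port are hypothetical):

    redis-cli -h 192.168.1.11 -p 6379 SLAVEOF NO ONE             # promote this slave to master
    redis-cli -h 192.168.1.12 -p 6379 SLAVEOF 192.168.1.11 6379  # repoint another slave at it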

    18)masterauth <master-password>

      If the master requires a password, this is the password the slave uses to authenticate with it; if it is wrong, the slave's synchronization requests will be rejected.

    19)slave-serve-stale-data yes|no

      While the link with the master is down, or while the slave is still synchronizing with the master: if slave-serve-stale-data is yes, the slave still answers client requests (possibly with stale data); if no, it replies with an error. The default is yes.

    20)requirepass foobared

      Sets the password clients must supply to connect to Redis.
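
      For example, with requirepass foobared set, a client authenticates like this (a sketch; foobared is just the sample password from the directive above):

      redis-cli -a foobared PING    # or run: AUTH foobared, inside an interactive session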

    21)maxclients 128

      Sets the maximum number of simultaneous client connections. The default is unlimited; setting maxclients 0 also means no limit.

    22)maxmemory <bytes>

      Specifies Redis' maximum memory limit. Redis loads its data into memory at startup; once the limit is reached, Redis first removes keys that have expired or are about to expire, and if the limit is still exceeded, write operations fail while reads keep working.

    23)maxmemory-policy volatile-lru | allkeys-lru | volatile-random | allkeys-random | volatile-ttl | noeviction

      The policy used to evict data when Redis reaches the memory limit; the default is volatile-lru.

      volatile-lru -> apply LRU (least recently used) to the "expire set": keys given a TTL with the expire command are added to this set, and expired/least-recently-used keys are evicted first. If evicting the whole expire set still does not free enough memory, an OOM error is returned.
      allkeys-lru -> apply the LRU algorithm to all keys.
      volatile-random -> evict randomly chosen keys from the expire set until enough memory is free; if evicting the whole expire set is still not enough, an OOM error is returned.
      allkeys-random -> evict randomly chosen keys among all keys until enough memory is free.
      volatile-ttl -> evict keys from the expire set with the smallest remaining TTL (those closest to expiring) first.
      noeviction -> do not evict anything; simply return an OOM error.
      If losing expired data causes no problems for the application and the workload is write-heavy, allkeys-lru is recommended; a sample configuration follows below.
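
      A minimal sketch combining a memory limit with an eviction policy (the 100mb value is an arbitrary illustration):

      maxmemory 100mb
      maxmemory-policy allkeys-lru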

    24)appendonly no|yes

      Whether to log every write operation (append-only file). The default is no, meaning data is only written to disk asynchronously via snapshots; without AOF, recent writes may be lost on a power failure.

    25)appendfilename appendonly.aof

      Specifies the name of the append-only (AOF) log file; the default is appendonly.aof.

    26)appendfsync everysec|no|always

      Specifies when the AOF log is synced to disk. no means letting the operating system flush the cached data to the AOF file on its own (fast);

      always means fsync is called after every write operation to push the data to the AOF file on disk (slow, but safest);

      everysec means fsync once per second (a compromise, and the default).
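
      A minimal AOF configuration sketch using the defaults discussed above:

      appendonly yes
      appendfilename appendonly.aof
      appendfsync everysec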

    27)slowlog-log-slower-than 10000

      Threshold for recording a command in the slow log, in microseconds. A negative value disables the slow log; 0 logs every command.

    28)slowlog-max-len 1024

      The maximum number of entries kept in the slow log.
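
      The slow log can then be inspected at runtime with redis-cli (these are the standard SLOWLOG subcommands):

      redis-cli SLOWLOG GET 10    # fetch up to the 10 most recent slow entries
      redis-cli SLOWLOG LEN       # number of entries currently stored
      redis-cli SLOWLOG RESET     # clear the slow log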

    29)hash-max-zipmap-entries 512

      The maximum number of fields for which a hash keeps the special memory-saving (zipmap) encoding; beyond this, the regular hash encoding is used.

    30)hash-max-zipmap-value 64

      The maximum field value size for the special encoding; when a value exceeds this, the regular hash encoding is used.

    31)activerehashing yes

      Specifies whether active rehashing is enabled; it is on by default.

      Redis can be started with the redis.conf configuration file using the following command:

    /usr/local/redis/bin/redis-server /usr/local/redis/etc/redis.conf

     ===================

    The remaining sections below are excerpted from a stock redis.conf:


    ############################# EVENT NOTIFICATION ##############################

    # Redis can notify Pub/Sub clients about events happening in the key space.
    # This feature is documented at http://redis.io/topics/notifications
    #
    # For instance if keyspace events notification is enabled, and a client
    # performs a DEL operation on key "foo" stored in the Database 0, two
    # messages will be published via Pub/Sub:
    #
    # PUBLISH __keyspace@0__:foo del
    # PUBLISH __keyevent@0__:del foo
    #
    # It is possible to select the events that Redis will notify among a set
    # of classes. Every class is identified by a single character:
    #
    # K Keyspace events, published with __keyspace@<db>__ prefix.
    # E Keyevent events, published with __keyevent@<db>__ prefix.
    # g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
    # $ String commands
    # l List commands
    # s Set commands
    # h Hash commands
    # z Sorted set commands
    # x Expired events (events generated every time a key expires)
    # e Evicted events (events generated when a key is evicted for maxmemory)
    # A Alias for g$lshzxe, so that the "AKE" string means all the events.
    #
    # The "notify-keyspace-events" takes as argument a string that is composed
    # of zero or multiple characters. The empty string means that notifications
    # are disabled.
    #
    # Example: to enable list and generic events, from the point of view of the
    # event name, use:
    #
    # notify-keyspace-events Elg
    #
    # Example 2: to get the stream of the expired keys subscribing to channel
    # name __keyevent@0__:expired use:
    #
    # notify-keyspace-events Ex
    #
    # By default all notifications are disabled because most users don't need
    # this feature and the feature has some overhead. Note that if you don't
    # specify at least one of K or E, no events will be delivered.
    notify-keyspace-events ""
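
    # As an illustration, expired-key events can be enabled at runtime and then
    # observed with redis-cli (a hypothetical session; database 0 assumed):
    #
    #   redis-cli CONFIG SET notify-keyspace-events Ex
    #   redis-cli PSUBSCRIBE '__keyevent@0__:expired'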

    ############################### ADVANCED CONFIG ###############################

    # Hashes are encoded using a memory efficient data structure when they have a
    # small number of entries, and the biggest entry does not exceed a given
    # threshold. These thresholds can be configured using the following directives.
    hash-max-ziplist-entries 512
    hash-max-ziplist-value 64

    # Similarly to hashes, small lists are also encoded in a special way in order
    # to save a lot of space. The special representation is only used when
    # you are under the following limits:
    list-max-ziplist-entries 512
    list-max-ziplist-value 64

    # Sets have a special encoding in just one case: when a set is composed
    # of just strings that happen to be integers in radix 10 in the range
    # of 64 bit signed integers.
    # The following configuration setting sets the limit in the size of the
    # set in order to use this special memory saving encoding.
    set-max-intset-entries 512

    # Similarly to hashes and lists, sorted sets are also specially encoded in
    # order to save a lot of space. This encoding is only used when the length and
    # elements of a sorted set are below the following limits:
    zset-max-ziplist-entries 128
    zset-max-ziplist-value 64
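
    # A quick way to observe these special encodings (key name hypothetical; the
    # reported encoding depends on the limits above and on the Redis version):
    #
    #   redis-cli HSET smallhash field value
    #   redis-cli OBJECT ENCODING smallhash    # -> "ziplist" while under the limits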

    # HyperLogLog sparse representation bytes limit. The limit includes the
    # 16 bytes header. When an HyperLogLog using the sparse representation crosses
    # this limit, it is converted into the dense representation.
    #
    # A value greater than 16000 is totally useless, since at that point the
    # dense representation is more memory efficient.
    #
    # The suggested value is ~ 3000 in order to have the benefits of
    # the space efficient encoding without slowing down too much PFADD,
    # which is O(N) with the sparse encoding. The value can be raised to
    # ~ 10000 when CPU is not a concern, but space is, and the data set is
    # composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
    hll-sparse-max-bytes 3000

    # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
    # order to help rehashing the main Redis hash table (the one mapping top-level
    # keys to values). The hash table implementation Redis uses (see dict.c)
    # performs a lazy rehashing: the more operation you run into a hash table
    # that is rehashing, the more rehashing "steps" are performed, so if the
    # server is idle the rehashing is never complete and some more memory is used
    # by the hash table.
    #
    # The default is to use this millisecond 10 times every second in order to
    # actively rehash the main dictionaries, freeing memory when possible.
    #
    # If unsure:
    # use "activerehashing no" if you have hard latency requirements and it is
    # not a good thing in your environment that Redis can reply from time to time
    # to queries with 2 milliseconds delay.
    #
    # use "activerehashing yes" if you don't have such hard requirements but
    # want to free memory asap when possible.
    activerehashing yes

    # The client output buffer limits can be used to force disconnection of clients
    # that are not reading data from the server fast enough for some reason (a
    # common reason is that a Pub/Sub client can't consume messages as fast as the
    # publisher can produce them).
    #
    # The limit can be set differently for the three different classes of clients:
    #
    # normal -> normal clients including MONITOR clients
    # slave -> slave clients
    # pubsub -> clients subscribed to at least one pubsub channel or pattern
    #
    # The syntax of every client-output-buffer-limit directive is the following:
    #
    # client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
    #
    # A client is immediately disconnected once the hard limit is reached, or if
    # the soft limit is reached and remains reached for the specified number of
    # seconds (continuously).
    # So for instance if the hard limit is 32 megabytes and the soft limit is
    # 16 megabytes / 10 seconds, the client will get disconnected immediately
    # if the size of the output buffers reach 32 megabytes, but will also get
    # disconnected if the client reaches 16 megabytes and continuously overcomes
    # the limit for 10 seconds.
    #
    # By default normal clients are not limited because they don't receive data
    # without asking (in a push way), but just after a request, so only
    # asynchronous clients may create a scenario where data is requested faster
    # than it can read.
    #
    # Instead there is a default limit for pubsub and slave clients, since
    # subscribers and slaves receive data in a push fashion.
    #
    # Both the hard or the soft limit can be disabled by setting them to zero.
    client-output-buffer-limit normal 0 0 0
    client-output-buffer-limit slave 256mb 64mb 60
    client-output-buffer-limit pubsub 32mb 8mb 60
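
    # These limits can also be changed at runtime (a sketch; CONFIG SET takes the
    # class and the three values as a single quoted argument):
    #
    #   redis-cli CONFIG SET client-output-buffer-limit "pubsub 64mb 16mb 90"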

    # Redis calls an internal function to perform many background tasks, like
    # closing connections of clients in timeout, purging expired keys that are
    # never requested, and so forth.
    #
    # Not all tasks are performed with the same frequency, but Redis checks for
    # tasks to perform according to the specified "hz" value.
    #
    # By default "hz" is set to 10. Raising the value will use more CPU when
    # Redis is idle, but at the same time will make Redis more responsive when
    # there are many keys expiring at the same time, and timeouts may be
    # handled with more precision.
    #
    # The range is between 1 and 500, however a value over 100 is usually not
    # a good idea. Most users should use the default of 10 and raise this up to
    # 100 only in environments where very low latency is required.
    hz 10

    # When a child rewrites the AOF file, if the following option is enabled
    # the file will be fsync-ed every 32 MB of data generated. This is useful
    # in order to commit the file to the disk more incrementally and avoid
    # big latency spikes.
    aof-rewrite-incremental-fsync yes
