• Adding a new storage node to a FastDFS file server


    I. Preface:

      Earlier, I set up a FastDFS file server with one tracker and one storage node deployed on the same machine. It has been in production for almost half a year, and only about 100 GB of the original 1 TB remains, so the capacity now needs to be expanded by another 1 TB.
    There are two options: 1. attach additional storage directly to the existing storage node, so that the single storage reaches 2 TB; 2. deploy a separate storage node as group2, so that one tracker serves two storage groups, which makes maintenance and backup migration easier.
      The second option is used here.

    II. Prerequisites:

      An existing FastDFS file server. If you do not have one yet, see my earlier post: https://www.cnblogs.com/lazyInsects/p/9486354.html.
      Another new server with 1 TB of disk, running CentOS 6.5 or later, with the firewall disabled (a sketch for disabling it follows).
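
      On CentOS 6 the firewall is the iptables service, so a minimal way to disable it (assuming no other firewall rules are needed on this host) is:

      # stop the firewall for the current session
      service iptables stop
      # keep it disabled across reboots
      chkconfig iptables off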

    III. Setup:

      
    1. Copy the .gz packages from the installation directory to /usr/local/src and extract each library to be installed, e.g. tar -zxvf xxx.tar.gz. These .gz files are on my Baidu Netdisk; contact me if you need them.
    2. Install gcc, gcc-c++ and perl first. The commands are:
      yum install gcc
      yum install gcc-c++
      yum install perl
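
      Equivalently, all three packages can be installed in one non-interactive command (assuming a reachable yum repository):
      yum install -y gcc gcc-c++ perl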
     3. Install libfastcommon first (this is why gcc, gcc-c++ and perl were installed in the previous step)
    
        cd libfastcommon
        ./make.sh
        ./make.sh install
    
        4. Install FastDFS
    
        cd FastDFS
        ./make.sh
        ./make.sh install
    
        5. Configure storage.conf
        cd /etc/fdfs
        cp storage.conf.sample storage.conf
    
        # create the directories
        mkdir -p /opt/fdfs/storage
        mkdir -p /data/fdfs
    
        vi storage.conf
    
        Items to modify (a quick verification sketch follows this list):
        base_path=/opt/fdfs/storage
        group_name=group1
        store_path0=/data/fdfs
        tracker_server=<tracker server IP>:22122
        http.server_port=8080
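
        A quick way to confirm the edits took effect is to grep the keys just changed (paths as used in this post):
        grep -E '^(base_path|group_name|store_path0|tracker_server|http.server_port)' /etc/fdfs/storage.conf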
    
        6. Copy the commands to the target directory
        cp /usr/bin/fdfs_storaged /usr/local/bin
        cp /usr/bin/fdfs_monitor /usr/local/bin
    
    
        7. Create symbolic links for the headers and libraries
    
        ln -s /usr/include/fastcommon /usr/local/include/fastcommon 
        ln -s /usr/include/fastdfs /usr/local/include/fastdfs 
        ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
        ln -s /usr/lib64/libfdfsclient.so /usr/local/lib/libfdfsclient.so
    
        8. Start the storage daemon
    
        fdfs_storaged /etc/fdfs/storage.conf start
    
        Restart:
    fdfs_storaged /etc/fdfs/storage.conf restart
    (or: /usr/bin/restart.sh /usr/local/bin/fdfs_storaged /etc/fdfs/storage.conf)

    9. Check that the service started
    netstat -unltp | grep fdfs

    Check the log:
    cat /opt/fdfs/storage/logs/storaged.log
    or
    tail -f /opt/fdfs/storage/logs/storaged.log

    10. Monitor the storage status
    # show storage status
    fdfs_monitor /etc/fdfs/storage.conf
    fdfs_monitor /home/fastdfs/fdfs_conf/storage.conf

    Part 2: Install Nginx on the storage server

    1. Make sure the dependencies are installed
    yum install -y openssl-devel pcre-devel zlib-devel

    2. Extract the relevant archives under /usr/local/src

    3. Compile and install Nginx
    Create the nginx user and the corresponding directories and files; refer to the Nginx installation documentation.
    cd /usr/local/src/nginx-1.8.0

    ---- run the following as one command ----
    ./configure --prefix=/usr --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/media/disk1/nginx/logs/error.log --pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock --user=nginx --group=nginx --with-pcre=/usr/local/src/pcre-8.35 --with-http_ssl_module --with-http_flv_module --with-http_gzip_static_module --http-log-path=/media/disk1/nginx/logs/access.log --http-client-body-temp-path=/media/disk1/nginx/client --http-proxy-temp-path=/media/disk1/nginx/proxy --http-fastcgi-temp-path=/media/disk1/nginx/fcgi --with-http_stub_status_module --with-poll_module --with-http_realip_module --add-module=/usr/local/src/ngx_cache_purge-2.3 --add-module=/usr/local/src/fastdfs-nginx-module/src --with-cc-opt=-Wno-error

    Install:
    make
    make install

    4. Configure mod_fastdfs.conf
    cp /usr/local/src/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/

    Modify the following items in /etc/fdfs/mod_fastdfs.conf:
    base_path=/data/fdfs
    tracker_server=<tracker server IP>:22122   # set to the tracker server address
    url_have_group_name=true                   # change to true
    store_path0=/data/fdfs                     # the data storage path, same as in storage.conf
    group_name=group1
    group_count = 2

    And append the following at the end of the file:
    [group1]
    group_name=group1
    storage_server_port=23000
    store_path_count=1
    store_path0=/data/fdfs

    5. Create the M00 / M01 links
    ln -s /data/fdfs/data /data/fdfs/data/M00
    ln -s /opt/file/fdfs/data /opt/file/fdfs/data/M01

    6. Copy http.conf and mime.types
    cp /usr/local/src/FastDFS/conf/http.conf /etc/fdfs/
    cp /usr/local/src/FastDFS/conf/mime.types /etc/fdfs/

    7. Set up the init script and the Nginx configuration file
    Refer to the nginx and nginx.conf files under the storage folder of the installation directory.
    location ~/group[1-3]/M00 {
        root /home/fastdfs/data;
        ngx_fastdfs_module;
    }
    After changing the corresponding IPs, copy them to /etc/init.d/ and /etc/nginx/ respectively.
    cp /usr/sbin/nginx /etc/init.d/
    If this step is missed you will see the error "[emerg]: getpwnam("nginx") failed", which means the nginx user has not been created.
    chmod u+x nginx

    8. Start Nginx
    cd /usr/sbin/
    ./nginx        # or: service nginx start
        Restart Nginx:
        ./nginx -s reload
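
        When the configuration is changed later, the file can be validated before a reload with nginx -t (paths according to the configure options above):
        /usr/sbin/nginx -t -c /etc/nginx/nginx.conf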
    9. Check the running processes
    ps aux | grep nginx
    cat /media/disk1/nginx/logs/access.log
    cat /media/disk1/nginx/logs/error.log
    If you are not sure where the logs are, check the paths in the nginx.conf configuration.

    After the tracker and storage are installed, test a file upload.

    Upload:
    /usr/local/bin/fdfs_upload_file /etc/fdfs/client.conf /usr/local/a.txt

    Test:
    /usr/local/bin/fdfs_test /etc/fdfs/client.conf upload /usr/local/a.txt
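
    On success, fdfs_upload_file prints a file ID of the form <group>/M00/...; with url_have_group_name=true and Nginx listening on 8080, that ID maps directly onto a download URL. A hypothetical example (the actual filename is generated by FastDFS and will differ):

    # suppose the upload returned the file ID:
    #   group2/M00/00/00/wKgAbc1234.txt        (hypothetical)
    # it should then be reachable through the storage's Nginx:
    curl -I http://<storage2 IP>:8080/group2/M00/00/00/wKgAbc1234.txt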

     Tracker configuration files:

      client.conf

    # connect timeout in seconds
    # default value is 30s
    connect_timeout=30
    
    # network timeout in seconds
    # default value is 30s
    network_timeout=60
    
    # the base path to store log files
    base_path=/opt/fdfs/tracker
    
    # tracker_server can ocur more than once, and tracker_server format is
    #  "host:port", host can be hostname or ip address
    tracker_server=10.74.11.118:22122
    
    #standard log level as syslog, case insensitive, value list:
    ### emerg for emergency
    ### alert
    ### crit for critical
    ### error
    ### warn for warning
    ### notice
    ### info
    ### debug
    log_level=info
    
    # if use connection pool
    # default value is false
    # since V4.05
    use_connection_pool = false
    
    # connections whose the idle time exceeds this time will be closed
    # unit: second
    # default value is 3600
    # since V4.05
    connection_pool_max_idle_time = 3600
    
    # if load FastDFS parameters from tracker server
    # since V4.05
    # default value is false
    load_fdfs_parameters_from_tracker=false
    
    # if use storage ID instead of IP address
    # same as tracker.conf
    # valid only when load_fdfs_parameters_from_tracker is false
    # default value is false
    # since V4.05
    use_storage_id = false
    
    # specify storage ids filename, can use relative or absolute path
    # same as tracker.conf
    # valid only when load_fdfs_parameters_from_tracker is false
    # since V4.05
    storage_ids_filename = storage_ids.conf
    
    
    #HTTP settings
    http.tracker_server_port=8080
    
    #use "#include" directive to include HTTP other settiongs
    include http.conf

    tracker.conf

    # is this config file disabled
    # false for enabled
    # true for disabled
    disabled=false
    
    # bind an address of this host
    # empty for bind all addresses of this host
    bind_addr=
    
    # the tracker server port
    port=22122
    
    # connect timeout in seconds
    # default value is 30s
    connect_timeout=30
    
    # network timeout in seconds
    # default value is 30s
    network_timeout=60
    
    # the base path to store data and log files
    base_path=/opt/fdfs/tracker
    
    # max concurrent connections this server supported
    max_connections=256
    
    # accept thread count
    # default value is 1
    # since V4.07
    accept_threads=1
    
    # work thread count, should <= max_connections
    # default value is 4
    # since V2.00
    work_threads=4
    
    # the method of selecting group to upload files
    # 0: round robin
    # 1: specify group
    # 2: load balance, select the max free space group to upload file
    store_lookup=2
    
    # which group to upload file
    # when store_lookup set to 1, must set store_group to the group name
    store_group=group2
    
    # which storage server to upload file
    # 0: round robin (default)
    # 1: the first server order by ip address
    # 2: the first server order by priority (the minimal)
    store_server=0
    
    # which path(means disk or mount point) of the storage server to upload file
    # 0: round robin
    # 2: load balance, select the max free space path to upload file
    store_path=2
    
    # which storage server to download file
    # 0: round robin (default)
    # 1: the source storage server which the current file uploaded to
    download_server=0
    
    # reserved storage space for system or other applications.
    # if the free(available) space of any stoarge server in 
    # a group <= reserved_storage_space, 
    # no file can be uploaded to this group.
    # bytes unit can be one of follows:
    ### G or g for gigabyte(GB)
    ### M or m for megabyte(MB)
    ### K or k for kilobyte(KB)
    ### no unit for byte(B)
    ### XX.XX% as ratio such as reserved_storage_space = 10%
    reserved_storage_space = 1%
    
    #standard log level as syslog, case insensitive, value list:
    ### emerg for emergency
    ### alert
    ### crit for critical
    ### error
    ### warn for warning
    ### notice
    ### info
    ### debug
    log_level=info
    
    #unix group name to run this program, 
    #not set (empty) means run by the group of current user
    run_by_group=
    
    #unix username to run this program,
    #not set (empty) means run by current user
    run_by_user=
    
    # allow_hosts can ocur more than once, host can be hostname or ip address,
    # "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or
    # host[01-08,20-25].domain.com, for example:
    # allow_hosts=10.0.1.[1-15,20]
    # allow_hosts=host[01-08,20-25].domain.com
    allow_hosts=*
    
    # sync log buff to disk every interval seconds
    # default value is 10 seconds
    sync_log_buff_interval = 10
    
    # check storage server alive interval seconds
    check_active_interval = 120
    
    # thread stack size, should >= 64KB
    # default value is 64KB
    thread_stack_size = 64KB
    
    # auto adjust when the ip address of the storage server changed
    # default value is true
    storage_ip_changed_auto_adjust = true
    
    # storage sync file max delay seconds
    # default value is 86400 seconds (one day)
    # since V2.00
    storage_sync_file_max_delay = 86400
    
    # the max time of storage sync a file
    # default value is 300 seconds
    # since V2.00
    storage_sync_file_max_time = 300
    
    # if use a trunk file to store several small files
    # default value is false
    # since V3.00
    use_trunk_file = false 
    
    # the min slot size, should <= 4KB
    # default value is 256 bytes
    # since V3.00
    slot_min_size = 256
    
    # the max slot size, should > slot_min_size
    # store the upload file to trunk file when it's size <=  this value
    # default value is 16MB
    # since V3.00
    slot_max_size = 16MB
    
    # the trunk file size, should >= 4MB
    # default value is 64MB
    # since V3.00
    trunk_file_size = 64MB
    
    # if create trunk file advancely
    # default value is false
    # since V3.06
    trunk_create_file_advance = false
    
    # the time base to create trunk file
    # the time format: HH:MM
    # default value is 02:00
    # since V3.06
    trunk_create_file_time_base = 02:00
    
    # the interval of create trunk file, unit: second
    # default value is 38400 (one day)
    # since V3.06
    trunk_create_file_interval = 86400
    
    # the threshold to create trunk file
    # when the free trunk file size less than the threshold, will create 
    # the trunk files
    # default value is 0
    # since V3.06
    trunk_create_file_space_threshold = 20G
    
    # if check trunk space occupying when loading trunk free spaces
    # the occupied spaces will be ignored
    # default value is false
    # since V3.09
    # NOTICE: set this parameter to true will slow the loading of trunk spaces 
    # when startup. you should set this parameter to true when neccessary.
    trunk_init_check_occupying = false
    
    # if ignore storage_trunk.dat, reload from trunk binlog
    # default value is false
    # since V3.10
    # set to true once for version upgrade when your version less than V3.10
    trunk_init_reload_from_binlog = false
    
    # the min interval for compressing the trunk binlog file
    # unit: second
    # default value is 0, 0 means never compress
    # FastDFS compress the trunk binlog when trunk init and trunk destroy
    # recommand to set this parameter to 86400 (one day)
    # since V5.01
    trunk_compress_binlog_min_interval = 0
    
    # if use storage ID instead of IP address
    # default value is false
    # since V4.00
    use_storage_id = false
    
    # specify storage ids filename, can use relative or absolute path
    # since V4.00
    storage_ids_filename = storage_ids.conf
    
    # id type of the storage server in the filename, values are:
    ## ip: the ip address of the storage server
    ## id: the server id of the storage server
    # this paramter is valid only when use_storage_id set to true
    # default value is ip
    # since V4.03
    id_type_in_filename = ip
    
    # if store slave file use symbol link
    # default value is false
    # since V4.01
    store_slave_file_use_link = false
    
    # if rotate the error log every day
    # default value is false
    # since V4.02
    rotate_error_log = false
    
    # rotate error log time base, time format: Hour:Minute
    # Hour from 0 to 23, Minute from 0 to 59
    # default value is 00:00
    # since V4.02
    error_log_rotate_time=00:00
    
    # rotate error log when the log file exceeds this size
    # 0 means never rotates log file by log file size
    # default value is 0
    # since V4.02
    rotate_error_log_size = 0
    
    # keep days of the log files
    # 0 means do not delete old log files
    # default value is 0
    log_file_keep_days = 0
    
    # if use connection pool
    # default value is false
    # since V4.05
    use_connection_pool = false
    
    # connections whose the idle time exceeds this time will be closed
    # unit: second
    # default value is 3600
    # since V4.05
    connection_pool_max_idle_time = 3600
    
    # HTTP port on this tracker server
    http.server_port=8080
    
    # check storage HTTP server alive interval seconds
    # <= 0 for never check
    # default value is 30
    http.check_alive_interval=30
    
    # check storage HTTP server alive type, values are:
    #   tcp : connect to the storge server with HTTP port only, 
    #        do not request and get response
    #   http: storage check alive url must return http status 200
    # default value is tcp
    http.check_alive_type=tcp
    
    # check storage HTTP server alive uri/url
    # NOTE: storage embed HTTP server support uri: /status.html
    http.check_alive_uri=/status.html

    storage1 configuration file:

    storage.conf

    # is this config file disabled
    # false for enabled
    # true for disabled
    disabled=false
    
    # the name of the group this storage server belongs to
    #
    # comment or remove this item for fetching from tracker server,
    # in this case, use_storage_id must set to true in tracker.conf,
    # and storage_ids.conf must be configed correctly.
    group_name=group1
    
    # bind an address of this host
    # empty for bind all addresses of this host
    bind_addr=
    
    # if bind an address of this host when connect to other servers 
    # (this storage server as a client)
    # true for binding the address configed by above parameter: "bind_addr"
    # false for binding any address of this host
    client_bind=true
    
    # the storage server port
    port=23000
    
    # connect timeout in seconds
    # default value is 30s
    connect_timeout=30
    
    # network timeout in seconds
    # default value is 30s
    network_timeout=60
    
    # heart beat interval in seconds
    heart_beat_interval=30
    
    # disk usage report interval in seconds
    stat_report_interval=60
    
    # the base path to store data and log files
    base_path=/opt/fdfs/storage
    
    # max concurrent connections the server supported
    # default value is 256
    # more max_connections means more memory will be used
    max_connections=256
    
    # the buff size to recv / send data
    # this parameter must more than 8KB
    # default value is 64KB
    # since V2.00
    buff_size = 256KB
    
    # accept thread count
    # default value is 1
    # since V4.07
    accept_threads=1
    
    # work thread count, should <= max_connections
    # work thread deal network io
    # default value is 4
    # since V2.00
    work_threads=4
    
    # if disk read / write separated
    ##  false for mixed read and write
    ##  true for separated read and write
    # default value is true
    # since V2.00
    disk_rw_separated = true
    
    # disk reader thread count per store base path
    # for mixed read / write, this parameter can be 0
    # default value is 1
    # since V2.00
    disk_reader_threads = 1
    
    # disk writer thread count per store base path
    # for mixed read / write, this parameter can be 0
    # default value is 1
    # since V2.00
    disk_writer_threads = 1
    
    # when no entry to sync, try read binlog again after X milliseconds
    # must > 0, default value is 200ms
    sync_wait_msec=50
    
    # after sync a file, usleep milliseconds
    # 0 for sync successively (never call usleep)
    sync_interval=0
    
    # storage sync start time of a day, time format: Hour:Minute
    # Hour from 0 to 23, Minute from 0 to 59
    sync_start_time=00:00
    
    # storage sync end time of a day, time format: Hour:Minute
    # Hour from 0 to 23, Minute from 0 to 59
    sync_end_time=23:59
    
    # write to the mark file after sync N files
    # default value is 500
    write_mark_file_freq=500
    
    # path(disk or mount point) count, default value is 1
    store_path_count=2
    
    # store_path#, based 0, if store_path0 not exists, it's value is base_path
    # the paths must be exist
    store_path0=/data/fdfs
    store_path1=/opt/file/fdfs
    
    # subdir_count  * subdir_count directories will be auto created under each 
    # store_path (disk), value can be 1 to 256, default value is 256
    subdir_count_per_path=256
    
    # tracker_server can ocur more than once, and tracker_server format is
    #  "host:port", host can be hostname or ip address
    tracker_server=10.74.11.118:22122
    
    #standard log level as syslog, case insensitive, value list:
    ### emerg for emergency
    ### alert
    ### crit for critical
    ### error
    ### warn for warning
    ### notice
    ### info
    ### debug
    log_level=info
    
    #unix group name to run this program, 
    #not set (empty) means run by the group of current user
    run_by_group=
    
    #unix username to run this program,
    #not set (empty) means run by current user
    run_by_user=
    
    # allow_hosts can ocur more than once, host can be hostname or ip address,
    # "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or
    # host[01-08,20-25].domain.com, for example:
    # allow_hosts=10.0.1.[1-15,20]
    # allow_hosts=host[01-08,20-25].domain.com
    allow_hosts=*
    
    # the mode of the files distributed to the data path
    # 0: round robin(default)
    # 1: random, distributted by hash code
    file_distribute_path_mode=0
    
    # valid when file_distribute_to_path is set to 0 (round robin), 
    # when the written file count reaches this number, then rotate to next path
    # default value is 100
    file_distribute_rotate_count=100
    
    # call fsync to disk when write big file
    # 0: never call fsync
    # other: call fsync when written bytes >= this bytes
    # default value is 0 (never call fsync)
    fsync_after_written_bytes=0
    
    # sync log buff to disk every interval seconds
    # must > 0, default value is 10 seconds
    sync_log_buff_interval=10
    
    # sync binlog buff / cache to disk every interval seconds
    # default value is 60 seconds
    sync_binlog_buff_interval=10
    
    # sync storage stat info to disk every interval seconds
    # default value is 300 seconds
    sync_stat_file_interval=300
    
    # thread stack size, should >= 512KB
    # default value is 512KB
    thread_stack_size=512KB
    
    # the priority as a source server for uploading file.
    # the lower this value, the higher its uploading priority.
    # default value is 10
    upload_priority=10
    
    # the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
    # multi aliases split by comma. empty value means auto set by OS type
    # default values is empty
    if_alias_prefix=
    
    # if check file duplicate, when set to true, use FastDHT to store file indexes
    # 1 or yes: need check
    # 0 or no: do not check
    # default value is 0
    check_file_duplicate=0
    
    # file signature method for check file duplicate
    ## hash: four 32 bits hash code
    ## md5: MD5 signature
    # default value is hash
    # since V4.01
    file_signature_method=hash
    
    # namespace for storing file indexes (key-value pairs)
    # this item must be set when check_file_duplicate is true / on
    key_namespace=FastDFS
    
    # set keep_alive to 1 to enable persistent connection with FastDHT servers
    # default value is 0 (short connection)
    keep_alive=0
    
    # you can use "#include filename" (not include double quotes) directive to 
    # load FastDHT server list, when the filename is a relative path such as 
    # pure filename, the base path is the base path of current/this config file.
    # must set FastDHT server list when check_file_duplicate is true / on
    # please see INSTALL of FastDHT for detail
    ##include /home/yuqing/fastdht/conf/fdht_servers.conf
    
    # if log to access log
    # default value is false
    # since V4.00
    use_access_log = false
    
    # if rotate the access log every day
    # default value is false
    # since V4.00
    rotate_access_log = false
    
    # rotate access log time base, time format: Hour:Minute
    # Hour from 0 to 23, Minute from 0 to 59
    # default value is 00:00
    # since V4.00
    access_log_rotate_time=00:00
    
    # if rotate the error log every day
    # default value is false
    # since V4.02
    rotate_error_log = false
    
    # rotate error log time base, time format: Hour:Minute
    # Hour from 0 to 23, Minute from 0 to 59
    # default value is 00:00
    # since V4.02
    error_log_rotate_time=00:00
    
    # rotate access log when the log file exceeds this size
    # 0 means never rotates log file by log file size
    # default value is 0
    # since V4.02
    rotate_access_log_size = 0
    
    # rotate error log when the log file exceeds this size
    # 0 means never rotates log file by log file size
    # default value is 0
    # since V4.02
    rotate_error_log_size = 0
    
    # keep days of the log files
    # 0 means do not delete old log files
    # default value is 0
    log_file_keep_days = 0
    
    # if skip the invalid record when sync file
    # default value is false
    # since V4.02
    file_sync_skip_invalid_record=false
    
    # if use connection pool
    # default value is false
    # since V4.05
    use_connection_pool = false
    
    # connections whose the idle time exceeds this time will be closed
    # unit: second
    # default value is 3600
    # since V4.05
    connection_pool_max_idle_time = 3600
    
    # use the ip address of this storage server if domain_name is empty,
    # else this domain name will ocur in the url redirected by the tracker server
    http.domain_name=
    
    # the port of the web server on this storage server
    http.server_port=8080

    Since the tracker and storage1 are installed on the same server, the Nginx configuration file is:

    nginx.conf:

    user root;
    worker_processes 2;
    error_log /media/disk1/nginx/logs/error.log info;
    pid        /var/run/nginx.pid;
    worker_rlimit_nofile 65535;
    events
    {
     use epoll;
     worker_connections 65535;
    }
    http
    {
     include mime.types;
     default_type  application/octet-stream;
     server_names_hash_bucket_size 128;
     client_header_buffer_size 32k;
     large_client_header_buffers 4 32k;
     client_max_body_size 8m;
     sendfile on;
     tcp_nopush on;
     keepalive_timeout 60;
    
     tcp_nodelay on;
    
     client_body_buffer_size 512k;
     proxy_connect_timeout 5;
     proxy_read_timeout 60;
     proxy_send_timeout 5;
     proxy_buffer_size 16k;
     proxy_buffers 4 64k;
     proxy_busy_buffers_size 128k;
     proxy_temp_file_write_size 128k;
     proxy_temp_path /media/disk1/nginx/proxy_temp;
     proxy_cache_path /usr/local/nginx/conf/proxy_cache levels=1:2 keys_zone=http-cache:100m max_size=3g inactive=30d;
     proxy_cache_bypass $http_secret_header;
    
    
     gzip on;
     gzip_min_length 1k;
     gzip_buffers 4 16k;
     gzip_http_version 1.0;
     gzip_comp_level 2;
     gzip_types text/plain application/x-javascript text/css application/xml;
     gzip_vary on;
     
     log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"' 
                  '"$upstream_cache_status"';
                  
    upstream fdfs_group2 {
           server 10.74.11.121:8080 weight=1 max_fails=5 fail_timeout=30s;
        }
        
     server
     {
     listen 8080;
     server_name localhost;
     index index.html index.htm index.php;
    
    
     access_log /media/disk1/nginx/logs/access.log  main;
    
     location /group1/M00 {
        root /data/fdfs/data;
        ngx_fastdfs_module;
     }
     location /group1/M01 {
        root /opt/file/fdfs/data;
        ngx_fastdfs_module;
     }
     location /group2/M00 {
               proxy_next_upstream http_502 http_504 error timeout invalid_header;
               proxy_cache http-cache;
               proxy_cache_valid 200 304 12h;
               proxy_cache_key $uri$is_args$args;
               proxy_pass http://fdfs_group2;
               expires 30d;
            }
    location /group2/M01 {
               proxy_next_upstream http_502 http_504 error timeout invalid_header;
               proxy_cache http-cache;
               proxy_cache_valid 200 304 12h;
               proxy_cache_key $uri$is_args$args;
               proxy_pass http://fdfs_group2;
               expires 30d;
            }
    location ~ /purge(/.*) {
        allow 127.0.0.1;
        allow 10.74.11.118;
        deny all;
        proxy_cache_purge http-cache $1$is_args$args;
    }
    
    location /NginxStatus  
     {  
     stub_status on;  
     auth_basic "NginxStatus";  
     }  
    
     }
    }
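
    With this configuration, a request for a group2 file that reaches the tracker/storage1 machine on port 8080 is proxied to storage2 (10.74.11.121) through the fdfs_group2 upstream and cached in the http-cache zone. A quick hand check (the file ID below is a placeholder):

    # should return 200, served via the fdfs_group2 upstream
    curl -I http://10.74.11.118:8080/group2/M00/00/00/<some-file-id>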

    storage2 configuration file:

    storage.conf:

    # is this config file disabled
    # false for enabled
    # true for disabled
    disabled=false
    
    # the name of the group this storage server belongs to
    #
    # comment or remove this item for fetching from tracker server,
    # in this case, use_storage_id must set to true in tracker.conf,
    # and storage_ids.conf must be configed correctly.
    group_name=group2
    
    # bind an address of this host
    # empty for bind all addresses of this host
    bind_addr=
    
    # if bind an address of this host when connect to other servers 
    # (this storage server as a client)
    # true for binding the address configed by above parameter: "bind_addr"
    # false for binding any address of this host
    client_bind=true
    
    # the storage server port
    port=23000
    
    # connect timeout in seconds
    # default value is 30s
    connect_timeout=30
    
    # network timeout in seconds
    # default value is 30s
    network_timeout=60
    
    # heart beat interval in seconds
    heart_beat_interval=30
    
    # disk usage report interval in seconds
    stat_report_interval=60
    
    # the base path to store data and log files
    base_path=/opt/fdfs/storage
    
    # max concurrent connections the server supported
    # default value is 256
    # more max_connections means more memory will be used
    max_connections=256
    
    # the buff size to recv / send data
    # this parameter must more than 8KB
    # default value is 64KB
    # since V2.00
    buff_size = 256KB
    
    # accept thread count
    # default value is 1
    # since V4.07
    accept_threads=1
    
    # work thread count, should <= max_connections
    # work thread deal network io
    # default value is 4
    # since V2.00
    work_threads=4
    
    # if disk read / write separated
    ##  false for mixed read and write
    ##  true for separated read and write
    # default value is true
    # since V2.00
    disk_rw_separated = true
    
    # disk reader thread count per store base path
    # for mixed read / write, this parameter can be 0
    # default value is 1
    # since V2.00
    disk_reader_threads = 1
    
    # disk writer thread count per store base path
    # for mixed read / write, this parameter can be 0
    # default value is 1
    # since V2.00
    disk_writer_threads = 1
    
    # when no entry to sync, try read binlog again after X milliseconds
    # must > 0, default value is 200ms
    sync_wait_msec=50
    
    # after sync a file, usleep milliseconds
    # 0 for sync successively (never call usleep)
    sync_interval=0
    
    # storage sync start time of a day, time format: Hour:Minute
    # Hour from 0 to 23, Minute from 0 to 59
    sync_start_time=00:00
    
    # storage sync end time of a day, time format: Hour:Minute
    # Hour from 0 to 23, Minute from 0 to 59
    sync_end_time=23:59
    
    # write to the mark file after sync N files
    # default value is 500
    write_mark_file_freq=500
    
    # path(disk or mount point) count, default value is 1
    store_path_count=2
    
    # store_path#, based 0, if store_path0 not exists, it's value is base_path
    # the paths must be exist
    store_path0=/data/fdfs
    store_path1=/opt/file/fdfs
    
    # subdir_count  * subdir_count directories will be auto created under each 
    # store_path (disk), value can be 1 to 256, default value is 256
    subdir_count_per_path=256
    
    # tracker_server can ocur more than once, and tracker_server format is
    #  "host:port", host can be hostname or ip address
    tracker_server=10.74.11.118:22122
    
    #standard log level as syslog, case insensitive, value list:
    ### emerg for emergency
    ### alert
    ### crit for critical
    ### error
    ### warn for warning
    ### notice
    ### info
    ### debug
    log_level=info
    
    #unix group name to run this program, 
    #not set (empty) means run by the group of current user
    run_by_group=
    
    #unix username to run this program,
    #not set (empty) means run by current user
    run_by_user=
    
    # allow_hosts can ocur more than once, host can be hostname or ip address,
    # "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or
    # host[01-08,20-25].domain.com, for example:
    # allow_hosts=10.0.1.[1-15,20]
    # allow_hosts=host[01-08,20-25].domain.com
    allow_hosts=*
    
    # the mode of the files distributed to the data path
    # 0: round robin(default)
    # 1: random, distributted by hash code
    file_distribute_path_mode=0
    
    # valid when file_distribute_to_path is set to 0 (round robin), 
    # when the written file count reaches this number, then rotate to next path
    # default value is 100
    file_distribute_rotate_count=100
    
    # call fsync to disk when write big file
    # 0: never call fsync
    # other: call fsync when written bytes >= this bytes
    # default value is 0 (never call fsync)
    fsync_after_written_bytes=0
    
    # sync log buff to disk every interval seconds
    # must > 0, default value is 10 seconds
    sync_log_buff_interval=10
    
    # sync binlog buff / cache to disk every interval seconds
    # default value is 60 seconds
    sync_binlog_buff_interval=10
    
    # sync storage stat info to disk every interval seconds
    # default value is 300 seconds
    sync_stat_file_interval=300
    
    # thread stack size, should >= 512KB
    # default value is 512KB
    thread_stack_size=512KB
    
    # the priority as a source server for uploading file.
    # the lower this value, the higher its uploading priority.
    # default value is 10
    upload_priority=10
    
    # the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
    # multi aliases split by comma. empty value means auto set by OS type
    # default values is empty
    if_alias_prefix=
    
    # if check file duplicate, when set to true, use FastDHT to store file indexes
    # 1 or yes: need check
    # 0 or no: do not check
    # default value is 0
    check_file_duplicate=0
    
    # file signature method for check file duplicate
    ## hash: four 32 bits hash code
    ## md5: MD5 signature
    # default value is hash
    # since V4.01
    file_signature_method=hash
    
    # namespace for storing file indexes (key-value pairs)
    # this item must be set when check_file_duplicate is true / on
    key_namespace=FastDFS
    
    # set keep_alive to 1 to enable persistent connection with FastDHT servers
    # default value is 0 (short connection)
    keep_alive=0
    
    # you can use "#include filename" (not include double quotes) directive to 
    # load FastDHT server list, when the filename is a relative path such as 
    # pure filename, the base path is the base path of current/this config file.
    # must set FastDHT server list when check_file_duplicate is true / on
    # please see INSTALL of FastDHT for detail
    ##include /home/yuqing/fastdht/conf/fdht_servers.conf
    
    # if log to access log
    # default value is false
    # since V4.00
    use_access_log = false
    
    # if rotate the access log every day
    # default value is false
    # since V4.00
    rotate_access_log = false
    
    # rotate access log time base, time format: Hour:Minute
    # Hour from 0 to 23, Minute from 0 to 59
    # default value is 00:00
    # since V4.00
    access_log_rotate_time=00:00
    
    # if rotate the error log every day
    # default value is false
    # since V4.02
    rotate_error_log = false
    
    # rotate error log time base, time format: Hour:Minute
    # Hour from 0 to 23, Minute from 0 to 59
    # default value is 00:00
    # since V4.02
    error_log_rotate_time=00:00
    
    # rotate access log when the log file exceeds this size
    # 0 means never rotates log file by log file size
    # default value is 0
    # since V4.02
    rotate_access_log_size = 0
    
    # rotate error log when the log file exceeds this size
    # 0 means never rotates log file by log file size
    # default value is 0
    # since V4.02
    rotate_error_log_size = 0
    
    # keep days of the log files
    # 0 means do not delete old log files
    # default value is 0
    log_file_keep_days = 0
    
    # if skip the invalid record when sync file
    # default value is false
    # since V4.02
    file_sync_skip_invalid_record=false
    
    # if use connection pool
    # default value is false
    # since V4.05
    use_connection_pool = false
    
    # connections whose the idle time exceeds this time will be closed
    # unit: second
    # default value is 3600
    # since V4.05
    connection_pool_max_idle_time = 3600
    
    # use the ip address of this storage server if domain_name is empty,
    # else this domain name will ocur in the url redirected by the tracker server
    http.domain_name=
    
    # the port of the web server on this storage server
    http.server_port=8080

    Nginx configuration file on storage2:

    user root;
    worker_processes 2;
    error_log /media/disk1/nginx/logs/error.log info;
    pid        /var/run/nginx.pid;
    worker_rlimit_nofile 65535;
    events
    {
     use epoll;
     worker_connections 65535;
    }
    http
    {
     include mime.types;
     default_type  application/octet-stream;
     server_names_hash_bucket_size 128;
     client_header_buffer_size 32k;
     large_client_header_buffers 4 32k;
     client_max_body_size 8m;
     sendfile on;
     tcp_nopush on;
     keepalive_timeout 60;
    
     tcp_nodelay on;
    
     client_body_buffer_size 512k;
     proxy_connect_timeout 5;
     proxy_read_timeout 60;
     proxy_send_timeout 5;
     proxy_buffer_size 16k;
     proxy_buffers 4 64k;
     proxy_busy_buffers_size 128k;
     proxy_temp_file_write_size 128k;
     proxy_temp_path /media/disk1/nginx/proxy_temp;
     proxy_cache_path /media/disk1/nginx/proxy_cache levels=1:2 keys_zone=content:20m inactive=1d max_size=100m;
     proxy_cache_bypass $http_secret_header;
    
    
     gzip on;
     gzip_min_length 1k;
     gzip_buffers 4 16k;
     gzip_http_version 1.0;
     gzip_comp_level 2;
     gzip_types text/plain application/x-javascript text/css application/xml;
     gzip_vary on;
     
     log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"' 
                  '"$upstream_cache_status"';
    
     server
     {
     listen 8080;
     server_name localhost;
     index index.html index.htm index.php;
    
    
     access_log /media/disk1/nginx/logs/access.log  main;
    
     location /group2/M00 {
        root /data/fdfs/data;
        ngx_fastdfs_module;
     }
     location /group2/M01 {
        root /opt/file/fdfs/data;
        ngx_fastdfs_module;
     }
    
    location ~ /purge(/.*) {
        allow all;
        proxy_cache_purge content $1$is_args$args;
    }
    
    location /NginxStatus  
     {  
     stub_status on;  
     auth_basic "NginxStatus";  
     }  
    
     }
    }

    IV. Problems encountered and solutions

    1. The link generated by the test upload could not be accessed.
    Cause: in storage.conf I configured two store paths, store_path0 and store_path1, but store_path_count defaults to 1 and I forgot to change it; it has to be set to 2.
    After fixing it, restart the storage and the tracker with fdfs_storaged /etc/fdfs/storage.conf restart and fdfs_trackerd /etc/fdfs/tracker.conf restart respectively.
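
    After the restart, the change can be double-checked with fdfs_monitor, whose per-storage output includes the configured store path count (the exact field names vary slightly between FastDFS versions):

    fdfs_monitor /etc/fdfs/storage.conf | grep -i store_path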

    2. Files stored on storage2 were very slow to open.
    Cause: the proxy in the Nginx configuration on the tracker server was not set up correctly; see the configuration above for the correct setup.
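
    Once the proxy is set up as shown, the log_format main already records $upstream_cache_status, so whether group2 requests are being served from the cache (HIT) or fetched from storage2 every time (MISS) can be read straight from the access log (log path as configured in nginx.conf):

    # the last field shows the proxy-cache status for group2 requests
    grep '/group2/' /media/disk1/nginx/logs/access.log | tail -n 20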











