• Redis 3.0.4 AOF Persistence


      1. AOF persistence

        1.1 Redis provides two persistence mechanisms: RDB persistence and AOF persistence.

          1. RDB persistence: saves a point-in-time snapshot of the in-memory database state to disk, protecting against accidental data loss. Its advantages are a small file and fast loading on restart; the drawback is that on restart any data written after the snapshot was taken is lost.

          2. AOF persistence: records the database state by saving the write commands executed by the Redis server. Compared with RDB it loses less data, but loading is slower.

            The AOF flush-to-disk behaviour is controlled by the appendfsync setting (see the config sketch after this list):

              1. always: write the entire contents of the aof_buf buffer to the AOF file and sync it every time (at most the command currently being written can be lost);

              2. everysec: write the entire contents of the aof_buf buffer to the AOF file, and if more than 1s has passed since the AOF file was last synced, sync it again; the sync is performed by a dedicated background thread (at most 1s of data can be lost);

              3. no: write the entire contents of the aof_buf buffer to the AOF file but never sync it explicitly; the operating system decides when to sync (on Linux this is typically about every 30s), so any data written since the last flush can be lost.
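
            For reference, here is a minimal redis.conf sketch that enables AOF and picks a flush policy. The directive names are standard Redis configuration options; the values are only illustrative:

    # Illustrative redis.conf excerpt (values are examples, not recommendations)
    appendonly yes                    # enable AOF persistence
    appendfilename "appendonly.aof"   # name of the AOF file

    # One of: always / everysec / no (the three policies described above)
    appendfsync everysec

    # If set to yes, skip fsync while a BGSAVE or BGREWRITEAOF child is running
    no-appendfsync-on-rewrite no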

      2. Implementing AOF persistence

          AOF persistence is implemented in three steps: command appending, file writing, and file syncing.

        2.1 Command appending

          When AOF persistence is enabled, after executing a write command the server appends that command, in the Redis protocol format, to the end of the aof_buf buffer in the server state (an example of the resulting protocol text follows the code):

    // Command appending
    void feedAppendOnlyFile(struct redisCommand *cmd, int dictid, robj **argv, int argc) {
        sds buf = sdsempty();
        robj *tmpargv[3];
        //check whether this command targets the same database as the previously appended one
        /* The DB this command was targeting is not the same as the last command
         * we appended. To issue a SELECT command is needed. */
        if (dictid != server.aof_selected_db) {
            char seldb[64];
    
            snprintf(seldb,sizeof(seldb),"%d",dictid);
            buf = sdscatprintf(buf,"*2\r\n$6\r\nSELECT\r\n$%lu\r\n%s\r\n",
                (unsigned long)strlen(seldb),seldb);
            server.aof_selected_db = dictid;
        }
        //check whether this is an expire-related command (translated to PEXPIREAT below)
        if (cmd->proc == expireCommand || cmd->proc == pexpireCommand ||
            cmd->proc == expireatCommand) {
            /* Translate EXPIRE/PEXPIRE/EXPIREAT into PEXPIREAT */
            buf = catAppendOnlyExpireAtCommand(buf,cmd,argv[1],argv[2]);
        } else if (cmd->proc == setexCommand || cmd->proc == psetexCommand) {
            /* Translate SETEX/PSETEX to SET and PEXPIREAT */
            tmpargv[0] = createStringObject("SET",3);
            tmpargv[1] = argv[1];
            tmpargv[2] = argv[3];
            buf = catAppendOnlyGenericCommand(buf,3,tmpargv);
            decrRefCount(tmpargv[0]);
            buf = catAppendOnlyExpireAtCommand(buf,cmd,argv[1],argv[2]);
        } else {
            /* All the other commands don't need translation or need the
             * same translation already operated in the command vector
             * for the replication itself. */
            buf = catAppendOnlyGenericCommand(buf,argc,argv);
        }
    
        /* Append to the AOF buffer. This will be flushed on disk just before
         * of re-entering the event loop, so before the client will get a
         * positive reply about the operation performed. */
        if (server.aof_state == REDIS_AOF_ON)
            server.aof_buf = sdscatlen(server.aof_buf,buf,sdslen(buf));
    //if a background rewrite is in progress, also append the data to the AOF rewrite buffer
        /* If a background append only file rewriting is in progress we want to
         * accumulate the differences between the child DB and the current one
         * in a buffer, so that when the child process will do its work we
         * can append the differences to the new append only file. */
        if (server.aof_child_pid != -1)
            aofRewriteBufferAppend((unsigned char*)buf,sdslen(buf));
    
        sdsfree(buf);
    }   
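
          To make the protocol format concrete, here is an illustrative example of the text that feedAppendOnlyFile appends for a SELECT 0 followed by SET foo bar (the key and value are made up; in the real buffer each line ends with \r\n):

    *2
    $6
    SELECT
    $1
    0
    *3
    $3
    SET
    $3
    foo
    $3
    bar

          Commands such as EXPIRE or SETEX are not stored verbatim: as the code above shows, they are translated into SET plus PEXPIREAT, so that replaying the file later reproduces the same absolute expiration time.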

        2.2 Writing and syncing the AOF file

          The Redis server process is an event loop. The file events in this loop are responsible for receiving command requests from clients and sending replies back to them, while the time events are responsible for running functions that need to execute periodically, such as serverCron.

          To improve write efficiency, modern operating systems usually do not write data to disk immediately when the user calls write(): the data is kept in an in-memory file buffer, and only when the buffer fills up or a time limit expires is it actually flushed to disk.

          This improves efficiency but introduces a safety problem: if the machine goes down, any written data still sitting in the memory buffer is lost.

          For this reason the operating system provides two sync functions, fsync and fdatasync, which force it to flush the buffered data to disk immediately, guaranteeing the durability of the written data.
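
          As a minimal illustration (a standalone sketch, not Redis code; the file name is hypothetical), write(2) returns as soon as the data reaches the kernel's buffer, while fsync(2)/fdatasync(2) block until it is actually on stable storage:

    /* Standalone sketch of write() vs fsync(); "demo.aof" is a made-up file. */
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("demo.aof", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd == -1) return 1;

        const char *buf = "*1\r\n$4\r\nPING\r\n";
        /* write() only hands the data to the OS page cache; a crash at this
         * point can still lose it. */
        if (write(fd, buf, strlen(buf)) != (ssize_t)strlen(buf)) return 1;

        /* fsync() (or fdatasync(), which skips non-essential metadata) blocks
         * until the data has reached the disk. */
        if (fsync(fd) == -1) return 1;

        close(fd);
        return 0;
    }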

    /* Write the append only file buffer on disk.
     *
     * Since we are required to write the AOF before replying to the client,
     * and the only way the client socket can get a write is entering when the
     * the event loop, we accumulate all the AOF writes in a memory
     * buffer and write it on disk using this function just before entering
     * the event loop again.
     *
     * About the 'force' argument:
     *
     * When the fsync policy is set to 'everysec' we may delay the flush if there
     * is still an fsync() going on in the background thread, since for instance
     * on Linux write(2) will be blocked by the background fsync anyway.
     * When this happens we remember that there is some aof buffer to be
     * flushed ASAP, and will try to do that in the serverCron() function.
     *
     * However if force is set to 1 we'll write regardless of the background
     * fsync. */
    #define AOF_WRITE_LOG_ERROR_RATE 30 /* Seconds between errors logging. */
    void flushAppendOnlyFile(int force) {
        ssize_t nwritten;
        int sync_in_progress = 0;
        mstime_t latency;
    
        if (sdslen(server.aof_buf) == 0) return;
        //check whether another fsync is already in progress
        if (server.aof_fsync == AOF_FSYNC_EVERYSEC)
            sync_in_progress = bioPendingJobsOfType(REDIS_BIO_AOF_FSYNC) != 0;
        //When the fsync policy is everysec and force == 0:
        //if an fsync is already in progress in the background, then
        //1. if aof_flush_postponed_start == 0 this is the first postponed write, so record
        //   the current time in aof_flush_postponed_start, postpone the write and return;
        //2. otherwise, if less than 2s have passed since the first postponement, return;
        //3. if neither 1 nor 2 applies, perform the write anyway and increment aof_delayed_fsync.
    
        if (server.aof_fsync == AOF_FSYNC_EVERYSEC && !force) {
            /* With this append fsync policy we do background fsyncing.
             * If the fsync is still in progress we can try to delay
             * the write for a couple of seconds. */
            if (sync_in_progress) {
                if (server.aof_flush_postponed_start == 0) {
                    /* No previous write postponing, remember that we are
                     * postponing the flush and return. */
                    server.aof_flush_postponed_start = server.unixtime;
                    return;
                } else if (server.unixtime - server.aof_flush_postponed_start < 2) {
                    /* We were already waiting for fsync to finish, but for less
                     * than two seconds this is still ok. Postpone again. */
                    return;
                }
                /* Otherwise fall trough, and go write since we can't wait
                 * over two seconds. */
                server.aof_delayed_fsync++;
                redisLog(REDIS_NOTICE,"Asynchronous AOF fsync is taking too long (disk is busy?). Writing the AOF buffer without waiting for fsync to complete, this may slow down Redis.");
            }
        }
        /* We want to perform a single write. This should be guaranteed atomic
         * at least if the filesystem we are writing is a real physical one.
         * While this will save us against the server being killed I don't think
         * there is much to do about the whole server stopping for power problems
         * or alike */
    
        latencyStartMonitor(latency);
        //write aof_buf to aof_fd
        nwritten = write(server.aof_fd,server.aof_buf,sdslen(server.aof_buf));
        latencyEndMonitor(latency);
        /* We want to capture different events for delayed writes:
         * when the delay happens with a pending fsync, or with a saving child
         * active, and when the above two conditions are missing.
         * We also use an additional event name to save all samples which is
         * useful for graphing / monitoring purposes. */
        if (sync_in_progress) {
            latencyAddSampleIfNeeded("aof-write-pending-fsync",latency);
        } else if (server.aof_child_pid != -1 || server.rdb_child_pid != -1) {
            latencyAddSampleIfNeeded("aof-write-active-child",latency);
        } else {
            latencyAddSampleIfNeeded("aof-write-alone",latency);
        }
        latencyAddSampleIfNeeded("aof-write",latency);
    
        /* We performed the write so reset the postponed flush sentinel to zero. */
        server.aof_flush_postponed_start = 0;
        //the number of bytes written is not equal to the length of aof_buf
        if (nwritten != (signed)sdslen(server.aof_buf)) {
            static time_t last_write_error_log = 0;
            int can_log = 0;
    
            /* Limit logging rate to 1 line per AOF_WRITE_LOG_ERROR_RATE seconds. */
            if ((server.unixtime - last_write_error_log) > AOF_WRITE_LOG_ERROR_RATE) {
                can_log = 1;
                last_write_error_log = server.unixtime;
            }
    
            /* Log the AOF write error and record the error code. */
        //if the write failed outright (-1), log the error
            if (nwritten == -1) {
                if (can_log) {
                    redisLog(REDIS_WARNING,"Error writing to the AOF file: %s",
                        strerror(errno));
                    server.aof_last_write_errno = errno;
                }
            } else {
            //a partial write occurred before the error
                if (can_log) {
                    redisLog(REDIS_WARNING,"Short write while writing to "
                                           "the AOF file: (nwritten=%lld, "
                                           "expected=%lld)",
                                           (long long)nwritten,
                                           (long long)sdslen(server.aof_buf));
                }
            //truncate the file to remove the partially appended data, restoring it to its state before the write
                if (ftruncate(server.aof_fd, server.aof_current_size) == -1) {
                    if (can_log) {
                        redisLog(REDIS_WARNING, "Could not remove short write "
                                 "from the append-only file.  Redis may refuse "
                                 "to load the AOF the next time it starts.  "
                                 "ftruncate: %s", strerror(errno));
                    }
                } else {
                    /* If the ftruncate() succeeded we can set nwritten to
                     * -1 since there is no longer partial data into the AOF. */
                    nwritten = -1;
                }
                server.aof_last_write_errno = ENOSPC;
            }
    
            /* Handle the AOF write error. */
        //if the fsync policy is always, this write cannot be recovered from, because we have already told the client that its write is synced to disk, so exit directly
            if (server.aof_fsync == AOF_FSYNC_ALWAYS) {
                /* We can't recover when the fsync policy is ALWAYS since the
                 * reply for the client is already in the output buffers, and we
                 * have the contract with the user that on acknowledged write data
                 * is synced on disk. */
                redisLog(REDIS_WARNING,"Can't recover from AOF write error when the AOF fsync policy is 'always'. Exiting...");
                exit(1);
            } else {
                /* Recover from failed write leaving data into the buffer. However
                 * set an error to stop accepting writes as long as the error
                 * condition is not cleared. */
            //record the status of the write operation as an error
                server.aof_last_write_status = REDIS_ERR;
    
                /* Trim the sds buffer if there was a partial write, and there
                 * was no way to undo it with ftruncate(2). */
                if (nwritten > 0) {
                    server.aof_current_size += nwritten;
                    sdsrange(server.aof_buf,nwritten,-1);
                }
                return; /* We'll try again on the next call... */
            }
        } else {
            /* Successful write(2). If AOF was in error state, restore the
             * OK state and log the event. */
            if (server.aof_last_write_status == REDIS_ERR) {
                redisLog(REDIS_WARNING,
                    "AOF write error looks solved, Redis can write again.");
                server.aof_last_write_status = REDIS_OK;
            }
        }
        server.aof_current_size += nwritten;
    
        /* Re-use AOF buffer when it is small enough. The maximum comes from the
         * arena size of 4k minus some overhead (but is otherwise arbitrary). */
        //if aof_buf is currently smaller than 4k, just clear it and reuse it; otherwise free it and allocate a new one
        if ((sdslen(server.aof_buf)+sdsavail(server.aof_buf)) < 4000) {
            sdsclear(server.aof_buf);
        } else {
            sdsfree(server.aof_buf);
            server.aof_buf = sdsempty();
        }
    
        /* Don't fsync if no-appendfsync-on-rewrite is set to yes and there are
         * children doing I/O in the background. */
        //if no-appendfsync-on-rewrite is set to yes and a rewrite or save child is running, skip the fsync
        if (server.aof_no_fsync_on_rewrite &&
            (server.aof_child_pid != -1 || server.rdb_child_pid != -1))
                return;
    
        /* Perform the fsync if needed. */
        
        if (server.aof_fsync == AOF_FSYNC_ALWAYS) {
            /* aof_fsync is defined as fdatasync() for Linux in order to avoid
             * flushing metadata. */
            latencyStartMonitor(latency);
            aof_fsync(server.aof_fd); /* Let's try to get this data on the disk */
            latencyEndMonitor(latency);
            latencyAddSampleIfNeeded("aof-fsync-always",latency);
            server.aof_last_fsync = server.unixtime;
        } else if ((server.aof_fsync == AOF_FSYNC_EVERYSEC &&
                    server.unixtime > server.aof_last_fsync)) {
            if (!sync_in_progress) aof_background_fsync(server.aof_fd);
            server.aof_last_fsync = server.unixtime;
        }
    }
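
          Note that under the everysec policy the main thread never calls fsync() itself: aof_background_fsync() queues the fsync as a job for the background I/O (bio) thread, which is also why the function above only checks bioPendingJobsOfType(REDIS_BIO_AOF_FSYNC) to see whether a sync is still running. In Redis 3.0 this helper is essentially a one-line wrapper; the snippet below is quoted from memory, so treat it as a sketch:

    /* Starts a background task that performs fsync() against the specified
     * file descriptor (the one of the AOF file) in another thread. */
    void aof_background_fsync(int fd) {
        bioCreateBackgroundJob(REDIS_BIO_AOF_FSYNC,(void*)(long)fd,NULL,NULL);
    }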

        2.3 Loading the AOF file and restoring data

          Because the AOF file contains all the write commands needed to rebuild the database state, the server only has to read the file and re-execute the write commands saved in it to restore the database to the state it was in before shutdown.

          

    /* Replay the append log file. On success REDIS_OK is returned. On non fatal
     * error (the append only file is zero-length) REDIS_ERR is returned. On
     * fatal error an error message is logged and the program exists. */
    //Loading the AOF file
    int loadAppendOnlyFile(char *filename) {
        struct redisClient *fakeClient;
        FILE *fp = fopen(filename,"r");
        struct redis_stat sb;
        int old_aof_state = server.aof_state;
        long loops = 0;
        off_t valid_up_to = 0; /* Offset of the latest well-formed command loaded. */
    
        if (fp && redis_fstat(fileno(fp),&sb) != -1 && sb.st_size == 0) {
            server.aof_current_size = 0;
            fclose(fp);
            return REDIS_ERR;
        }
    
        if (fp == NULL) {
            redisLog(REDIS_WARNING,"Fatal error: can't open the append log file for reading: %s",strerror(errno));
            exit(1);
        }
    
        /* Temporarily disable AOF, to prevent EXEC from feeding a MULTI
         * to the same file we're about to read. */
        server.aof_state = REDIS_AOF_OFF;
        //create a fake client
        fakeClient = createFakeClient();
        // set up the server's loading state
        startLoading(fp);
    
        while(1) {
            int argc, j;
            unsigned long len;
            robj **argv;
            char buf[128];
            sds argsds;
            struct redisCommand *cmd;
    
            /* Serve the clients from time to time */
            if (!(loops++ % 1000)) {
                // ftello(fp) returns the current offset into the file being loaded;
                // update the server's loading state with the current progress
                loadingProgress(ftello(fp));
                // even though the server is blocked while loading, it can still handle
                // requests: clients always receive a -LOADING error in this state
                processEventsWhileBlocked();
            }
            // read one line of the file into buf, stopping at "\r\n"
            if (fgets(buf,sizeof(buf),fp) == NULL) {
                if (feof(fp))
                    break;
                else
                    goto readerr;
            }
            if (buf[0] != '*') goto fmterr;
            if (buf[1] == '\0') goto readerr;
            argc = atoi(buf+1);
            if (argc < 1) goto fmterr;
    
            argv = zmalloc(sizeof(robj*)*argc);
            fakeClient->argc = argc;
            fakeClient->argv = argv;
    
            for (j = 0; j < argc; j++) {
                if (fgets(buf,sizeof(buf),fp) == NULL) {
                    fakeClient->argc = j; /* Free up to j-1. */
                    freeFakeClientArgv(fakeClient);
                    goto readerr;
                }
                if (buf[0] != '$') goto fmterr;
                len = strtol(buf+1,NULL,10);
                argsds = sdsnewlen(NULL,len);
                if (len && fread(argsds,len,1,fp) == 0) {
                    sdsfree(argsds);
                    fakeClient->argc = j; /* Free up to j-1. */
                    freeFakeClientArgv(fakeClient);
                    goto readerr;
                }
                argv[j] = createObject(REDIS_STRING,argsds);
                if (fread(buf,2,1,fp) == 0) {
                    fakeClient->argc = j+1; /* Free up to j. */
                    freeFakeClientArgv(fakeClient);
                    goto readerr; /* discard CRLF */
                }
            }
    
            /* Command lookup */
            // look up the command
            cmd = lookupCommand(argv[0]->ptr);
            if (!cmd) {
                redisLog(REDIS_WARNING,"Unknown command '%s' reading the append only file", (char*)argv[0]->ptr);
                exit(1);
            }
    
            /* Run the command in the context of a fake client */
            cmd->proc(fakeClient);
    
            /* The fake client should not have a reply */
            redisAssert(fakeClient->bufpos == 0 && listLength(fakeClient->reply) == 0);
            /* The fake client should never get blocked */
            redisAssert((fakeClient->flags & REDIS_BLOCKED) == 0);
    
            /* Clean up. Command code may have changed argv/argc so we use the
             * argv/argc of the client instead of the local variables. */
            freeFakeClientArgv(fakeClient);
            if (server.aof_load_truncated) valid_up_to = ftello(fp);
        }
    
        /* This point can only be reached when EOF is reached without errors.
         * If the client is in the middle of a MULTI/EXEC, log error and quit. */
        if (fakeClient->flags & REDIS_MULTI) goto uxeof;
    
    loaded_ok: /* DB loaded, cleanup and return REDIS_OK to the caller. */
        fclose(fp);
        freeFakeClient(fakeClient);
        server.aof_state = old_aof_state;
        stopLoading();
        aofUpdateCurrentSize();
        server.aof_rewrite_base_size = server.aof_current_size;
        return REDIS_OK;
    
    readerr: /* Read error. If feof(fp) is true, fall through to unexpected EOF. */
        if (!feof(fp)) {
            redisLog(REDIS_WARNING,"Unrecoverable error reading the append only file: %s", strerror(errno));
            exit(1);
        }
    
    uxeof: /* Unexpected AOF end of file. */
        if (server.aof_load_truncated) {
            redisLog(REDIS_WARNING,"!!! Warning: short read while loading the AOF file !!!");
            redisLog(REDIS_WARNING,"!!! Truncating the AOF at offset %llu !!!",
                (unsigned long long) valid_up_to);
            if (valid_up_to == -1 || truncate(filename,valid_up_to) == -1) {
                if (valid_up_to == -1) {
                    redisLog(REDIS_WARNING,"Last valid command offset is invalid");
                } else {
                    redisLog(REDIS_WARNING,"Error truncating the AOF file: %s",
                        strerror(errno));
                }
            } else {
                /* Make sure the AOF file descriptor points to the end of the
                 * file after the truncate call. */
                if (server.aof_fd != -1 && lseek(server.aof_fd,0,SEEK_END) == -1) {
                    redisLog(REDIS_WARNING,"Can't seek the end of the AOF file: %s",
                        strerror(errno));
                } else {
                    redisLog(REDIS_WARNING,
                        "AOF loaded anyway because aof-load-truncated is enabled");
                    goto loaded_ok;
                }
            }
        }
        redisLog(REDIS_WARNING,"Unexpected end of file reading the append only file. You can: 1) Make a backup of your AOF file, then use ./redis-check-aof --fix <filename>. 2) Alternatively you can set the 'aof-load-truncated' configuration option to yes and restart the server.");
        exit(1);
    
    fmterr: /* Format error. */
        redisLog(REDIS_WARNING,"Bad file format reading the append only file: make a backup of your AOF file, then use ./redis-check-aof --fix <filename>");
        exit(1);
    }

          The steps Redis takes to read the AOF file and restore the database state:

        1. Create a fake client with no network connection (createFakeClient). Redis commands can only be executed in a client context, but the commands used when loading the AOF file come directly from the AOF file rather than from a network connection, so the server uses a fake client without a network connection to execute the write commands saved in the file. The effect of executing a command through this fake client is exactly the same as executing it through a client with a network connection.
        2. Parse and read one write command from the AOF file.
        3. Execute the write command that was read, using the fake client.
        4. Repeat steps 2 and 3 until every write command in the AOF file has been processed.

      3. AOF rewrite

        Because AOF persistence records the database state by saving every executed write command, the AOF file accumulates more and more content as the server keeps running and its size keeps growing. If left unchecked, an oversized AOF file can affect the Redis server or even the whole host, and the larger the file is, the longer it takes to restore the data from it.
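
        A rewrite can be triggered manually with the BGREWRITEAOF command, or automatically based on how much the file has grown since the last rewrite. An illustrative redis.conf excerpt using the standard auto-rewrite directives (values are only examples):

    # Trigger an automatic BGREWRITEAOF once the AOF has grown by 100%
    # relative to its size after the last rewrite, but only when it is
    # at least 64mb in size.
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb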

    /* Write a sequence of commands able to fully rebuild the dataset into
     * "filename". Used both by REWRITEAOF and BGREWRITEAOF.
     *
     * In order to minimize the number of commands needed in the rewritten
     * log Redis uses variadic commands when possible, such as RPUSH, SADD
     * and ZADD. However at max REDIS_AOF_REWRITE_ITEMS_PER_CMD items per time
     * are inserted using a single command. */
    int rewriteAppendOnlyFile(char *filename) {
        dictIterator *di = NULL;
        dictEntry *de;
        rio aof;
        FILE *fp;
        char tmpfile[256];
        int j;
        long long now = mstime();
        char byte;
        size_t processed = 0;
    
        /* Note that we have to use a different temp name here compared to the
         * one used by rewriteAppendOnlyFileBackground() function. */
        //write to a temporary file first
        snprintf(tmpfile,256,"temp-rewriteaof-%d.aof", (int) getpid());
        fp = fopen(tmpfile,"w");
        if (!fp) {
            redisLog(REDIS_WARNING, "Opening the temp file for AOF rewrite in rewriteAppendOnlyFile(): %s", strerror(errno));
            return REDIS_ERR;
        }
        //buffer for the writes made during the rewrite, sent from the parent process to the child
        server.aof_child_diff = sdsempty();
        rioInitWithFile(&aof,fp);
        if (server.aof_rewrite_incremental_fsync)
            rioSetAutoSync(&aof,REDIS_AOF_AUTOSYNC_BYTES);
        for (j = 0; j < server.dbnum; j++) {
        char selectcmd[] = "*2\r\n$6\r\nSELECT\r\n";
            redisDb *db = server.db+j;
            dict *d = db->dict;
            if (dictSize(d) == 0) continue;
        di = dictGetSafeIterator(d); //safe dictionary iterator
            if (!di) {
                fclose(fp);
                return REDIS_ERR;
            }
    
            /* SELECT the new DB */
            if (rioWrite(&aof,selectcmd,sizeof(selectcmd)-1) == 0) goto werr;
            if (rioWriteBulkLongLong(&aof,j) == 0) goto werr;
    
            /* Iterate this DB writing every entry */
            while((de = dictNext(di)) != NULL) {
                sds keystr;
                robj key, *o;
                long long expiretime;
    
                keystr = dictGetKey(de);
                o = dictGetVal(de);
                initStaticStringObject(key,keystr);
    
                expiretime = getExpire(db,&key);
    
                /* If this key is already expired skip it */
                if (expiretime != -1 && expiretime < now) continue;
    
                /* Save the key and associated value */
                //emit the key and value according to the object's type
                if (o->type == REDIS_STRING) {
                    /* Emit a SET command */
                char cmd[]="*3\r\n$3\r\nSET\r\n";
                    if (rioWrite(&aof,cmd,sizeof(cmd)-1) == 0) goto werr;
                    /* Key and value */
                    if (rioWriteBulkObject(&aof,&key) == 0) goto werr;
                    if (rioWriteBulkObject(&aof,o) == 0) goto werr;
                } else if (o->type == REDIS_LIST) {
                    if (rewriteListObject(&aof,&key,o) == 0) goto werr;
                } else if (o->type == REDIS_SET) {
                    if (rewriteSetObject(&aof,&key,o) == 0) goto werr;
                } else if (o->type == REDIS_ZSET) {
                    if (rewriteSortedSetObject(&aof,&key,o) == 0) goto werr;
                } else if (o->type == REDIS_HASH) {
                    if (rewriteHashObject(&aof,&key,o) == 0) goto werr;
                } else {
                    redisPanic("Unknown object type");
                }
                /* Save the expire time */
                //emit the expire time if the key has one
                if (expiretime != -1) {
                char cmd[]="*3\r\n$9\r\nPEXPIREAT\r\n";
                    if (rioWrite(&aof,cmd,sizeof(cmd)-1) == 0) goto werr;
                    if (rioWriteBulkObject(&aof,&key) == 0) goto werr;
                    if (rioWriteBulkLongLong(&aof,expiretime) == 0) goto werr;
                }
                /* Read some diff from the parent process from time to time. */
            //every time the rewrite has produced another 10KB, read the new data the
            //server (parent process) has buffered from the pipe into server.aof_child_diff
            if (aof.processed_bytes > processed+1024*10) {
                processed = aof.processed_bytes;
                aofReadDiffFromParent();
            }
            }
            dictReleaseIterator(di);
            di = NULL;
        }
    
        /* Do an initial slow fsync here while the parent is still sending
         * data, in order to make the next final fsync faster. */
        //the parent is still sending data; flush what we have so far to disk so the final fsync is faster
        if (fflush(fp) == EOF) goto werr;
        if (fsync(fileno(fp)) == -1) goto werr;
    
        /* Read again a few times to get more data from the parent.
         * We can't read forever (the server may receive data from clients
         * faster than it is able to send data to the child), so we try to read
         * some more data in a loop as soon as there is a good chance more data
         * will come. If it looks like we are wasting time, we abort (this
         * happens after 20 ms without new data). */
        //keep reading data from the parent; give up once no new data has arrived for 20 consecutive 1ms waits (or after 1s in total)
        int nodata = 0;
        mstime_t start = mstime();
        while(mstime()-start < 1000 && nodata < 20) {
            if (aeWait(server.aof_pipe_read_data_from_parent, AE_READABLE, 1) <= 0)
            {
                nodata++;
                continue;
            }
            nodata = 0; /* Start counting from zero, we stop on N *contiguous*
                           timeouts. */
        aofReadDiffFromParent();  //read diff data from the parent
        }
        //send '!' to the parent so it stops sending diff data to the child
        /* Ask the master to stop sending diffs. */
        if (write(server.aof_pipe_write_ack_to_parent,"!",1) != 1) goto werr;
        //wait for the parent to acknowledge with '!'
        if (anetNonBlock(NULL,server.aof_pipe_read_ack_from_parent) != ANET_OK)
            goto werr;
        /* We read the ACK from the server using a 10 seconds timeout. Normally
         * it should reply ASAP, but just in case we lose its reply, we are sure
         * the child will eventually get terminated. */
    
        if (syncRead(server.aof_pipe_read_ack_from_parent,&byte,1,5000) != 1 ||
            byte != '!') goto werr;
        redisLog(REDIS_NOTICE,"Parent agreed to stop sending diffs. Finalizing AOF...");
    
        /* Read the final diff if any. */
        //read any remaining data left in the pipe
        aofReadDiffFromParent();
    
        /* Write the received diff to the file. */
        redisLog(REDIS_NOTICE,
            "Concatenating %.2f MB of AOF diff received from parent.",
            (double) sdslen(server.aof_child_diff) / (1024*1024));
        if (rioWrite(&aof,server.aof_child_diff,sdslen(server.aof_child_diff)) == 0)
            goto werr;
    
        /* Make sure data will not remain on the OS's output buffers */
        if (fflush(fp) == EOF) goto werr;
        if (fsync(fileno(fp)) == -1) goto werr;
        if (fclose(fp) == EOF) goto werr;
    
        /* Use RENAME to make sure the DB file is changed atomically only
         * if the generate DB file is ok. */
        if (rename(tmpfile,filename) == -1) {
            redisLog(REDIS_WARNING,"Error moving temp append only file on the final destination: %s", strerror(errno));
            unlink(tmpfile);
            return REDIS_ERR;
        }
        redisLog(REDIS_NOTICE,"SYNC append only file rewrite performed");
        return REDIS_OK;
    
    werr:
        redisLog(REDIS_WARNING,"Write error writing append only file on disk: %s", strerror(errno));
        fclose(fp);
        unlink(tmpfile);
        if (di) dictReleaseIterator(di);
        return REDIS_ERR;
    }          
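
          The rewriteListObject/rewriteSetObject/rewriteSortedSetObject/rewriteHashObject helpers called above keep the rewritten file small by emitting variadic commands (RPUSH, SADD, ZADD, HMSET) with at most REDIS_AOF_REWRITE_ITEMS_PER_CMD elements per command. A simplified, self-contained sketch of that batching idea follows (hypothetical helper, not the actual Redis code):

    /* Hypothetical sketch: emit the elements of a list as RPUSH commands in
     * the same RESP format used by the AOF, with at most ITEMS_PER_CMD
     * elements per command (standing in for REDIS_AOF_REWRITE_ITEMS_PER_CMD). */
    #include <stdio.h>
    #include <string.h>

    #define ITEMS_PER_CMD 64

    static void emit_bulk(FILE *out, const char *s) {
        fprintf(out, "$%zu\r\n%s\r\n", strlen(s), s);
    }

    static void rewrite_list(FILE *out, const char *key,
                             const char **elems, size_t n) {
        size_t i = 0;
        while (i < n) {
            size_t batch = (n - i < ITEMS_PER_CMD) ? n - i : ITEMS_PER_CMD;
            /* Multi-bulk header: RPUSH + key + batch elements. */
            fprintf(out, "*%zu\r\n", batch + 2);
            emit_bulk(out, "RPUSH");
            emit_bulk(out, key);
            for (size_t j = 0; j < batch; j++) emit_bulk(out, elems[i + j]);
            i += batch;
        }
    }

    int main(void) {
        const char *elems[] = {"a", "b", "c"};
        rewrite_list(stdout, "mylist", elems, sizeof(elems)/sizeof(elems[0]));
        return 0;
    }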

          
