MySQL: Configuring Replication - Replication Solutions


    16.2. Replication Solutions

    Replication can be used in many different environments for a range of purposes. In this section you will find general notes and advice on using replication for specific solution types.

    For information on using replication in a backup environment, including notes on the setup, backup procedure, and files to back up, see Section 16.2.1, “Using Replication for Backups”.

    For advice and tips on using different storage engines on the master and slaves, see Section 16.2.2, “Using Replication with Different Master and Slave Storage Engines”.

    Using replication as a scale-out solution requires some changes in the logic and operation of applications that use the solution. See Section 16.2.3, “Using Replication for Scale-Out”.

    For performance or data distribution reasons you may want to replicate different databases to different replication slaves. See Section 16.2.4, “Replicating Different Databases to Different Slaves”

    As the number of replication slaves increases, the load on the master can increase (because of the need to replicate the binary log to each slave) and lead to a reduction in the performance of the master. For tips on improving your replication performance, including using a single secondary server as a replication master, see Section 16.2.5, “Improving Replication Performance”.

    For guidance on switching masters, or converting slaves into masters as part of an emergency failover solution, see Section 16.2.6, “Switching Masters During Failover”.

    To secure your replication communication you can encrypt the communication channel by using SSL to exchange data. Step-by-step instructions can be found in Section 16.2.7, “Setting Up Replication Using SSL”.

    16.2.1. Using Replication for Backups

    You can use replication as a backup solution by replicating data from the master to a slave, and then backing up the data on the slave. Because the slave can be paused and shut down without affecting the running operation of the master, you can produce an effective snapshot of 'live' data that would otherwise require a shutdown of the master database.

    How you back up the database will depend on the size of the database and whether you are backing up only the data, or the data and the replication slave state so that you can rebuild the slave in the event of failure. There are therefore two choices:

    If you are using replication as a solution to enable you to back up the data on the master, and the size of your database is not too large, then the mysqldump tool may be suitable. See Section 16.2.1.1, “Backing Up a Slave Using mysqldump”.

    For larger databases, where mysqldump would be impractical or inefficient, you can back up the raw data files instead. Using the raw data files option also means that you can back up the binary and relay logs that will enable you to recreate the slave in the event of a slave failure. For more information, see Section 16.2.1.2, “Backing Up Raw Data from a Slave”.

    Another backup strategy, which can be used for either master or slave servers, is to put the server in a read-only state. The backup is performed against the read-only server, which then is changed back to its usual read/write operational status. See Section 16.2.1.3, “Backing Up a Master or Slave by Making It Read Only”.

    16.2.1.1. Backing Up a Slave Using mysqldump

    Using mysqldump to create a copy of the database enables you to capture all of the data in the database in a format that allows the information to be imported into another instance of MySQL. Because the format of the information is SQL statements, the file can easily be distributed and applied to running servers in the event that you need access to the data in an emergency. However, if the size of your data set is very large, mysqldump may be impractical.

    When using mysqldump you should stop the slave before starting the dump process to ensure that the dump contains a consistent set of data:

    1. Stop the slave from processing requests. You can either stop the slave completely using mysqladmin:

      shell> mysqladmin stop-slave

      Alternatively, you can stop only the replication SQL thread, which stops the processing of the relay log files while still allowing the binary log data to be transferred from the master. Within busy replication environments this may speed up the catch-up process when you start slave processing again:

      shell> mysql -e 'STOP SLAVE SQL_THREAD;'
    2. Run mysqldump to dump your databases. You may either select databases to be dumped, or dump all databases. For more information, see Section 4.5.4, “mysqldump — A Database Backup Program”. For example, to dump all databases:

      shell> mysqldump --all-databases >fulldb.dump
    3. Once the dump has completed, start slave operations again:

      shell> mysqladmin start-slave

    In the preceding example you may want to add login credentials (user name, password) to the commands, and bundle the process up into a script that you can run automatically each day.
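    A minimal sketch of such a script is shown below; the user name, password, and backup directory are placeholders that you would adjust for your environment, and the file should be protected because it contains credentials:

    #!/bin/sh
    # Hypothetical nightly slave backup script (placeholder credentials and paths).
    BACKUP_USER=backup
    BACKUP_PASS=backup_password
    DUMP_DIR=/backups

    # Pause only the SQL thread so that the slave keeps receiving binary log data.
    mysql -u$BACKUP_USER -p$BACKUP_PASS -e 'STOP SLAVE SQL_THREAD;'

    # Dump all databases to a date-stamped file.
    mysqldump -u$BACKUP_USER -p$BACKUP_PASS --all-databases > $DUMP_DIR/fulldb-$(date +%Y%m%d).dump

    # Resume normal slave operation.
    mysql -u$BACKUP_USER -p$BACKUP_PASS -e 'START SLAVE;'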

    If you use this approach, make sure you monitor the slave replication process to ensure that the time taken to run the backup in this way is not affecting the slave's ability to keep up with events from the master. See Section 16.1.4.1, “Checking Replication Status”. If the slave is unable to keep up you may want to add another server and distribute the backup process. For an example of how to configure this scenario, see Section 16.2.4, “Replicating Different Databases to Different Slaves”.

    16.2.1.2. Backing Up Raw Data from a Slave

    To guarantee the integrity of the files that are copied, back up the raw data files on your MySQL replication slave while the slave server is shut down. If the MySQL server is still running, background tasks (particularly for storage engines with background processes, such as InnoDB) may still be updating the database files. With InnoDB, these problems should be resolved during crash recovery, but since the slave server can be shut down during the backup process without affecting the execution of the master, it makes sense to take advantage of this facility.

    To shut down the server and back up the files:

    1. Shut down the slave MySQL server:

      shell> mysqladmin shutdown
    2. Copy the data files. You can use any suitable copying or archive utility, including cp, tar or WinZip:

      shell> tar cf /tmp/dbbackup.tar ./data
    3. Start up the mysqld process again:

      shell> mysqld_safe &

      Under Windows:

      C:\> "C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld"

    Normally you should back up the entire data folder for the slave MySQL server. If you want to be able to restore the data and operate as a slave (for example, in the event of failure of the slave), then when you back up the slave's data, you should back up the slave status files, master.info and relay-log.info, along with the relay log files. These files are needed to resume replication after you restore the slave's data.

    If you lose the relay logs but still have the relay-log.info file, you can check it to determine how far the SQL thread has executed in the master binary logs. Then you can use CHANGE MASTER TO with the MASTER_LOG_FILE and MASTER_LOG_POS options to tell the slave to re-read the binary logs from that point. Of course, this requires that the binary logs still exist on the master server.
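    For example, suppose that the recovered relay-log.info contains the following four lines: the relay log name, the relay log position, the master binary log name, and the master binary log position (all file names and positions here are hypothetical):

    shell> cat relay-log.info
    ./slave-relay-bin.000004
    11504
    master-bin.000002
    64376

    mysql> CHANGE MASTER TO
        ->     MASTER_LOG_FILE='master-bin.000002',
        ->     MASTER_LOG_POS=64376;

    The last two values show how far the SQL thread had executed in the master's binary log, so the CHANGE MASTER TO statement tells the slave to resume reading from that point.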

    If your slave is subject to replicating LOAD DATA INFILE statements, you should also back up any SQL_LOAD-* files that exist in the directory that the slave uses for this purpose. The slave needs these files to resume replication of any interrupted LOAD DATA INFILE operations. The directory location is specified using the --slave-load-tmpdir option. If this option is not specified, the directory location is the value of the tmpdir system variable.

    16.2.1.3. Backing Up a Master or Slave by Making It Read Only

    It is possible to back up either master or slave servers in a replication setup by acquiring a global read lock and manipulating the read_only system variable to change the read-only state of the server to be backed up:

    1. Make the server read-only, so that it processes only retrievals and blocks updates

    2. Perform the backup

    3. Change the server back to its normal read/write state

    The following instructions describe how to do this for a master server and for a slave server.

    These instructions require MySQL 5.1.15 or higher. For earlier versions, setting read_only did not block while table locks or outstanding transactions were pending, so that some data changes could still occur during the backup.

    Note

    The instructions in this section place the server to be backed up in a state that is safe for backup methods that get the data from the server, such as mysqldump (see Section 4.5.4, “mysqldump — A Database Backup Program”). You should not attempt to use these instructions to make a binary backup by copying files directly because the server may still have modified data cached in memory and not flushed to disk.

    For both scenarios discussed here, suppose that you have the following replication setup:

    • A master server M1

    • A slave server S1 that has M1 as its master

    • A client C1 connected to M1

    • A client C2 connected to S1

    Scenario 1: Backup with a Read-Only Master

    Put the master M1 in a read-only state by executing these statements on it:

    FLUSH TABLES WITH READ LOCK;
    SET GLOBAL read_only = ON;
    

    While M1 is in a read-only state, the following properties are true:

    • Requests for updates sent by C1 to M1 will fail because the server is in read-only mode

    • Requests for retrievals sent by C1 to M1 will succeed

    • Making a backup on M1 is safe

    • Making a backup on S1 is not safe: this server is still running, and might be processing the binary log or update requests coming from client C2 (S1 might not be in a read-only state)

    While M1 is read only, perform the backup. For example, you can use mysqldump.
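    For example, you might run mysqldump from a second connection while the first connection continues to hold the global read lock (closing the session that issued FLUSH TABLES WITH READ LOCK releases the lock); the host name and account here are placeholders:

    shell> mysqldump -h M1 -u backup -p --all-databases > m1-backup.dump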

    After the backup on M1 has been done, restore M1 to its normal operational state by executing these statements:

    SET GLOBAL read_only = OFF;
    UNLOCK TABLES;
    

    Although performing the backup on M1 is safe (as far as the backup is concerned), it is not optimal because clients of M1 are blocked from executing updates.

    This strategy also applies to backing up a single server in a non-replication setting.

    Scenario 2: Backup with a Read-Only Slave

    Put the slave S1 in a read-only state by executing these statements on it:

    FLUSH TABLES WITH READ LOCK;
    SET GLOBAL read_only = ON;
    

    While S1 is in a read-only state, the following properties are true:

    • The master M1 will continue to operate

    • Making a backup on the master is not safe

    • The slave S1 stops applying updates from its relay log, because the global read lock blocks the replication SQL thread

    • Making a backup on the slave S1 is safe

    These properties provide the basis for a popular backup scenario: Having one slave busy performing a backup for a while is not a problem because it does not affect the entire network, and the system is still running during the backup. (For example, clients can still perform updates on the master server.)

    While S1 is read only, perform the backup.

    After the backup on S1 has been done, restore S1 to its normal operational state by executing these statements:

    SET GLOBAL read_only = OFF;
    UNLOCK TABLES;
    

    After the slave is restored to normal operation, it again synchronizes to the master by catching up with any outstanding updates in the binary log from the master.

    In either scenario, the statements to acquire the global read lock and manipulate the read_only variable are performed on the server to be backed up and do not propagate to any slaves of that server.

    16.2.2. Using Replication with Different Master and Slave Storage Engines

    The replication process does not care if the source table on the master and the replicated table on the slave use different engine types. In fact, the system variables storage_engine and table_type are not replicated.

    This provides a number of advantages in the replication process in that you can take advantage of different engine types for different replication scenarios. For example, in a typical scaleout scenario (see Section 16.2.3, “Using Replication for Scale-Out”), you want to use InnoDB tables on the master to take advantage of the transactional functionality, but use MyISAM on the slaves where transaction support is not required because the data is only read. When using replication in a data logging environment you may want to use the Archive storage engine on the slave.

    Setting up different engines on the master and slave depends on how you set up the initial replication process:

    • If you used mysqldump to create the database snapshot on your master, you could edit the dump text to change the engine type used on each table; a sed sketch follows this list.

      Another alternative for mysqldump is to disable engine types that you do not want to use on the slave before using the dump to build the data on the slave. For example, you can add the --skip-innodb option on your slave to disable the InnoDB engine. If a specific engine does not exist, MySQL will use the default engine type, usually MyISAM. If you want to disable further engines in this way, you may want to consider building a special binary to be used on the slave that only supports the engines you want.

    • If you are using raw data files for the population of the slave, you will be unable to change the initial table format. Instead, use ALTER TABLE to change the table types after the slave has been started.

    • For new master/slave replication setups where there are currently no tables on the master, avoid specifying the engine type when creating new tables.
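    For the first of these options, a simple (if blunt) sketch is to rewrite the ENGINE clauses in the dump with sed before loading it on the slave; the file names and engine names below are only examples, and you should check that no table data contains the string being replaced:

    shell> mysqldump --all-databases > master.dump
    shell> sed -e 's/ENGINE=InnoDB/ENGINE=MyISAM/g' master.dump > slave.dump
    shell> mysql < slave.dump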

    If you are already running a replication solution and want to convert your existing tables to another engine type, follow these steps:

    1. Stop the slave from running replication updates:

      mysql> STOP SLAVE;

      This will enable you to change engine types without interruptions.

    2. Execute an ALTER TABLE ... Engine='enginetype' for each table where you want to change the engine type.

    3. Start the slave replication process again:

      mysql> START SLAVE;

    Although the storage_engine and table_type variables are not replicated, be aware that CREATE TABLE and ALTER TABLE statements that include the engine specification will be correctly replicated to the slave. For example, if you have a CSV table and you execute:

    mysql> ALTER TABLE csvtable Engine='MyISAM';

    The above statement will be replicated to the slave and the engine type on the slave will be converted to MyISAM, even if you have previously changed the table type on the slave to an engine other than CSV. If you want to retain engine differences on the master and slave, you should be careful to use the storage_engine variable on the master when creating a new table. For example, instead of:

    mysql> CREATE TABLE tablea (columna int) Engine=MyISAM;

    Use this format:

    mysql> SET storage_engine=MyISAM;
    mysql> CREATE TABLE tablea (columna int);

    When replicated, the storage_engine variable will be ignored, and the CREATE TABLE statement will be executed with the slave's default engine type.

    16.2.3. Using Replication for Scale-Out

    You can use replication as a scale-out solution; that is, you can use it to spread the load of database queries across multiple database servers, within some reasonable limitations.

    Because replication works from the distribution of one master to one or more slaves, using replication for scaleout works best in an environment where you have a high number of reads and low number of writes/updates. Most websites fit into this category, where users are browsing the website, reading articles, posts, or viewing products. Updates only occur during session management, or when making a purchase or adding a comment/message to a forum.

    Replication in this situation enables you to distribute the reads over the replication slaves, while still allowing your web servers to communicate with the replication master when a write is required. You can see a sample replication layout for this scenario in Figure 16.1, “Using replication to improve the performance during scaleout”.

    Figure 16.1. Using replication to improve the performance during scaleout

    If the part of your code that is responsible for database access has been properly abstracted/modularized, converting it to run with a replicated setup should be very smooth and easy. Change the implementation of your database access to send all writes to the master, and to send reads to either the master or a slave. If your code does not have this level of abstraction, setting up a replicated system gives you the opportunity and motivation to clean it up. Start by creating a wrapper library or module that implements the following functions:

    • safe_writer_connect()

    • safe_reader_connect()

    • safe_reader_statement()

    • safe_writer_statement()

    safe_ in each function name means that the function takes care of handling all error conditions. You can use different names for the functions. The important thing is to have a unified interface for connecting for reads, connecting for writes, doing a read, and doing a write.

    Then convert your client code to use the wrapper library. This may be a painful and scary process at first, but it pays off in the long run. All applications that use the approach just described are able to take advantage of a master/slave configuration, even one involving multiple slaves. The code is much easier to maintain, and adding troubleshooting options is trivial. You need modify only one or two functions; for example, to log how long each statement took, or which statement among those issued gave you an error.

    If you have written a lot of code, you may want to automate the conversion task by using the replace utility that comes with standard MySQL distributions, or write your own conversion script. Ideally, your code uses consistent programming style conventions. If not, then you are probably better off rewriting it anyway, or at least going through and manually regularizing it to use a consistent style.

    16.2.4. Replicating Different Databases to Different Slaves

    There may be situations where you have a single master and want to replicate different databases to different slaves. For example, you may want to distribute different sales data to different departments to help spread the load during data analysis. A sample of this layout is shown in Figure 16.2, “Using replication to replicate databases to separate replication slaves”.

    Figure 16.2. Using replication to replicate databases to separate replication slaves

    You can achieve this separation by configuring the master and slaves as normal, and then limiting the binary log statements that each slave processes by using the --replicate-wild-do-table configuration option on each slave.

    Important

    You should not use --replicate-do-db for this purpose when using statement-based replication, since statement-based replication causes this option's effects to vary according to the database that is currently selected. This applies to mixed-format replication as well, since it allows some updates to be replicated using the statement-based format.

    However, it is possible to use --replicate-do-db for this purpose if you are using row-based replication only, since in this case the currently selected database has no effect on the option's operation.

    For example, to support the separation as shown in Figure 16.2, “Using replication to replicate databases to separate replication slaves”, you should configure each replication slave as follows, before executing START SLAVE:

    • Replication slave 1 should use --replicate-wild-do-table=databaseA.%.

    • Replication slave 2 should use --replicate-wild-do-table=databaseB.%.

    • Replication slave 3 should use --replicate-wild-do-table=databaseC.%.
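    In each slave's configuration file, this corresponds to a single line in the [mysqld] section. For example, Replication slave 1 might be configured as follows (the server-id value is only an example; the other slaves differ in the server-id and the table pattern):

    [mysqld]
    server-id=2
    replicate-wild-do-table=databaseA.%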

    If you have data that needs to be synchronized to the slaves before replication starts, you have a number of choices:

    • Synchronize all the data to each slave, and delete the databases, tables, or both that you do not want to keep.

    • Use mysqldump to create a separate dump file for each database and load the appropriate dump file on each slave; a sketch follows this list.

    • Use a raw data file dump and include only the specific files and databases that you need for each slave.

      Note

      This does not work with InnoDB databases unless you use innodb_file_per_table.
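    For the mysqldump option, the per-database dump files might be created on the master and loaded on the matching slaves roughly as follows; the host names are placeholders, and --databases is used so that each dump file also creates its database:

    shell> mysqldump --databases databaseA > databaseA.dump
    shell> mysqldump --databases databaseB > databaseB.dump
    shell> mysqldump --databases databaseC > databaseC.dump

    shell> mysql -h slave1 < databaseA.dump
    shell> mysql -h slave2 < databaseB.dump
    shell> mysql -h slave3 < databaseC.dump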

    Each slave in this configuration receives the entire binary log from the master, but executes only those events from the binary log that apply to the databases and tables included by the --replicate-wild-do-table option in effect on that slave.

    16.2.5. Improving Replication Performance

    As the number of slaves connecting to a master increases, the load, although minimal, also increases, as each slave uses up a client connection to the master. Also, as each slave must receive a full copy of the master binary log, the network load on the master may also increase and start to create a bottleneck.

    If you are using a large number of slaves connected to one master, and that master is also busy processing requests (for example, as part of a scaleout solution), then you may want to improve the performance of the replication process.

    One way to improve the performance of the replication process is to create a deeper replication structure that enables the master to replicate to only one slave, and for the remaining slaves to connect to this primary slave for their individual replication requirements. A sample of this structure is shown in Figure 16.3, “Using an additional replication host to improve performance”.

    Figure 16.3. Using an additional replication host to improve performance

    For this to work, you must configure the MySQL instances as follows:

    • Master 1 is the primary master where all changes and updates are written to the database. Binary logging should be enabled on this machine.

    • Master 2 is the slave to Master 1; it provides the replication functionality to the remainder of the slaves in the replication structure, and it is the only machine allowed to connect to Master 1. Master 2 also has binary logging enabled and uses the --log-slave-updates option, so that replication events received from Master 1 are written to its own binary log and can then be replicated to the true slaves.

    • Slave 1, Slave 2, and Slave 3 act as slaves to Master 2, and replicate the information from Master 2, which is really the data logged on Master 1.
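    As a sketch, the relevant parts of Master 2's configuration file might look like the following; the server-id value and the log base name are only examples:

    [mysqld]
    server-id=2
    log-bin=mysql-bin
    log-slave-updates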

    The above solution reduces the client load and the network interface load on the primary master, which should improve the overall performance of the primary master when used as a direct database solution.

    If your slaves are having trouble keeping up with the replication process on the master then there are a number of options available:

    • If possible, you should put the relay logs and the data files on different physical drives. To do this, use the --relay-log option to specify the location of the relay log; a configuration sketch follows this list.

    • If the slaves are significantly slower than the master, then you may want to divide up the responsibility for replicating different databases to different slaves. See Section 16.2.4, “Replicating Different Databases to Different Slaves”.

    • If your master makes use of transactions and you are not concerned about transaction support on your slaves, then use MyISAM or another non-transactional engine. See Section 16.2.2, “Using Replication with Different Master and Slave Storage Engines”.

    • If your slaves are not acting as masters, and you have a potential solution in place to ensure that you can bring up a master in the event of failure, then you can switch off --log-slave-updates. This prevents 'dumb' slaves from also logging events they have executed into their own binary log.
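    For the first of these tips, the relay log location is set in the slave's configuration file; the path shown here is only an example:

    [mysqld]
    relay-log=/drive2/mysql/relay-bin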

    16.2.6. Switching Masters During Failover

    There is currently no official solution for providing failover between master and slaves in the event of a failure. With the currently available features, you would have to set up a master and a slave (or several slaves), write a script that monitors the master to check whether it is up, and then instruct your applications and the slaves to change master in case of failure.

    Remember that you can tell a slave to change its master at any time, using the CHANGE MASTER TO statement. The slave will not check whether the databases on the master are compatible with the slave; it will just start executing events from the specified log and position on the new master. In a failover situation, all the servers in the group are probably executing the same events from the same binary log, so changing the source of the events should not affect the database structure or integrity, provided you are careful.

    Run your slaves with the --log-bin option and without --log-slave-updates. In this way, a slave is ready to become a master as soon as you issue STOP SLAVE and RESET MASTER on it, and CHANGE MASTER TO on the other slaves. For example, assume that you have the structure shown in Figure 16.4, “Redundancy using replication, initial structure”.

    Figure 16.4. Redundancy using replication, initial structure

    In this diagram, the MySQL Master holds the master database, the MySQL Slave computers are replication slaves, and the Web Client machines are issuing database reads and writes. Web clients that issue only reads (and would normally be connected to the slaves) are not shown, as they do not need to switch to a new server in the event of failure. For a more detailed example of a read/write scaleout replication structure, see Section 16.2.3, “Using Replication for Scale-Out”.

    Each MySQL Slave (Slave 1, Slave 2, and Slave 3) is a slave running with --log-bin and without --log-slave-updates. Because updates received by a slave from the master are not logged in the binary log unless --log-slave-updates is specified, the binary log on each slave is empty initially. If for some reason MySQL Master becomes unavailable, you can pick one of the slaves to become the new master. For example, if you pick Slave 1, all Web Clients should be redirected to Slave 1, which will log updates to its binary log. Slave 2 and Slave 3 should then replicate from Slave 1.

    The reason for running the slaves without --log-slave-updates is to prevent them from receiving updates twice in case you cause one of the slaves to become the new master. Suppose that Slave 1 has --log-slave-updates enabled. Then it will write the updates that it receives from Master to its own binary log. When Slave 2 changes from Master to Slave 1 as its master, it may receive updates from Slave 1 that it has already received from Master.

    Make sure that all slaves have processed any statements in their relay log. On each slave, issue STOP SLAVE IO_THREAD, then check the output of SHOW PROCESSLIST until you see Has read all relay log. When this is true for all slaves, they can be reconfigured to the new setup. On the slave Slave 1 being promoted to become the master, issue STOP SLAVE and RESET MASTER.

    On the other slaves Slave 2 and Slave 3, use STOP SLAVE and CHANGE MASTER TO MASTER_HOST='Slave1' (where 'Slave1' represents the real host name of Slave 1). To use CHANGE MASTER TO, add all information about how to connect to Slave 1 from Slave 2 or Slave 3 (user, password, port). In CHANGE MASTER TO, there is no need to specify the name of Slave 1's binary log or binary log position to read from: We know it is the first binary log and position 4, which are the defaults for CHANGE MASTER TO. Finally, use START SLAVE on Slave 2 and Slave 3.
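    Putting these steps together, the statements might look like the following; the replication user name and password are placeholders and must match an account with the REPLICATION SLAVE privilege on Slave 1:

    mysql> STOP SLAVE IO_THREAD;  # on every slave; wait for "Has read all relay log"

    mysql> STOP SLAVE;            # on Slave 1, the slave being promoted
    mysql> RESET MASTER;

    mysql> STOP SLAVE;            # on Slave 2 and Slave 3
    mysql> CHANGE MASTER TO
        ->     MASTER_HOST='Slave1',
        ->     MASTER_USER='repl',
        ->     MASTER_PASSWORD='slavepass';
    mysql> START SLAVE;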

    Once the new replication setup is in place, you will need to instruct each Web Client to direct its statements to Slave 1. From that point on, all update statements sent by the Web Clients to Slave 1 are written to the binary log of Slave 1, which then contains every update statement sent to Slave 1 since Master died.

    The resulting server structure is shown in Figure 16.5, “Redundancy using replication, after master failure”.

    Figure 16.5. Redundancy using replication, after master failure

    When Master is up again, you must issue on it the same CHANGE MASTER TO as that issued on Slave 2 and Slave 3, so that Master becomes a slave of Slave 1 and picks up the Web Client writes that it missed while it was down.

    To make Master a master again (because it is the most powerful machine, for example), use the preceding procedure as if Slave 1 was unavailable and Master was to be the new master. During this procedure, do not forget to run RESET MASTER on Master before making Slave 1, Slave 2, and Slave 3 slaves of Master. Otherwise, they may pick up old Web Client writes from before the point at which Master became unavailable.

    Note that there is no synchronization between the different slaves to a master. Some slaves might be ahead of others. This means that the concept outlined in the previous example might not work. In practice, however, the relay logs of different slaves will most likely not be far behind the master, so it would work, anyway (but there is no guarantee).

    A good way to keep your applications informed as to the location of the master is to have a dynamic DNS entry for the master. With BIND, you can use nsupdate to update your DNS dynamically.
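    For example, a minimal nsupdate session might look like the following; the record name, TTL, and IP address are hypothetical, and your DNS server must be configured to accept dynamic updates:

    shell> nsupdate
    > update delete master.example.com A
    > update add master.example.com 60 A 192.0.2.11
    > send
    > quit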

    16.2.7. Setting Up Replication Using SSL

    Setting up replication using an SSL connection is similar to setting up a server and client using SSL. You will need to obtain (or create) a suitable security certificate that you can use on the master, and a similar certificate (from the same certificate authority) on each slave.

    To use SSL for encrypting the transfer of the binary log required during replication you must first set up the master to support SSL network connections. If the master does not support SSL connections (because it has not been compiled or configured for SSL), then replication through an SSL connection will not be possible.

    For more information on setting up a server and client for SSL connectivity, see Section 5.5.7.2, “Using SSL Connections”.

    To enable SSL on the master you will need to create or obtain suitable certificates and then add the following configuration options to the master's configuration within the mysqld section:

    ssl-ca=cacert.pem
    ssl-cert=server-cert.pem
    ssl-key=server-key.pem

    Note

    You should use the full path to specify the location of your certificate files.

    The options are as follows:

    • ssl-ca identifies the Certificate Authority (CA) certificate.

    • ssl-cert identifies the server public key. This can be sent to the client and authenticated against the CA certificate that it has.

    • ssl-key identifies the server private key.

    On the slave, you have two options available for setting the SSL information. You can either add the slave's certificates to the client section of the slave configuration file, or you can explicitly specify the SSL information using the CHANGE MASTER TO statement.

    Using the former option, add the following lines to the client section of the slave configuration file:

    [client]
    ssl-ca=cacert.pem
    ssl-cert=client-cert.pem
    ssl-key=client-key.pem

    Restart the slave server, using the --skip-slave-start option to prevent the slave from connecting to the master. Use CHANGE MASTER TO to specify the master configuration, using the MASTER_SSL option to enable SSL connectivity:

    mysql> CHANGE MASTER TO
        ->     MASTER_HOST='master_hostname',
        ->     MASTER_USER='replicate',
        ->     MASTER_PASSWORD='password',
        ->     MASTER_SSL=1;

    To specify the SSL certificate options during the CHANGE MASTER TO command, append the SSL options:

    mysql> CHANGE MASTER TO
        ->     MASTER_HOST='master_hostname',
        ->     MASTER_USER='replicate',
        ->     MASTER_PASSWORD='password',
        ->     MASTER_SSL=1,
        ->     MASTER_SSL_CA='ca_file_name',
        ->     MASTER_SSL_CAPATH='ca_directory_name',
        ->     MASTER_SSL_CERT='cert_file_name',
        ->     MASTER_SSL_KEY='key_file_name';

    Once the master information has been updated, start the slave replication process:

    mysql> START SLAVE;

    You can use SHOW SLAVE STATUS to confirm that the slave is running and that it is using an SSL connection to the master.
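    In the SHOW SLAVE STATUS output, the Master_SSL_Allowed field should show Yes, and the Master_SSL_CA_File, Master_SSL_Cert, and Master_SSL_Key fields should show the certificate files you configured. The abridged, illustrative output below assumes the certificate file names used earlier in this section:

    mysql> SHOW SLAVE STATUS\G
    ...
        Master_SSL_Allowed: Yes
        Master_SSL_CA_File: cacert.pem
           Master_SSL_Cert: client-cert.pem
            Master_SSL_Key: client-key.pem
    ...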

    For more information on the CHANGE MASTER TO syntax, see Section 12.6.2.1, “CHANGE MASTER TO Syntax”.

    If you want to enforce the use of SSL connections during replication, then create a user with the REPLICATION SLAVE privilege and use the REQUIRE SSL clause for that user. For example:

    mysql> GRANT REPLICATION SLAVE ON *.*
        -> TO 'repl'@'%.mydomain.com' IDENTIFIED BY 'slavepass' REQUIRE SSL;