In the following example, we use a sandbox deployment with AdminAPI to deploy an InnoDB Cluster together with MySQL Router.
Deploying and using local sandbox instances of MySQL lets you test functionality locally before deploying to production servers. AdminAPI has built-in functionality for creating sandbox instances that are pre-configured to work with InnoDB Cluster, InnoDB ClusterSet, and InnoDB ReplicaSet in locally deployed scenarios.
This example walks through installing the required components, creating a three-instance InnoDB Cluster, bootstrapping and starting MySQL Router, and testing the cluster's high availability.
Note: Sandbox instances are only suitable for deploying and running on your local machine for testing purposes.
Install the following components:
- MySQL Server: for more information, see Installing MySQL.
- MySQL Shell: for more information, see Installing MySQL Shell.
- MySQL Router: for more information, see Installing MySQL Router.
To provide tolerance for one failure, create an InnoDB Cluster with three instances. In this example, we use three sandbox instances running on the same machine. In a real-world setup, those three instances would run on different hosts on the network.
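For reference, Group Replication's fault tolerance is majority-based: a group of n members tolerates floor((n - 1) / 2) member failures, so three members tolerate one failure and five members tolerate two.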
- To start MySQL Shell, issue:
> mysqlsh
- To create and start the MySQL sandbox instances, use the dba.deploySandboxInstance() function, which is part of AdminAPI. Issue the following three statements in MySQL Shell and enter the root password for each instance:
mysql-js> dba.deploySandboxInstance(3310)
mysql-js> dba.deploySandboxInstance(3320)
mysql-js> dba.deploySandboxInstance(3330)
Note: Use the same root password for all instances.
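If you prefer to script the deployment rather than type each password interactively, dba.deploySandboxInstance() also accepts an options dictionary with a password option. A minimal sketch, assuming a throwaway password that is only acceptable for local test sandboxes:
mysql-js> dba.deploySandboxInstance(3310, {password: 'sandbox-test-pwd'})
mysql-js> dba.deploySandboxInstance(3320, {password: 'sandbox-test-pwd'})
mysql-js> dba.deploySandboxInstance(3330, {password: 'sandbox-test-pwd'})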
To create the InnoDB Cluster, complete the following steps:
- Connect to the MySQL instance that you want to be the primary instance in the InnoDB Cluster by issuing:
mysql-js> shell.connect('root@localhost:3310')
- Issue the dba.createCluster() command to create the cluster, and use the assigned variable cluster to hold the returned value:
mysql-js> cluster = dba.createCluster('devCluster')
This command outputs:
A new InnoDB cluster will be created on instance 'localhost:3310'.

Validating instance configuration at localhost:3310...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3310

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:33101'.
Use the localAddress option to override.

Creating InnoDB cluster 'devCluster' on '127.0.0.1:3310'...

Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to one server failure.

<Cluster:devCluster>
- Verify that the creation was successful by using the cluster.status() function with the assigned variable cluster:
mysql-js> cluster.status()
The following status is output:
{
    "clusterName": "devCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3310",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3310"
}
- The cluster is up and running but is not yet tolerant to a failure. Add another MySQL Server instance to the cluster by using the <Cluster>.addInstance() function:
mysql-js> cluster.addInstance('root@localhost:3320')

NOTE: The target instance '127.0.0.1:3320' has not been pre-provisioned (GTID set is empty).
The Shell is unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic clone provisioning,
which will completely overwrite the state of '127.0.0.1:3320' with a physical snapshot from an existing
cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster
were done with GTIDs enabled, there are no purged transactions and the new instance contains the same
GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option
to 'incremental'.

Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):
- Select a recovery method from the prompt. The options are:
Clone: clones the instance that you are adding to the primary cluster, deleting any transactions the instance contains. The MySQL Clone plugin is installed automatically. Choose the Clone option if you are adding an empty instance (one that has not processed any transactions) or an instance that contains transactions you do not want to keep.
Incremental recovery: recovers all transactions processed by the cluster to the joining instance using asynchronous replication. Incremental recovery is appropriate if you are sure that all updates ever processed by the cluster were done with global transaction IDs (GTIDs) enabled, there are no purged transactions, and the new instance contains the same GTID set as the cluster or a subset of it.
In this example, select C for Clone:
Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C
Validating instance configuration at localhost:3320...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3320

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:33201'.
Use the localAddress option to override.

A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3320 is being cloned from 127.0.0.1:3310
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 127.0.0.1:3320 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3320 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 72.61 MB transferred in about 1 second (~72.61 MB/s)

State recovery already finished for '127.0.0.1:3320'

The instance '127.0.0.1:3320' was successfully added to the cluster.
- Add the third instance you created and again select C for the Clone recovery method (a non-interactive alternative is sketched after this step):
mysql-js> cluster.addInstance('root@localhost:3330')
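If you want to avoid the interactive recovery prompt, for example when scripting the setup, cluster.addInstance() also accepts a recoveryMethod option. A minimal sketch of the same step in non-interactive form:
mysql-js> cluster.addInstance('root@localhost:3330', {recoveryMethod: 'clone'})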
- Check the status of the cluster by issuing:
mysql-js> cluster.status()
This outputs the following:
{ "clusterName": "devCluster", "defaultReplicaSet": { "name": "default", "primary": "127.0.0.1:3310", "ssl": "REQUIRED", "status": "OK", "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", "topology": { "127.0.0.1:3310": { "address": "127.0.0.1:3310", "memberRole": "PRIMARY", "mode": "R/W", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE", "version": "8.0.28" }, "127.0.0.1:3320": { "address": "127.0.0.1:3320", "memberRole": "SECONDARY", "mode": "R/O", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE", "version": "8.0.28" }, "127.0.0.1:3330": { "address": "127.0.0.1:3330", "memberRole": "SECONDARY", "mode": "R/O", "readReplicas": {}, "replicationLag": null, "role": "HA", "status": "ONLINE", "version": "8.0.28" } }, "topologyMode": "Single-Primary" }, "groupInformationSourceMember": "127.0.0.1:3310" } The setup of the InnoDB Cluster was successful!
The cluster can now tolerate up to one failure. Quit MySQL Shell by issuing:
\q
Now that the MySQL InnoDB Cluster is set up, test its high availability. For this, use MySQL Router. If one instance fails, MySQL Router automatically updates its routing configuration and ensures that new connections are routed to the remaining instances.
Before MySQL Router can perform any routing operations, make it aware of the new InnoDB Cluster. To do this, use the --bootstrap option and point MySQL Router to the cluster's current R/W MySQL Server instance (the primary instance). Use the -d option to store the Router's configuration in a dedicated folder.
- Open a terminal in your home directory:
- On Linux systems, issue:
[demo-user@localhost]$> mysqlrouter --bootstrap root@localhost:3310 -d mysqlrouter
- On Windows systems, issue:
C:\Users\demo-user> mysqlrouter --bootstrap root@localhost:3310 -d mysql-router
MySQL Router then prints the TCP/IP ports it will use for routing connections. For details, see Deploying MySQL Router.
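As an illustration, a successful bootstrap typically ends with a summary similar to the following; the exact wording and port numbers depend on the MySQL Router version and bootstrap options, so treat this as an assumed example rather than verbatim output:
## MySQL Classic protocol
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447
## MySQL X protocol
- Read/Write Connections: localhost:6448
- Read/Only Connections: localhost:6449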
- After MySQL Router has been configured successfully, start it in a background thread:
- On Windows systems, use the start /B command and point the Router to the configuration file generated with the --bootstrap option:
C:\> start /B mysqlrouter -c %HOMEPATH%\mysql-router\mysqlrouter.conf
- Alternatively, in Windows PowerShell, call the start script in the folder created earlier:
\mysqlrouter\start.ps1
- On Linux systems using systemd, issue:
sudo systemctl start mysqlrouter.service
- Alternatively, on Linux systems, call the start script in the mysqlrouter folder created earlier from the shell:
/mysqlrouter/start.sh
Now that the InnoDB Cluster and MySQL Router are running, test the cluster setup. Instead of connecting directly to one of the MySQL Server instances, connect through MySQL Router.
- Issue the following connection command:
> mysqlsh root@localhost:6446
Provide the root password to connect to the InnoDB Cluster.
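As an optional check, you can also connect to the Router's read-only port, which is 6447 in a default bootstrap (an assumption here; use the port printed during bootstrap), and confirm that it routes to one of the secondary instances:
> mysqlsh root@localhost:6447
mysql-js> \sql
mysql-sql> SELECT @@port;
The returned port should be 3320 or 3330 rather than the primary's 3310.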
- Check the status of the InnoDB Cluster by creating a variable cluster and assigning it the value of the dba.getCluster() operation:
mysql-js> cluster = dba.getCluster()
mysql-js> cluster.status()
- Switch to SQL mode:
mysql-js> \sql
- Query the port on which the instance is running by issuing:
mysql-sql> SELECT @@port;
+--------+
| @@port |
+--------+
|   3310 |
+--------+
1 row in set (0.0007 sec)
- Switch back to JavaScript mode:
mysql-sql> \js
- Stop the MySQL Server instance by using the dba.killSandboxInstance() function:
mysql-js> dba.killSandboxInstance(3310)
Killing MySQL instance...

Instance localhost:3310 successfully killed.
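dba.killSandboxInstance() deliberately simulates an unexpected halt of the server process. If you want a clean shutdown of a sandbox instead, AdminAPI also provides dba.stopSandboxInstance(), which performs a graceful stop (depending on the Shell version it may prompt for the instance's root password):
mysql-js> dba.stopSandboxInstance(3310)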
- Check that MySQL Router routes traffic correctly by running the SELECT @@port command against the instance that was just killed and checking the result:
- Switch to SQL mode:
mysql-js> \sql
- Check the MySQL port:
mysql-sql> SELECT @@port;
- An error is returned:
ERROR: 2013 (HY000): Lost connection to MySQL server during query
This error means that the instance that was running on port 3310 is no longer running.
- Check the port again:
mysql-sql> SELECT @@port;
+--------+
| @@port |
+--------+
|   3320 |
+--------+
This output shows that the instance running on port 3320 has been promoted to be the new Read/Write primary instance.
- Return to JavaScript mode and check the cluster status:
mysql-js> cluster.status()
{
    "clusterName": "devCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3320",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active.",
        "topology": {
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "memberRole": "SECONDARY",
                "mode": "n/a",
                "readReplicas": {},
                "role": "HA",
                "shellConnectError": "MySQL Error 2003: Could not open connection to '127.0.0.1:3310': Can't connect to MySQL server on '127.0.0.1:3310' (10061)",
                "status": "(MISSING)"
            },
            "127.0.0.1:3320": {
                "address": "127.0.0.1:3320",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "127.0.0.1:3330": {
                "address": "127.0.0.1:3330",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3320"
}
The MySQL Server instance that was formerly running on port 3310 is MISSING.
- Restart this instance by issuing the dba.startSandboxInstance() operation with the port number:
mysql-js> dba.startSandboxInstance(3310)
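In a sandbox setup the restarted instance normally rejoins the group on its own. If it does not reach ONLINE after a short while, you can re-add it explicitly with cluster.rejoinInstance(); this is shown as a fallback, not a required step:
mysql-js> cluster.rejoinInstance('root@localhost:3310')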
- Checking the status of the cluster shows that the instance has been restored as active in the cluster, but as a SECONDARY member:
mysql-js> cluster.status()
{
    "clusterName": "devCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3320",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "127.0.0.1:3320": {
                "address": "127.0.0.1:3320",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "127.0.0.1:3330": {
                "address": "127.0.0.1:3330",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3320"
}
All instances are back online, and the cluster can once again tolerate a single failure.
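When you have finished experimenting, you can tear the sandbox environment down again. A minimal cleanup sketch, assuming you no longer need the sandbox data (stop each instance before deleting it, and stop MySQL Router and remove its bootstrap folder separately):
mysql-js> dba.stopSandboxInstance(3310)
mysql-js> dba.stopSandboxInstance(3320)
mysql-js> dba.stopSandboxInstance(3330)
mysql-js> dba.deleteSandboxInstance(3310)
mysql-js> dba.deleteSandboxInstance(3320)
mysql-js> dba.deleteSandboxInstance(3330)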