RabbitMQ Clustering

Clustering Guide

A RabbitMQ broker is a logical grouping of one or several Erlang nodes, each running the RabbitMQ application and sharing users, virtual hosts, queues, exchanges, etc. Sometimes we refer to the collection of nodes as a cluster.

All data/state required for the operation of a RabbitMQ broker is replicated across all nodes, for reliability and scaling, with full ACID properties. An exception to this is message queues, which by default reside on the node that created them, though they are visible and reachable from all nodes. To replicate queues across nodes in a cluster, see the documentation on high availability (note that you will need a working cluster first).
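
For example, queue mirroring is enabled via a policy; a minimal sketch in the spirit of the high availability guide, with the policy name ha-all and the queue-name pattern chosen here for illustration:

rabbit1$ rabbitmqctl set_policy ha-all "^ha\." '{"ha-mode":"all"}'

Queues whose names begin with "ha." would then be mirrored across all nodes in the cluster.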

RabbitMQ clustering does not tolerate network partitions well, so it should not be used over a WAN. The shovel or federation plugins are better solutions for connecting brokers across a WAN.

The composition of a cluster can be altered dynamically. Every RabbitMQ broker starts out running on a single node. These nodes can be joined into clusters, and subsequently turned back into individual brokers again.

RabbitMQ brokers tolerate the failure of individual nodes. Nodes can be started and stopped at will.

A node can be a disk node or a RAM node. (Note: disk and disc are used interchangeably. Configuration syntax and status messages normally use disc.) In most cases you want all your nodes to be disk nodes; RAM nodes are a special case that can be used to improve the performance of large clusters: see RAM nodes below.

Clustering transcript

The following is a transcript of setting up and manipulating a RabbitMQ cluster across three machines: rabbit1, rabbit2 and rabbit3.

We assume that the user is logged into all three machines, that RabbitMQ has been installed on the machines, and that the rabbitmq-server and rabbitmqctl scripts are in the user's PATH.

Erlang cookie

Erlang nodes use a cookie to determine whether they are allowed to communicate with each other - for two nodes to be able to communicate they must have the same cookie. The cookie is just a string of alphanumeric characters. It can be as long or short as you like.

Erlang will automatically create a random cookie file when the RabbitMQ server starts up. The easiest way to proceed is to allow one node to create the file, and then copy it to all the other nodes in the cluster.

On Unix systems, the cookie is typically located in /var/lib/rabbitmq/.erlang.cookie or $HOME/.erlang.cookie.

On Windows, the locations are C:\Users\Current User\.erlang.cookie (%HOMEDRIVE% + %HOMEPATH%\.erlang.cookie) or C:\Documents and Settings\Current User\.erlang.cookie, and C:\Windows\.erlang.cookie for the RabbitMQ Windows service. If the Windows service is used, the cookie should be placed in both locations.

As an alternative, you can insert the option "-setcookie cookie" in the erl call in the rabbitmq-server and rabbitmqctl scripts.
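
On Unix, the simplest way to synchronise the cookie is to copy it from the first node; a minimal sketch assuming SSH access and the default location (repeat the ownership and permission fix on each target node - the file must be readable only by the user running RabbitMQ):

rabbit1$ scp /var/lib/rabbitmq/.erlang.cookie rabbit2:/var/lib/rabbitmq/.erlang.cookie
rabbit1$ scp /var/lib/rabbitmq/.erlang.cookie rabbit3:/var/lib/rabbitmq/.erlang.cookie
rabbit2$ chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
rabbit2$ chmod 400 /var/lib/rabbitmq/.erlang.cookie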

Starting independent nodes

Clusters are set up by re-configuring existing RabbitMQ nodes into a cluster configuration. Hence the first step is to start RabbitMQ on all nodes in the normal way:

rabbit1$ rabbitmq-server -detached
rabbit2$ rabbitmq-server -detached
rabbit3$ rabbitmq-server -detached

This creates three independent RabbitMQ brokers, one on each node, as confirmed by the cluster_status command:

rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1]}]},{running_nodes,[rabbit@rabbit1]}]
...done.
rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit2]}]},{running_nodes,[rabbit@rabbit2]}]
...done.
rabbit3$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit3 ...
[{nodes,[{disc,[rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit3]}]
...done.

The node name of a RabbitMQ broker started from the rabbitmq-server shell script is rabbit@shorthostname, where the short node name is lower-case (as in rabbit@rabbit1, above). If you use the rabbitmq-server.bat batch file on Windows, the short node name is upper-case (as in rabbit@RABBIT1). When you type node names, case matters, and these strings must match exactly.
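
If a command needs to address a particular node, the full node name can be passed to rabbitmqctl with the -n flag, which is also a quick way to verify that the name (including its case) is what you expect:

rabbit1$ rabbitmqctl -n rabbit@rabbit1 status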

Creating the cluster

In order to link up our three nodes in a cluster, we tell two of the nodes, say rabbit@rabbit2 and rabbit@rabbit3, to join the cluster of the third, say rabbit@rabbit1.

We first join rabbit@rabbit2 in a cluster with rabbit@rabbit1. To do that, on rabbit@rabbit2 we stop the RabbitMQ application and join the rabbit@rabbit1 cluster, then restart the RabbitMQ application. Note that joining a cluster implicitly resets the node, thus removing all resources and data that were previously present on that node.

rabbit2$ rabbitmqctl stop_app
Stopping node rabbit@rabbit2 ...done.
rabbit2$ rabbitmqctl join_cluster rabbit@rabbit1
Clustering node rabbit@rabbit2 with [rabbit@rabbit1] ...done.
rabbit2$ rabbitmqctl start_app
Starting node rabbit@rabbit2 ...done.

We can see that the two nodes are joined in a cluster by running the cluster_status command on either of the nodes:

rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2]}]},{running_nodes,[rabbit@rabbit2,rabbit@rabbit1]}]
...done.
rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2]}]},{running_nodes,[rabbit@rabbit1,rabbit@rabbit2]}]
...done.

Now we join rabbit@rabbit3 to the same cluster. The steps are identical to the ones above, except this time we'll cluster to rabbit2 to demonstrate that the node chosen to cluster to does not matter - it is enough to provide one online node and the node will be clustered to the cluster that the specified node belongs to.

rabbit3$ rabbitmqctl stop_app
Stopping node rabbit@rabbit3 ...done.
rabbit3$ rabbitmqctl join_cluster rabbit@rabbit2
Clustering node rabbit@rabbit3 with rabbit@rabbit2 ...done.
rabbit3$ rabbitmqctl start_app
Starting node rabbit@rabbit3 ...done.

We can see that the three nodes are joined in a cluster by running the cluster_status command on any of the nodes:

rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit3,rabbit@rabbit2,rabbit@rabbit1]}]
...done.
rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit3,rabbit@rabbit1,rabbit@rabbit2]}]
...done.
rabbit3$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit3 ...
[{nodes,[{disc,[rabbit@rabbit3,rabbit@rabbit2,rabbit@rabbit1]}]},{running_nodes,[rabbit@rabbit2,rabbit@rabbit1,rabbit@rabbit3]}]
...done.

By following the above steps we can add new nodes to the cluster at any time, while the cluster is running.

Restarting cluster nodes

Nodes that have been joined to a cluster can be stopped at any time. It is also ok for them to crash. In both cases the rest of the cluster continues operating unaffected, and the nodes automatically "catch up" with the other cluster nodes when they start up again.

We shut down the nodes rabbit@rabbit1 and rabbit@rabbit3 and check on the cluster status at each step:

rabbit1$ rabbitmqctl stop
Stopping and halting node rabbit@rabbit1 ...done.
rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit3,rabbit@rabbit2]}]
...done.
rabbit3$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit3 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit2,rabbit@rabbit3]}]
...done.
rabbit3$ rabbitmqctl stop
Stopping and halting node rabbit@rabbit3 ...done.
rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit2]}]
...done.

Now we start the nodes again, checking on the cluster status as we go along:

rabbit1$ rabbitmq-server -detached
rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit2,rabbit@rabbit1]}]
...done.
rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit1,rabbit@rabbit2]}]
...done.
rabbit3$ rabbitmq-server -detached
rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit2,rabbit@rabbit1,rabbit@rabbit3]}]
...done.
rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]
...done.
rabbit3$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit3 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit2,rabbit@rabbit1,rabbit@rabbit3]}]
...done.

There are some important caveats:

  • When the entire cluster is brought down, the last node to go down must be the first node to be brought online. If this doesn't happen, the nodes will wait 30 seconds for the last disc node to come back online, and fail afterwards. If the last node to go offline cannot be brought back up, it can be removed from the cluster using the forget_cluster_node command - consult the rabbitmqctl manpage for more information.

  • If all cluster nodes stop in a simultaneous and uncontrolled manner (for example with a power cut) you can be left with a situation in which all nodes think that some other node stopped after them. In this case you can use the force_boot command on one node to make it bootable again - consult the rabbitmqctl manpage for more information. Both recovery commands are sketched below.
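
A minimal sketch of both recovery commands, assuming rabbit@rabbit1 is the node that cannot be recovered and rabbit@rabbit2 is the node being brought back:

# caveat 1: remove the unrecoverable node, run from any running cluster member
rabbit2$ rabbitmqctl forget_cluster_node rabbit@rabbit1

# caveat 2: after an uncontrolled shutdown of the whole cluster, mark one
# stopped node as bootable, then start it so the others can rejoin
rabbit2$ rabbitmqctl force_boot
rabbit2$ rabbitmq-server -detached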

Breaking up a cluster

Nodes need to be removed explicitly from a cluster when they are no longer meant to be part of it. We first remove rabbit@rabbit3 from the cluster, returning it to independent operation. To do that, on rabbit@rabbit3 we stop the RabbitMQ application, reset the node, and restart the RabbitMQ application.

rabbit3$ rabbitmqctl stop_app
Stopping node rabbit@rabbit3 ...done.
rabbit3$ rabbitmqctl reset
Resetting node rabbit@rabbit3 ...done.
rabbit3$ rabbitmqctl start_app
Starting node rabbit@rabbit3 ...done.

Note that it would have been equally valid to remove the node from the cluster remotely instead, as shown below.

Running the cluster_status command on the nodes confirms that rabbit@rabbit3 now is no longer part of the cluster and operates independently:

rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2]}]},{running_nodes,[rabbit@rabbit2,rabbit@rabbit1]}]
...done.
rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2]}]},{running_nodes,[rabbit@rabbit1,rabbit@rabbit2]}]
...done.
rabbit3$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit3 ...
[{nodes,[{disc,[rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit3]}]
...done.

We can also remove nodes remotely. This is useful, for example, when having to deal with an unresponsive node. We can for example remove rabbit@rabbit1 from rabbit@rabbit2.

rabbit1$ rabbitmqctl stop_app
Stopping node rabbit@rabbit1 ...done.
rabbit2$ rabbitmqctl forget_cluster_node rabbit@rabbit1
Removing node rabbit@rabbit1 from cluster ...
...done.

Note that rabbit1 still thinks it is clustered with rabbit2, and trying to start it will result in an error. We will need to reset it to be able to start it again.

rabbit1$ rabbitmqctl start_app
Starting node rabbit@rabbit1 ...
Error: inconsistent_cluster: Node rabbit@rabbit1 thinks it's clustered with node rabbit@rabbit2, but rabbit@rabbit2 disagrees
rabbit1$ rabbitmqctl reset
Resetting node rabbit@rabbit1 ...done.
rabbit1$ rabbitmqctl start_app
Starting node rabbit@rabbit1 ...
...done.

The cluster_status command now shows all three nodes operating as independent RabbitMQ brokers:

rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1]}]},{running_nodes,[rabbit@rabbit1]}]
...done.
rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit2]}]},{running_nodes,[rabbit@rabbit2]}]
...done.
rabbit3$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit3 ...
[{nodes,[{disc,[rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit3]}]
...done.

Note that rabbit@rabbit2 retains the residual state of the cluster, whereas rabbit@rabbit1 and rabbit@rabbit3 are freshly initialised RabbitMQ brokers. If we want to re-initialise rabbit@rabbit2 we follow the same steps as for the other nodes:

rabbit2$ rabbitmqctl stop_app
Stopping node rabbit@rabbit2 ...done.
rabbit2$ rabbitmqctl reset
Resetting node rabbit@rabbit2 ...done.
rabbit2$ rabbitmqctl start_app
Starting node rabbit@rabbit2 ...done.

Auto-configuration of a cluster

Instead of configuring clusters "on the fly" using the cluster commands, clusters can also be set up via the RabbitMQ configuration file. The file should set the cluster_nodes field in the rabbit application to a tuple containing a list of rabbit nodes, and an atom - either disc or ram - indicating whether the node should join them as a disc node or a RAM node.

If cluster_nodes is specified, RabbitMQ will try to cluster with each node provided, and stop after it succeeds with one of them. RabbitMQ will only cluster with a node that is online and runs the same versions of Erlang and RabbitMQ. If no suitable nodes are found, the node is left unclustered.

Note that the cluster configuration is applied only to fresh nodes. A fresh node is one that has just been reset or is being started for the first time. Thus, automatic clustering does not take place after node restarts. This means that any change to the clustering made via rabbitmqctl takes precedence over the automatic clustering configuration.

A common use of cluster configuration via the RabbitMQ config file is to automatically configure nodes to join a common cluster. For this purpose the same cluster nodes can be specified in the configuration of every node.

Say we want to join our three separate nodes of our running example back into a single cluster. First we reset and stop all nodes, to make sure that we're working with fresh nodes:

rabbit1$ rabbitmqctl stop_app
Stopping node rabbit@rabbit1 ...done.
rabbit1$ rabbitmqctl reset
Resetting node rabbit@rabbit1 ...done.
rabbit1$ rabbitmqctl stop
Stopping and halting node rabbit@rabbit1 ...done.
rabbit2$ rabbitmqctl stop_app
Stopping node rabbit@rabbit2 ...done.
rabbit2$ rabbitmqctl reset
Resetting node rabbit@rabbit2 ...done.
rabbit2$ rabbitmqctl stop
Stopping and halting node rabbit@rabbit2 ...done.
rabbit3$ rabbitmqctl stop_app
Stopping node rabbit@rabbit3 ...done.
rabbit3$ rabbitmqctl reset
Resetting node rabbit@rabbit3 ...done.
rabbit3$ rabbitmqctl stop
Stopping and halting node rabbit@rabbit3 ...done.

Now we set the relevant field in the config file:

[
  ...
  {rabbit, [
    ...
    {cluster_nodes, {['rabbit@rabbit1', 'rabbit@rabbit2', 'rabbit@rabbit3'], disc}},
    ...
  ]},
  ...
].

For instance, if this were the only field we needed to set, we would simply create the RabbitMQ config file with the contents:

[{rabbit,[{cluster_nodes, {['rabbit@rabbit1', 'rabbit@rabbit2', 'rabbit@rabbit3'], disc}}]}].

(Note for Erlang programmers and the curious: this is a standard Erlang configuration file. For more details, see the configuration guide and the Erlang Config Man Page.)

Once we have the configuration files in place, we simply start the nodes:

rabbit1$ rabbitmq-server -detached
rabbit2$ rabbitmq-server -detached
rabbit3$ rabbitmq-server -detached

We can see that the three nodes are joined in a cluster by running the cluster_status command on any of the nodes:

rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]
...done.
rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]
...done.
rabbit3$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit3 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]}]
...done.

Note that, in order to remove a node from an auto-configured cluster, it must first be removed from the RabbitMQ configuration files of the other nodes in the cluster. Only then can it be reset safely.
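
A short sketch of the removal sequence, assuming rabbit@rabbit3 is the node leaving the auto-configured cluster:

# first, on rabbit1 and rabbit2, delete 'rabbit@rabbit3' from the
# cluster_nodes entry in their configuration files; then:
rabbit3$ rabbitmqctl stop_app
rabbit3$ rabbitmqctl reset
rabbit3$ rabbitmqctl start_app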

Upgrading clusters

When upgrading from one major or minor version of RabbitMQ to another (e.g. from 3.0.x to 3.1.x, or from 2.x.x to 3.x.x), or when upgrading Erlang, the whole cluster must be taken down for the upgrade (since clusters cannot run mixed versions like this). This will not be the case when upgrading from one patch version to another (e.g. from 3.0.x to 3.0.y); these versions can be mixed in a cluster (with the exception that 3.0.0 cannot be mixed with later versions from the 3.0.x series).

RabbitMQ will automatically update its persistent data structures if necessary when upgrading between major / minor versions. In a cluster, this task is performed by the first disc node to be started (the "upgrader" node). Therefore when upgrading a RabbitMQ cluster, you should not attempt to start any RAM nodes first; any RAM nodes started will emit an error message and fail to start up.

While not strictly necessary, it is a good idea to decide ahead of time which disc node will be the upgrader, stop that node last, and start it first. Otherwise changes to the cluster configuration that were made between the upgrader node stopping and the last node stopping will be lost.
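
For example, an in-place upgrade of the whole cluster might look like this; a sketch assuming rabbit@rabbit1 has been chosen as the upgrader:

rabbit2$ rabbitmqctl stop
rabbit3$ rabbitmqctl stop
rabbit1$ rabbitmqctl stop
# upgrade the RabbitMQ (and, if required, Erlang) packages on all machines
rabbit1$ rabbitmq-server -detached
rabbit2$ rabbitmq-server -detached
rabbit3$ rabbitmq-server -detached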

Automatic upgrades are only possible from RabbitMQ versions 2.1.1 and later. If you have an earlier cluster, you will need to rebuild it to upgrade.

A cluster on a single machine

Under some circumstances it can be useful to run a cluster of RabbitMQ nodes on a single machine. This would typically be useful for experimenting with clustering on a desktop or laptop without the overhead of starting several virtual machines for the cluster. The two main requirements for running more than one node on a single machine are that each node should have a unique name and bind to a unique port / IP address combination for each protocol in use.

You can start multiple nodes on the same host manually by repeated invocation of rabbitmq-server (rabbitmq-server.bat on Windows). You must ensure that for each invocation you set the environment variables RABBITMQ_NODENAME and RABBITMQ_NODE_PORT to suitable values.

For example:

$ RABBITMQ_NODE_PORT=5672 RABBITMQ_NODENAME=rabbit rabbitmq-server -detached
$ RABBITMQ_NODE_PORT=5673 RABBITMQ_NODENAME=hare rabbitmq-server -detached
$ rabbitmqctl -n hare stop_app
$ rabbitmqctl -n hare join_cluster rabbit@`hostname -s`
$ rabbitmqctl -n hare start_app

will set up a two node cluster, both nodes as disc nodes. Note that if you have RabbitMQ opening any ports other than AMQP, you'll need to configure those not to clash as well - for example:

$ RABBITMQ_NODE_PORT=5672 RABBITMQ_SERVER_START_ARGS="-rabbitmq_management listener [{port,15672}]" RABBITMQ_NODENAME=rabbit rabbitmq-server -detached
$ RABBITMQ_NODE_PORT=5673 RABBITMQ_SERVER_START_ARGS="-rabbitmq_management listener [{port,15673}]" RABBITMQ_NODENAME=hare rabbitmq-server -detached

will start two nodes (which can then be clustered) when the management plugin is installed.

Issues with hostname

RabbitMQ names the database directory using the current hostname of the system. If the hostname changes, a new empty database is created. To avoid data loss it's crucial to set up a fixed and resolvable hostname. For example:

sudo -s # become root
echo "rabbit" > /etc/hostname
echo "127.0.0.1 rabbit" >> /etc/hosts
hostname -F /etc/hostname

Whenever the hostname changes you should restart RabbitMQ:

$ /etc/init.d/rabbitmq-server restart

A similar effect can be achieved by using rabbit@localhost as the broker nodename.
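
A minimal sketch of that alternative:

$ RABBITMQ_NODENAME=rabbit@localhost rabbitmq-server -detached
$ rabbitmqctl -n rabbit@localhost status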

The impact of this solution is that clustering will not work, because the chosen hostname will not resolve to a routable address from remote hosts. The rabbitmqctl command will similarly fail when invoked from a remote host. A more sophisticated solution that does not suffer from this weakness is to use DNS, e.g. Amazon Route 53 if running on EC2.

Firewalled nodes

The case for firewalled clustered nodes exists when nodes are in a data center or on a reliable network, but separated by firewalls. Again, clustering is not recommended over a WAN or when network links between nodes are unreliable.

In the most common configuration you will need to open ports 4369 and 25672 for clustering to work.

Erlang makes use of a Port Mapper Daemon (epmd) for resolution of node names in a cluster. The default epmd port is 4369, but this can be changed using the ERL_EPMD_PORT environment variable. All nodes must use the same port. For further details see the Erlang epmd manpage.
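
For example, to run epmd on a non-default port, the variable could be exported before starting the server; a sketch assuming port 4370 is free on every machine:

$ export ERL_EPMD_PORT=4370   # must be identical on every node and for rabbitmqctl
$ rabbitmq-server -detached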

Once a distributed Erlang node address has been resolved via epmd, other nodes will attempt to communicate directly with that address using the Erlang distributed node protocol. The default port for this traffic in RabbitMQ is 20000 higher than RABBITMQ_NODE_PORT (i.e. 25672 by default). This can be explicitly configured using the RABBITMQ_DIST_PORT variable - see the configuration guide.
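
For example, with iptables the two clustering ports could be opened as follows; a sketch assuming the default ports (client-facing ports such as 5672 must be handled separately):

$ sudo iptables -A INPUT -p tcp --dport 4369 -j ACCEPT    # epmd
$ sudo iptables -A INPUT -p tcp --dport 25672 -j ACCEPT   # Erlang distribution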

Erlang Versions Across the Cluster

All nodes in a cluster must run the same version of Erlang.
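
One quick way to compare Erlang/OTP releases across machines before clustering is to query the runtime on each node:

$ erl -noshell -eval 'io:format("~s~n", [erlang:system_info(otp_release)]), halt().'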

Connecting to Clusters from Clients

A client can connect as normal to any node within a cluster. If that node should fail, and the rest of the cluster survives, then the client should notice the closed connection, and should be able to reconnect to some surviving member of the cluster. Generally, it's not advisable to bake node hostnames or IP addresses into client applications: this introduces inflexibility and will require client applications to be edited, recompiled and redeployed should the configuration of the cluster change or the number of nodes in the cluster change. Instead, we recommend a more abstracted approach: this could be a dynamic DNS service which has a very short TTL configuration, or a plain TCP load balancer, or some sort of mobile IP achieved with pacemaker or similar technologies. In general, this aspect of managing the connection to nodes within a cluster is beyond the scope of RabbitMQ itself, and we recommend the use of other technologies designed specifically to solve these problems.

Clusters with RAM nodes

RAM nodes keep their metadata only in memory. As RAM nodes don't have to write to disc as much as disc nodes, they can perform better. However, note that since persistent queue data is always stored on disc, the performance improvements will affect only resource management (e.g. adding/removing queues, exchanges, or vhosts), but not publishing or consuming speed.

RAM nodes are an advanced use case; when setting up your first cluster you should simply not use them. You should have enough disc nodes to handle your redundancy requirements, then if necessary add additional RAM nodes for scale.

A cluster containing only RAM nodes is fragile; if the cluster stops you will not be able to start it again and will lose all data. RabbitMQ will prevent the creation of a RAM-node-only cluster in many situations, but it can't absolutely prevent it.

The examples here show a cluster with one disc and one RAM node for simplicity only; such a cluster is a poor design choice.

Creating RAM nodes

We can declare a node as a RAM node when it first joins the cluster. We do this with rabbitmqctl join_cluster as before, but passing the --ram flag:

rabbit2$ rabbitmqctl stop_app
Stopping node rabbit@rabbit2 ...done.
rabbit2$ rabbitmqctl join_cluster --ram rabbit@rabbit1
Clustering node rabbit@rabbit2 with [rabbit@rabbit1] ...done.
rabbit2$ rabbitmqctl start_app
Starting node rabbit@rabbit2 ...done.

RAM nodes are shown as such in the cluster status:

rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1]},{ram,[rabbit@rabbit2]}]},{running_nodes,[rabbit@rabbit2,rabbit@rabbit1]}]
...done.
rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit1]},{ram,[rabbit@rabbit2]}]},{running_nodes,[rabbit@rabbit1,rabbit@rabbit2]}]
...done.

Changing node types

We can change the type of a node from ram to disc and vice versa. Say we wanted to reverse the types of rabbit@rabbit2 and rabbit@rabbit1, turning the former from a ram node into a disc node and the latter from a disc node into a ram node. To do that we can use the change_cluster_node_type command. The node must be stopped first.

rabbit2$ rabbitmqctl stop_app
Stopping node rabbit@rabbit2 ...done.
rabbit2$ rabbitmqctl change_cluster_node_type disc
Turning rabbit@rabbit2 into a disc node ...
...done.
rabbit2$ rabbitmqctl start_app
Starting node rabbit@rabbit2 ...done.
rabbit1$ rabbitmqctl stop_app
Stopping node rabbit@rabbit1 ...done.
rabbit1$ rabbitmqctl change_cluster_node_type ram
Turning rabbit@rabbit1 into a ram node ...
...done.
rabbit1$ rabbitmqctl start_app
Starting node rabbit@rabbit1 ...done.
