MQ RabbitMQ Brokers Should Use Cluster Deployment Mode
Overview
This check verifies that your Amazon MQ RabbitMQ brokers use cluster deployment mode (CLUSTER_MULTI_AZ) instead of single-instance deployment. Cluster mode distributes broker nodes across multiple Availability Zones, providing high availability and fault tolerance.
Risk
Running a RabbitMQ broker as a single instance creates a single point of failure:
- If the instance or Availability Zone experiences an outage, your message queue becomes unavailable
- Messages in flight could be lost during unexpected failures
- Applications depending on the broker may experience significant downtime
- Message ordering and delivery guarantees may be compromised
Cluster deployment eliminates these risks by maintaining broker availability even when individual nodes or AZs fail.
Remediation Steps
Prerequisites
You need permission to create or modify Amazon MQ brokers. If you have existing single-instance brokers, you will need to create new cluster-mode brokers and migrate your applications (deployment mode cannot be changed after creation).
AWS Console Method
Important: You cannot convert an existing single-instance broker to cluster mode. You must create a new broker with cluster deployment and migrate your applications.
- Open the Amazon MQ console at https://console.aws.amazon.com/amazon-mq
- Click Create brokers
- Select RabbitMQ as the engine type, then click Next
- For Deployment mode, select Cluster deployment
- Enter a Broker name (e.g., my-rabbitmq-cluster)
- Choose an appropriate Broker instance type (e.g., mq.m5.large)
- Configure RabbitMQ access:
- Enter an Admin username
- Enter a Password (at least 12 characters, including at least 4 unique characters)
- Under Network and security:
- Choose whether to make the broker publicly accessible
- Select your VPC and subnets (for private brokers, select subnets in different AZs)
- Select or create security groups
- Click Create broker
After the new broker is active (this takes several minutes), update your applications to use the new broker endpoint, then delete the old single-instance broker.
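Before creating the broker, you can sanity-check a candidate password against the rules above. A minimal Python sketch (the check for commas, colons, and equals signs reflects additional characters that Amazon MQ documents as disallowed):

```python
def is_valid_mq_password(password: str) -> bool:
    """Check a candidate password against the Amazon MQ RabbitMQ rules:
    at least 12 characters and at least 4 unique characters.
    AWS additionally rejects commas, colons, and equals signs."""
    if len(password) < 12:
        return False
    if len(set(password)) < 4:
        return False
    if any(c in ",:=" for c in password):
        return False
    return True

print(is_valid_mq_password("YourSecurePassword123!"))  # True
print(is_valid_mq_password("short"))                   # False: too short
print(is_valid_mq_password("aaaaaaaaaaaa"))            # False: only 1 unique character
```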
AWS CLI (optional)
Create a new RabbitMQ broker in cluster deployment mode:
aws mq create-broker \
--broker-name my-rabbitmq-cluster \
--engine-type RABBITMQ \
--engine-version 3.13 \
--deployment-mode CLUSTER_MULTI_AZ \
--host-instance-type mq.m5.large \
--no-publicly-accessible \
--auto-minor-version-upgrade \
--subnet-ids subnet-xxxxxxxx subnet-yyyyyyyy \
--security-groups sg-xxxxxxxx \
--users '[{"Username":"admin","Password":"YourSecurePassword123!"}]' \
--region us-east-1
Replace:
- subnet-xxxxxxxx and subnet-yyyyyyyy with your subnet IDs (in different AZs)
- sg-xxxxxxxx with your security group ID
- YourSecurePassword123! with a secure password
Check the broker status:
aws mq describe-broker \
--broker-id <broker-id> \
--region us-east-1 \
--query '{Name:BrokerName,DeploymentMode:DeploymentMode,State:BrokerState}'
List all brokers and their deployment modes:
aws mq list-brokers \
--region us-east-1 \
--query 'BrokerSummaries[?EngineType==`RABBITMQ`].{Name:BrokerName,Id:BrokerId,DeploymentMode:DeploymentMode}'
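If you prefer post-processing the JSON over JMESPath queries, the same compliance filter can be expressed in a few lines of Python. This is a sketch over a hypothetical saved `aws mq list-brokers` response (the broker names and IDs below are placeholders):

```python
import json

# Hypothetical saved output of `aws mq list-brokers`:
sample = json.loads("""
{"BrokerSummaries": [
  {"BrokerName": "orders-mq", "BrokerId": "b-111", "EngineType": "RABBITMQ",
   "DeploymentMode": "SINGLE_INSTANCE"},
  {"BrokerName": "events-mq", "BrokerId": "b-222", "EngineType": "RABBITMQ",
   "DeploymentMode": "CLUSTER_MULTI_AZ"},
  {"BrokerName": "legacy-amq", "BrokerId": "b-333", "EngineType": "ACTIVEMQ",
   "DeploymentMode": "SINGLE_INSTANCE"}
]}
""")

def noncompliant_rabbitmq_brokers(summaries):
    """Return RabbitMQ brokers whose deployment mode is not CLUSTER_MULTI_AZ."""
    return [
        b for b in summaries
        if b.get("EngineType") == "RABBITMQ"
        and b.get("DeploymentMode") != "CLUSTER_MULTI_AZ"
    ]

for b in noncompliant_rabbitmq_brokers(sample["BrokerSummaries"]):
    print(f"{b['BrokerName']} ({b['BrokerId']}): {b['DeploymentMode']}")
# prints: orders-mq (b-111): SINGLE_INSTANCE
```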
CloudFormation (optional)
Use this template to create a RabbitMQ broker in cluster deployment mode:
AWSTemplateFormatVersion: '2010-09-09'
Description: Amazon MQ RabbitMQ broker in cluster deployment mode
Parameters:
BrokerName:
Type: String
Description: Name for the RabbitMQ broker
Default: my-rabbitmq-cluster
HostInstanceType:
Type: String
Description: Instance type for broker nodes
Default: mq.m5.large
AllowedValues:
- mq.m5.large
- mq.m5.xlarge
- mq.m5.2xlarge
- mq.m5.4xlarge
Username:
Type: String
Description: Admin username for the broker
MinLength: 2
MaxLength: 100
Password:
Type: String
Description: Admin password for the broker
NoEcho: true
MinLength: 12
SubnetIds:
Type: List<AWS::EC2::Subnet::Id>
Description: Subnet IDs for the broker (provide subnets in different AZs)
SecurityGroupIds:
Type: List<AWS::EC2::SecurityGroup::Id>
Description: Security group IDs for the broker
Resources:
RabbitMQBroker:
Type: AWS::AmazonMQ::Broker
Properties:
BrokerName: !Ref BrokerName
EngineType: RABBITMQ
EngineVersion: '3.13'
HostInstanceType: !Ref HostInstanceType
DeploymentMode: CLUSTER_MULTI_AZ
PubliclyAccessible: false
AutoMinorVersionUpgrade: true
SubnetIds: !Ref SubnetIds
SecurityGroups: !Ref SecurityGroupIds
Users:
- Username: !Ref Username
Password: !Ref Password
Outputs:
BrokerArn:
Description: ARN of the RabbitMQ broker
Value: !GetAtt RabbitMQBroker.Arn
BrokerEndpoints:
Description: AMQP endpoints for the broker
Value: !Join [',', !GetAtt RabbitMQBroker.AmqpEndpoints]
Deploy the stack:
aws cloudformation create-stack \
--stack-name rabbitmq-cluster \
--template-body file://template.yaml \
--parameters \
ParameterKey=Username,ParameterValue=admin \
ParameterKey=Password,ParameterValue=YourSecurePassword123! \
ParameterKey=SubnetIds,ParameterValue="subnet-xxxxxxxx,subnet-yyyyyyyy" \
ParameterKey=SecurityGroupIds,ParameterValue=sg-xxxxxxxx \
--region us-east-1
Terraform (optional)
variable "broker_name" {
description = "Name for the RabbitMQ broker"
type = string
default = "my-rabbitmq-cluster"
}
variable "host_instance_type" {
description = "Instance type for broker nodes"
type = string
default = "mq.m5.large"
}
variable "admin_username" {
description = "Admin username for the broker"
type = string
}
variable "admin_password" {
description = "Admin password for the broker"
type = string
sensitive = true
}
variable "subnet_ids" {
description = "Subnet IDs for the broker (in different AZs)"
type = list(string)
}
variable "security_group_ids" {
description = "Security group IDs for the broker"
type = list(string)
}
resource "aws_mq_broker" "rabbitmq_cluster" {
broker_name = var.broker_name
engine_type = "RabbitMQ"
engine_version = "3.13"
host_instance_type = var.host_instance_type
deployment_mode = "CLUSTER_MULTI_AZ"
publicly_accessible = false
auto_minor_version_upgrade = true
subnet_ids = var.subnet_ids
security_groups = var.security_group_ids
user {
username = var.admin_username
password = var.admin_password
}
tags = {
Environment = "production"
}
}
output "broker_arn" {
description = "ARN of the RabbitMQ broker"
value = aws_mq_broker.rabbitmq_cluster.arn
}
output "broker_endpoints" {
description = "AMQP endpoints for the broker"
value = aws_mq_broker.rabbitmq_cluster.instances[*].endpoints
}
Create a terraform.tfvars file with your values:
admin_username = "admin"
admin_password = "YourSecurePassword123!"
subnet_ids = ["subnet-xxxxxxxx", "subnet-yyyyyyyy"]
security_group_ids = ["sg-xxxxxxxx"]
Verification
After creating your cluster-mode broker, verify it is properly configured:
- Go to the Amazon MQ console
- Click on your broker name
- On the Details page, confirm Deployment mode shows Cluster
- Check that the Status shows Running
CLI verification
aws mq describe-broker \
--broker-id <broker-id> \
--region us-east-1 \
--query '{Name:BrokerName,DeploymentMode:DeploymentMode,State:BrokerState}'
The output should show "DeploymentMode": "CLUSTER_MULTI_AZ".
To find brokers that are not using cluster mode:
aws mq list-brokers \
--region us-east-1 \
--query 'BrokerSummaries[?EngineType==`RABBITMQ` && DeploymentMode!=`CLUSTER_MULTI_AZ`].{Name:BrokerName,Id:BrokerId,DeploymentMode:DeploymentMode}'
Additional Resources
- AWS Documentation: Amazon MQ for RabbitMQ
- AWS Documentation: RabbitMQ Broker Architecture
- AWS Documentation: Amazon MQ Best Practices
Notes
- Migration required: You cannot change the deployment mode of an existing broker. You must create a new cluster-mode broker and migrate your applications.
- Cost considerations: Cluster deployment uses multiple broker instances across AZs, which increases costs compared to single-instance deployment. However, the improved availability typically justifies the additional cost for production workloads.
- Instance type requirements: Cluster deployment requires at least the mq.m5.large instance type.
- Network configuration: For private cluster brokers, you need subnets in at least two different Availability Zones.
- Client reconnection: Ensure your applications implement automatic reconnection logic with exponential backoff to handle node failovers gracefully.
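The reconnection logic in the last note can be sketched as capped exponential backoff with jitter. The snippet below wraps a hypothetical `connect` callable; in a real client you would pass in your AMQP library's connection call (for example, a wrapper around pika's `BlockingConnection` for RabbitMQ):

```python
import random
import time

def connect_with_backoff(connect, max_attempts=8, base_delay=0.5, max_delay=30.0):
    """Retry `connect` with capped exponential backoff plus jitter.

    `connect` is any callable that returns a connection object or raises
    on failure (e.g. a wrapper around your AMQP client's connect call).
    """
    for attempt in range(max_attempts):
        try:
            return connect()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Delay doubles each attempt, capped, with up to 50% random jitter.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * (0.5 + random.random() / 2))

# Usage: simulate a broker node that fails twice during a failover.
attempts = {"n": 0}
def flaky_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("node unavailable")
    return "connection"

conn = connect_with_backoff(flaky_connect, base_delay=0.01)
print(conn, "after", attempts["n"], "attempts")  # connection after 3 attempts
```

Jitter matters here: when a cluster node fails, many clients reconnect at once, and randomized delays prevent them from retrying in lockstep against the surviving nodes.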