Migrating RabbitMQ from Self-Hosted to AWS MQ

RabbitMQ is a widely used open-source message broker that helps applications scale by placing a message queuing mechanism between them. It supports multiple messaging protocols, queuing, delivery acknowledgements and flexible routing to queues.

Amazon MQ is a fully managed service that provisions and manages open-source message brokers such as RabbitMQ and Apache ActiveMQ. Because Amazon MQ supports RabbitMQ natively, existing RabbitMQ message brokers can be migrated to AWS without any code changes.

Before we begin, you are expected to have a basic understanding of RabbitMQ and message queues in general.

For folks who are short on time, here is a quick refresher.

THINGS TO KNOW:

Producer / Consumer: The application that sends messages / the application that receives messages.

Queue: A buffer that stores messages.

Message: Information that is sent from the producer to a consumer through RabbitMQ.

Connection: A TCP connection between your application and the RabbitMQ broker.

Exchange: Receives messages from producers and pushes them to queues depending on rules defined by the exchange type. To receive messages, a queue needs to be bound to at least one exchange.

Binding: A binding is a link between a queue and an exchange.

Routing key: A key that the exchange looks at to decide how to route the message to queues. Think of the routing key as an address for the message.

AMQP: Advanced Message Queuing Protocol is the protocol used by RabbitMQ for messaging.

Virtual host: Provides a way to segregate applications using the same RabbitMQ instance. Different users can be granted different permissions on different vhosts, and queues and exchanges can be created so that they exist only within one vhost.
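To make these terms concrete, here is a minimal sketch using the Python pika client; the broker URI, exchange, queue and routing key names below are placeholders, not part of any particular setup.

    import pika

    # Connection: a TCP connection between the application and the broker.
    # The URI below is a placeholder for a local, default installation.
    connection = pika.BlockingConnection(
        pika.URLParameters("amqp://guest:guest@localhost:5672/%2F"))
    channel = connection.channel()

    # Exchange, queue and binding: the exchange routes messages to the queue
    # whose binding matches the routing key.
    channel.exchange_declare(exchange="orders", exchange_type="direct")
    channel.queue_declare(queue="order-created")
    channel.queue_bind(queue="order-created", exchange="orders",
                       routing_key="order.created")

    # Producer: publish a message to the exchange with a routing key.
    channel.basic_publish(exchange="orders", routing_key="order.created",
                          body=b"order #42")

    # Consumer: fetch the message from the queue.
    method, properties, body = channel.basic_get(queue="order-created", auto_ack=True)
    print(body)

    connection.close()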

Typical Setup:

During Migration:

After the Migration:

Steps:

Your application will be live during the migration, so you need to make sure there is no message/data loss during the process.

RabbitMQ has excellent built-in support for this kind of transition. There are two plugins that can help here:

  1. Federation Plugin
  2. Shovel Plugin
  1. Federation Plugin:
    1. It helps transmit messages between brokers without requiring clustering. This is useful for a number of reasons:
      1. The brokers may have different users and virtual hosts.
      2. They may run on different versions of RabbitMQ.
    2. The Federation plugin makes it easy to move consumers from Cluster X to Cluster Y, “without disrupting message consumption or losing messages”.
  2. Shovel Plugin:
    1. This plugin unidirectionally moves messages from a source to a destination. Sometimes it is necessary to reliably and continually move messages from a source (typically a queue) in one cluster to a destination (an exchange, topic, etc.) in another cluster. A configuration sketch follows this list.
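As a rough illustration of the Shovel option, a dynamic shovel can be declared through the RabbitMQ management HTTP API, which matters when, as with Amazon MQ, there is no shell access for rabbitmqctl. This is only a sketch: the call follows the management plugin's parameters API, and the host names, credentials and queue names are made-up placeholders.

    import requests

    # Placeholder management API endpoint and credentials for the broker
    # that will run the shovel; "%2F" is the URL-encoded default vhost "/".
    MGMT = "https://new-broker.example.com/api"
    AUTH = ("admin", "admin-password")

    shovel = {
        "value": {
            "src-protocol": "amqp091",
            "src-uri": "amqps://user:password@old-broker.example.com:5671",
            "src-queue": "order-created",
            "dest-protocol": "amqp091",
            "dest-uri": "amqp://",  # shorthand for "this broker"
            "dest-queue": "order-created",
        }
    }

    resp = requests.put(MGMT + "/parameters/shovel/%2F/migrate-order-created",
                        auth=AUTH, json=shovel)
    resp.raise_for_status()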

APPROACH:

THINGS TO KNOW:

  1. Upstream server: The broker on which the original messages are published (in our case, the self-hosted cluster).
  2. Downstream server: The broker to which messages are transmitted. The federated exchange/queue is defined on this broker (in our case, the Amazon MQ broker).

For this scenario, we can go with the Federation plugin, which is widely used.
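Because Amazon MQ gives no shell access to run rabbitmqctl, the federation upstream and its matching policy have to be defined on the downstream (Amazon MQ) broker through its web console or management HTTP API. Below is a minimal sketch using the Python requests library; the broker URL, credentials, upstream and policy names are placeholders.

    import requests

    # Placeholder host name and credentials for the downstream (Amazon MQ) broker;
    # its management API is served over HTTPS.
    MGMT = "https://b-1234-example.mq.us-east-1.amazonaws.com/api"
    AUTH = ("mq-admin", "mq-admin-password")
    VHOST = "%2F"  # URL-encoded default vhost "/"

    # 1. Point the downstream broker at the self-hosted (upstream) broker.
    upstream = {"value": {"uri": "amqps://user:password@old-broker.example.com:5671"}}
    requests.put(f"{MGMT}/parameters/federation-upstream/{VHOST}/self-hosted",
                 auth=AUTH, json=upstream).raise_for_status()

    # 2. Apply a policy that federates every queue matching the pattern.
    policy = {
        "pattern": ".*",
        "apply-to": "queues",
        "definition": {"federation-upstream-set": "all"},
    }
    requests.put(f"{MGMT}/policies/{VHOST}/federate-from-self-hosted",
                 auth=AUTH, json=policy).raise_for_status()

    # 3. Inspect the running links (requires the federation management plugin).
    print(requests.get(f"{MGMT}/federation-links", auth=AUTH).json())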

Federated Queues:

AWS MQ - access only "publicly exposed" Nodes:

Broker Type Limits:

"Secure" connection:

RabbitMQ supports providing your own SSL certificates, but that requires access to the machine. Since Amazon MQ is a hosted service, we do not have access to the backend.

Federation via AMQPS (providing certificate files):

amqps://user:password@server-name?cacertfile=/path/to/cacert.pem&certfile=/path/to/cert.pem&keyfile=/path/to/key.pem&verify=verify_peer

As mentioned earlier, suppose our setup is private and not secured with its own certificates. The options are:

Federation via AMQPS (username/password over TLS):

amqps://user:password@server-name:5671
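As a quick sanity check that the broker accepts plain username/password connections over TLS on port 5671, you can open a connection with the pika client using the same style of URI; the credentials and host below are placeholders.

    import pika

    # Placeholder credentials and host; the amqps:// scheme makes pika
    # negotiate TLS on port 5671 using the default certificate store.
    params = pika.URLParameters("amqps://user:password@server-name:5671/%2F")
    connection = pika.BlockingConnection(params)
    print("connected:", connection.is_open)
    connection.close()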