Publishing to a RabbitMQ exchange from Flask (using Celery)

What do you do when life gives you a rabbit, a whale and a bunch of celery? It’s not a riddle: running a web app that integrates RabbitMQ with Celery, and orchestrating it all with Docker, can prove puzzling. To save you a headache, the minimal configuration needed to get started is detailed below.

This post assumes RabbitMQ version 3.7. The version is noteworthy, as the configuration file format changed in 3.7.

Docker Compose

In this post, it is assumed you are using Docker Compose to declaratively define your containers.

First, add a rabbitmq service to your docker-compose.yml file; a sketch follows the list below.

We specify that we want to:

  • Use an image of the latest 3.x RabbitMQ build for Debian that bundles the RabbitMQ management plugin. Other Dockerfiles are available on Docker Hub.
  • Give the container the hostname rabbitmq, which we will later reference in the Python client code to connect to the message exchange.
  • Mount into the container a file declaring any plugins we want to enable, as well as the actual RabbitMQ configuration. The contents of both files are described later.
  • Set the command that starts the RabbitMQ server.
  • Expose some ports so that software running outside the container can connect. The default RabbitMQ management port is 15672 and the actual messaging (AMQP) port is 5672.
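
A minimal sketch of that service is below. The mount paths and the top-level version string are assumptions; adapt them to your setup.

```yaml
version: "3"

services:
  rabbitmq:
    # Debian-based 3.x image that bundles the management plugin.
    image: rabbitmq:3-management
    hostname: rabbitmq
    volumes:
      # Both files live next to docker-compose.yml, as described below.
      - ./rabbitmq_enabled_plugins:/etc/rabbitmq/enabled_plugins
      - ./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
    command: rabbitmq-server
    ports:
      - "15672:15672"  # management UI
      - "5672:5672"    # AMQP messaging
```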

The contents of the aforementioned configuration files are as below.

rabbitmq.conf

This allows the default guest user credentials to be used to log in to the RabbitMQ management UI from outside the local host. Remove this in production.
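
In the new ini-style format introduced in 3.7, a single line achieves this:

```ini
# Allow the default guest user to connect from outside localhost.
# Do not use this in production.
loopback_users.guest = false
```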

rabbitmq_enabled_plugins
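
The enabled_plugins file uses Erlang term syntax (note the trailing dot). To enable the management plugin bundled with the image:

```erlang
[rabbitmq_management].
```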

These files should be created in the same directory as your docker-compose.yml.
All of the available configuration parameters are listed on the RabbitMQ site.

Celery

Pika is a popular Python module for interfacing with RabbitMQ via the standard AMQP protocol. Be warned that it is not thread safe. Thankfully, Celery task functions can inherit from a base task class that instantiates a connection for each newly spawned worker process. Workers are pooled and reused, so a new connection will not be established each time a task function runs.

Create a new task class and declare a function that connects to RabbitMQ, or returns an existing connection object if already connected.
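
A sketch of such a base class is below. The rabbitmq hostname comes from the Compose file and the guest credentials are RabbitMQ's defaults; the class name RabbitMQTask and the channel property are illustrative.

```python
import pika
from celery import Task


class RabbitMQTask(Task):
    """Base task that lazily opens one RabbitMQ connection per worker process."""

    _connection = None
    _channel = None

    @property
    def channel(self):
        # Connect on first use, or reconnect if the broker closed the
        # connection; otherwise reuse the existing channel.
        if self._connection is None or self._connection.is_closed:
            params = pika.ConnectionParameters(
                host="rabbitmq",  # hostname declared in docker-compose.yml
                credentials=pika.PlainCredentials("guest", "guest"),
            )
            self._connection = pika.BlockingConnection(params)
            self._channel = self._connection.channel()
        return self._channel
```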

Finally, in your Celery tasks module, add a function that publishes a message to an exchange.
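
For example, something along these lines. The broker URL, exchange name, and routing key are placeholders, and the exchange is assumed to have been declared already (e.g. via channel.exchange_declare).

```python
from celery import Celery

# Celery itself can also use RabbitMQ as its broker; this URL is an assumption.
app = Celery("tasks", broker="amqp://guest:guest@rabbitmq:5672//")


@app.task(base=RabbitMQTask, bind=True)
def publish_message(self, message):
    # Reuses the per-worker channel from the base class.
    self.channel.basic_publish(
        exchange="my_exchange",        # hypothetical exchange name
        routing_key="my_routing_key",  # hypothetical routing key
        body=message,
    )
```

From a Flask route, the task can then be queued with publish_message.delay("hello").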

Conclusion

Run docker-compose up --build and test out the application.
