
Develop a Simple URL Shortener Using Microservices Architecture

Node.js + MySQL + RabbitMQ + Redis + Docker


Heard about microservices but never really understood them? Or didn’t get a chance to develop applications using this architecture? Well, the chances are you never needed it. Just because something is popular and a lot of big companies use it doesn’t mean you should too.

However, you should be familiar with the approach so that you recognize the situations where it is appropriate to use it.
In this article, we are going to create a simple URL shortener using a microservices architecture.

Here’s our task:

You need to create an application for URL shortening. An example of such an application is TinyURL (http://tinyurl.com/). The application will consist of two services. The first service is a Management Service, and the other is a Redirection Service. RabbitMQ/Kafka will be used for transferring messages between the services.

Architecture diagram

Management Service:

Management Service has a RESTful API for creating and deleting URLs. You should create two routes:

Creation route

  • The route should create a short URL based on a real URL.
  • The short URL must be unique.
  • The short URL’s hash identifier must be located in the URI path. Example: http://localhost:8080/uAYC3sOddP

Request example

{
  "realURL": "https://www.example.com/test"
}

Response example

{
  "id": 3,
  "realURL": "https://www.example.com/test",
  "shortURL": "http://localhost:8080/gfjhgESta"
}

Deletion route:

  • Remove a short URL using its id.

The Management Service should use MySQL/PostgreSQL for the persistence layer. After creating or deleting a short URL, the information must be sent to RabbitMQ/Kafka.

Redirection Service

The service finds the real URL based on the hash part of the short URL and redirects the user to it. The Redirection Service receives information about short URLs through RabbitMQ/Kafka.

When a short URL is created on the Management Service, the information is stored in Redis on the Redirection Service; when a short URL is deleted, the information is deleted from Redis. The Redirection Service has one RESTful API route.

Redirect route

  • The route should return a 302 HTTP code for the existing short URL.
  • The route should return a 404 HTTP code for non-existing short URLs.

Rate limiter

Implement rate limiting on the Redirection Service so that it allows 10 redirect requests per 120 seconds for a specific URL.

Redirect route

  • The route should return 429 HTTP code after reaching the threshold.

Let’s Get Started

I will assume you have Docker installed and ready.

First, we are going to create a new project and, inside it, three folders: Management-Service, Redirection-Service, and Database-Service.

We need to spin up the MySQL database, so get inside Database-Service and create a Dockerfile with this content:
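A minimal sketch of such a Dockerfile, assuming the database name urlshortener (a placeholder; the root password matches the one used later in the article):

FROM mysql:8

ENV MYSQL_ROOT_PASSWORD=123456
ENV MYSQL_DATABASE=urlshortener

COPY ./db-dump /docker-entrypoint-initdb.d

EXPOSE 3306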

FROM will pull the MySQL 8 server image from Docker Hub, and ENV will set the environment variables.
COPY will copy everything from /db-dump into /docker-entrypoint-initdb.d.

When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d.

EXPOSE informs Docker that the container listens on the specified network ports at runtime.

Now we’ve got MySQL installed and ready, but we don’t have our table yet. We’ll need only one table in which to store the URLs. Let’s create a file setup.sql inside the db-dump folder:

CREATE TABLE urls (
  id INT PRIMARY KEY AUTO_INCREMENT NOT NULL,
  realUrl VARCHAR(40) NULL,
  createdAt TIMESTAMP NULL,
  updatedAt TIMESTAMP NULL
);

Now let’s create a docker-compose.yml file inside the root project folder. Here we configure all of our services and containers. Then, with the single command docker-compose up, you create and start all the services from your YAML file.
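A sketch of the db service, consistent with the options explained in the bullets below (the container name mysqldb and the volume mysql-data reappear later in the article; the compose version is an assumption):

version: "3.8"
services:
  db:
    build: ./Database-Service
    command: --default-authentication-plugin=mysql_native_password
    container_name: mysqldb
    volumes:
      - mysql-data:/var/lib/mysql
    ports:
      - "3306:3306"

volumes:
  mysql-data: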

  • db represents the name of the service. You can call it whatever you want.
  • build will take the path to the folder where the Dockerfile of our database service is located.
  • command provides defaults for the executing container. In this case we are using the mysql_native_password plugin so we can log into MySQL using a classic username/password combination rather than something like a Unix socket, PAM, etc.
  • container_name is just an alias we can use instead of container ID.
  • volumes are the preferred mechanism for persisting data generated by and used by Docker containers. Every time we destroy a container, everything inside our database is erased too, and we want to persist the data on our local host machine. There are three types of volumes:
  • Bind mounts (Host volumes) — A file or directory on the host machine is mounted into a container.
  • Named volumes — Named volumes are completely managed by Docker. Instead of the path to the file, you specify only the name of the volume followed by the path of the folder you want to persist. Docker will automatically create a new directory on the host machine which is going to have the contents of the container’s directory specified by the path. In our case, Docker will create a named volume mysql-data which is going to store everything from container’s /var/lib/mysql. Volume’s contents exist outside the lifecycle of a given container so for example, we could use the same database for another container.
  • Anonymous volumes — Almost the same as named volumes, but they can be difficult to refer to because their name is a random hash value.

I highly suggest reading a little bit more about volumes in general.

ports maps the host’s 3306 port to the container’s 3306 port, which is the standard MySQL port.

We can now run docker-compose up -d in the terminal.

The first time we start the containers it will be a little slower because Docker needs to pull the images configured in the .yml file. -d is for detached mode so you can keep using your terminal. You can check all of your running Docker containers with docker ps.

Let’s get inside our container. You can use docker exec -it mysqldb /bin/bash.

The docker exec command runs a new command in a running container.
We specify the flag -i for interactive, which keeps STDIN open even if not attached (we need this to type any commands at all), and -t, which allocates a pseudo-TTY: a pseudo-terminal that connects the user’s terminal with STDIN and STDOUT.

Then we specify the container’s ID/name and the command to run, /bin/bash (for a terminal). You can make sure the database was created by running mysql -u root -p, providing the password 123456, and executing SHOW DATABASES;.
You can populate the table with something like INSERT INTO urls (realUrl) VALUES ('https://google.com');.

The second time we run the container, it is not going to execute the startup scripts located inside docker-entrypoint-initdb.d because it will recognize the already-created database (thanks to the volume, it won’t get destroyed).
Sometimes you may want to recreate the database with a different name or password. For that, we need to destroy the volume with docker volume rm <volume_name> so the startup scripts run again and create a new volume.

Important: If you are making changes to the Dockerfile, make sure to run docker-compose build --no-cache afterwards so the changes are reflected.

Include RabbitMQ

Pull RabbitMQ into the docker-compose.yml file:
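A sketch of the service entry, nested under the services: key next to db (the management-enabled image ships with the web interface; the container name is illustrative):

  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"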

5672 is for RabbitMQ itself and 15672 is for the RabbitMQ web interface. After running docker-compose up you can open localhost:15672 and log into the RabbitMQ interface using the default credentials (username guest, password guest).

Next, we are going to create the Management Service.

Change directory to /Management-Service and create a /src folder and a Dockerfile.

Change directory to /src, create package.json, and install these dependencies: npm install amqplib body-parser cors dotenv express hashids mysql2 nodemon sequelize.

You should add "start": "nodemon -L index.js" inside package.json’s scripts section so we can start the project with nodemon using npm start. Nodemon is a tool that helps develop Node.js-based applications by automatically restarting the node application when file changes in the directory are detected.
Now paste this inside the Dockerfile:
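A minimal sketch matching the description that follows (the node:latest tag and the /nodeapp/src working directory are assumptions consistent with the bind mount set up later):

FROM node:latest

WORKDIR /nodeapp/src

COPY ./src/package.json .
RUN npm install

# Move dependencies one level up so the bind mount on /nodeapp/src
# doesn't shadow them; Node resolves modules by walking up directories.
RUN mv node_modules ../

CMD ["npm", "start"]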

This will pull the latest Node.js image, copy package.json inside the container, and install the dependencies; then we move /node_modules outside of the /src folder.

You are probably wondering why we are moving node_modules. The reason is that we want to be able to change code on our host machine and see the changes immediately reflected inside the container.

That’s why we are going to bind mount /src to /nodeapp/src inside docker-compose.yml. A bind mount means the Docker container can access and reference the host machine’s filesystem.

If we change the files within the /src folder through the container, it would also affect the host’s file system.

This is what would happen if we didn’t move /node_modules:

  1. We copy code from /src to the container’s /src.
  2. The container runs npm install and the /node_modules dependencies are created.
  3. Because we are binding /src from the host machine to the container’s /src, it would overwrite everything, including /node_modules. This means our container would depend on the /node_modules located on our host machine, and the RUN npm install command wouldn’t make much sense. Anyone who clones our repository won’t have /node_modules on their machine because they never ran npm install, so the bind would copy only /src without the dependencies.

Now we need to specify the configuration and needed technologies for the Management Service inside docker-compose.yml.
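A sketch of the service entry, again under services: (the port follows the request example later in the article; the environment variable names are illustrative assumptions):

  management-service:
    build: ./Management-Service
    container_name: management-service
    volumes:
      - ./Management-Service/src:/nodeapp/src
    ports:
      - "8081:8081"
    environment:
      - DB_HOST=mysqldb
      - DB_USER=root
      - DB_PASSWORD=123456
      - RABBITMQ_HOST=rabbitmq
    depends_on:
      - db
      - rabbitmq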

Let’s create /services/MQService.js, where we define the functions for connecting to RabbitMQ and publishing messages to the queue.
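A sketch of what MQService.js can look like (the hostname matches the rabbitmq compose service name; guest/guest are RabbitMQ’s defaults):

const amqp = require("amqplib");

const QUEUE = "shortUrl";

// Connection settings for the RabbitMQ server
const rabbitSettings = {
  protocol: "amqp",
  hostname: process.env.RABBITMQ_HOST || "rabbitmq",
  port: 5672,
  username: "guest",
  password: "guest",
};

let channel = null;

// Connect to RabbitMQ and create the queue if it doesn't exist yet
async function connect() {
  const conn = await amqp.connect(rabbitSettings);
  channel = await conn.createChannel();
  await channel.assertQueue(QUEUE);
}

// Publish a message to the queue; `type` differentiates create/delete
async function publishToQueue(message, type) {
  channel.sendToQueue(QUEUE, Buffer.from(JSON.stringify({ type, message })));
}

module.exports = { connect, publishToQueue };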

The code is pretty much self-explanatory: we provide settings for connecting to the RabbitMQ server, and channel.assertQueue(QUEUE) creates the queue named 'shortUrl' (if it doesn’t exist already).
publishToQueue accepts the message we want to send and a type, which can be any string we want. We will use this to differentiate the messages meant for creating and deleting URLs.

Create index.js inside /src with the following content:
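A sketch of index.js under the assumptions above (the Hashids salt and the database name urlshortener are placeholders; field names follow the realUrl column from the setup script):

require("dotenv").config();
const express = require("express");
const bodyParser = require("body-parser");
const cors = require("cors");
const { Sequelize, DataTypes } = require("sequelize");
const Hashids = require("hashids"); // some hashids versions need require("hashids/cjs")
const { connect, publishToQueue } = require("./services/MQService");

const sequelize = new Sequelize("urlshortener", process.env.DB_USER, process.env.DB_PASSWORD, {
  host: process.env.DB_HOST,
  dialect: "mysql",
});

// Maps to the `urls` table created by the startup script
const Url = sequelize.define("url", { realUrl: DataTypes.STRING });

const hashids = new Hashids("url-shortener-salt", 9); // illustrative salt and length

const app = express();
app.use(cors());
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

// Create a short URL from a real one
app.post("/", async (req, res) => {
  const realUrl = req.body.realUrl || req.body.realURL; // the article uses both spellings
  const url = await Url.create({ realUrl });
  const result = {
    id: url.id,
    realUrl: url.realUrl,
    shortUrl: `http://localhost:8080/${hashids.encode(url.id)}`,
  };
  await publishToQueue(result, "create");
  res.json(result);
});

// Delete a short URL by id
app.delete("/:id", async (req, res) => {
  const url = await Url.findByPk(req.params.id);
  if (!url) return res.sendStatus(404);
  await url.destroy();
  await publishToQueue({ hash: hashids.encode(url.id) }, "delete");
  res.sendStatus(200);
});

connect().then(() => app.listen(8081));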

First, we are importing the necessary libraries: Express.js for basic routing and middleware, bodyParser for handling form data in POST requests, dotenv so we can load environment variables into process.env (docker-compose.yml will pass the environment variables in), Sequelize as the ORM, cors for handling CORS requests, and Hashids for generating short unique hashes. We’ll have two routes:
- POST '/' - Create URL
- DELETE '/:id' - Delete URL

createUrl first creates an entry in the database, then we create a hash based on the id of the newly created record, publish the Url object to the queue, and return it to the user as the response.

Let’s test our routes. Perform a POST request with the parameter realUrl.

POST / HTTP/1.1
Host: localhost:8081
Content-Type: application/x-www-form-urlencoded

realUrl=www.google.ba

If the request is successful, you will see the shortUrl queue created in the RabbitMQ GUI. Go to Queues -> Get Message(s) and you should see your Url object.

Redirection service

Let’s create the same structure as for the Management Service. The Dockerfile is going to be the same.

Change directory to /src, create package.json, and install these dependencies: npm i amqplib body-parser cors dotenv express nodemon redis.

Make sure to add "start": "nodemon -L index.js" inside package.json. To be able to spin up a container for this microservice, let’s add the service to the docker-compose.yml file.
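A sketch of both entries under services: (the names are illustrative; redis-store is the Redis container referenced below):

  redirection-service:
    build: ./Redirection-Service
    container_name: redirection-service
    volumes:
      - ./Redirection-Service/src:/nodeapp/src
    ports:
      - "8080:8080"
    depends_on:
      - rabbitmq
      - redis-store

  redis-store:
    image: redis
    container_name: redis-store
    ports:
      - "6379:6379"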

As stated in the task description, we have to store the URLs in Redis, which is why we added redis-store too. We need to consume messages from RabbitMQ, so let’s create MQService.js again; this time we don’t need to publish anything to the queue, we only need to consume the messages from the queue and store them in Redis.
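A sketch of the consumer side, mirroring the connection settings from the Management Service:

const amqp = require("amqplib");

const QUEUE = "shortUrl";

const rabbitSettings = {
  protocol: "amqp",
  hostname: process.env.RABBITMQ_HOST || "rabbitmq",
  port: 5672,
  username: "guest",
  password: "guest",
};

// Consume messages from the queue and hand each one to `callback`
async function consumeQueue(callback) {
  const conn = await amqp.connect(rabbitSettings);
  const channel = await conn.createChannel();
  await channel.assertQueue(QUEUE);
  channel.consume(QUEUE, (msg) => {
    if (msg !== null) {
      callback(JSON.parse(msg.content.toString()));
      channel.ack(msg);
    }
  });
}

module.exports = { consumeQueue };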

consumeQueue passes each message to our callback, which will be defined in index.js.

Let’s create index.js:
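A sketch of index.js: the rate limiter keeps a Redis counter per hash with a 120-second expiry (the key prefix and the message shape follow the earlier sketches):

require("dotenv").config();
const express = require("express");
const redis = require("redis");
const { consumeQueue } = require("./services/MQService");

const app = express();

// node-redis v4 client; the hostname matches the compose service name
const client = redis.createClient({ url: "redis://redis-store:6379" });

// Allow at most 10 redirects per 120 seconds for a given hash
async function isRateLimited(hash) {
  const hits = await client.incr(`rate:${hash}`);
  if (hits === 1) await client.expire(`rate:${hash}`, 120);
  return hits > 10;
}

app.get("/:hash", async (req, res) => {
  const realUrl = await client.get(req.params.hash);
  if (!realUrl) return res.sendStatus(404); // unknown short URL
  if (await isRateLimited(req.params.hash)) return res.sendStatus(429);
  res.redirect(302, realUrl);
});

async function start() {
  await client.connect();
  // Store created URLs in Redis, remove deleted ones
  await consumeQueue(async ({ type, message }) => {
    if (type === "create") {
      await client.set(message.shortUrl.split("/").pop(), message.realUrl);
    } else if (type === "delete") {
      await client.del(message.hash);
    }
  });
  app.listen(8080);
}

start();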

Now we are almost ready, but if you clone this repo and start the containers, one of the services is going to fail.

The reason is that our Management-Service and Redirection-Service do not wait for RabbitMQ/MySQL, so they try to connect before those services are ready.

In docker-compose.yml we have the depends_on attribute, but it only waits for the other container to be up, not for the process inside it to be ready. To fix this, we can either restart the failed container or handle the waiting directly within the JS files:

// Try to connect to RabbitMQ until successful
let conn = null;
do {
  try {
    conn = await amqp.connect(rabbitSettings);
  } catch (e) {
    // Connection refused: wait a second before retrying
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
} while (conn === null);

The third option is to use a simple script that will wait for the containers to be ready.

When sending POST requests, be sure to use JSON or form-urlencoded bodies. There is a chance that you will need to add additional middleware to handle a form-data request body.

The source code is available in this GitHub Repository.
