Architecture Weekly #190 - Queuing, Backpressure, Single Writer and other useful patterns for managing concurrency
Welcome to the new week!
In the last edition, we explored why connection pooling is crucial for using databases efficiently.
Now, let's take a step further and perform a thought experiment: How would you actually implement a connection pool? We'll use Node.js and TypeScript as an illustration, as we need to choose a specific stack to make the concepts concrete.
If you think, “Well, I won’t ever do it myself”, don’t worry: the intention here isn’t to drown you in imaginary code. Instead, I will show you how these micro-scale coding considerations can impact macro-scale design analysis.
As we go, we’ll learn a few valuable concepts, like Queuing, Backpressure and the Single Writer pattern, and discuss why queuing can be useful in managing concurrency and beyond!
Connection Pool
Let’s start where we ended, with our database access code:
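As a refresher, it could look more or less like this. This is a sketch using the popular `pg` driver; the details may differ from the previous article’s exact snippet, and the connection string and query are illustrative:

```typescript
import { Pool } from 'pg';

// A shared pool of reusable connections, capped at 100
const pool = new Pool({
  connectionString: 'postgres://localhost:5432/app', // illustrative
  max: 100,
});

const getUserById = async (id: string) => {
  // Borrow a connection from the pool...
  const client = await pool.connect();
  try {
    const result = await client.query('SELECT * FROM users WHERE id = $1', [id]);
    return result.rows[0];
  } finally {
    // ...and always return it, even if the query throws
    client.release();
  }
};
```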
As we discussed in detail in the previous article, a connection pool manages reusable database connections shared among multiple requests. Instead of opening and closing a connection for each request, we borrow a connection from the pool and return it once the database operation completes.
Thanks to that, we cut the latency required to establish a connection and get a more resilient and scalable solution (at least if we’re not into serverless).
Stop for a moment, take a sheet of paper and pen, and try to outline how you would implement such a connection pool.
Done? Now, think about the potential limitations of your approach and how you might address them.
Ready? Let’s compare that with my thoughts!
Basic definition
Let’s start by defining the basic API. It could look like this:
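Here is a possible shape; since we’re designing this API ourselves, the names are illustrative:

```typescript
type ConnectionPoolOptions = {
  connectionString: string;
  // Hard upper bound on simultaneously open connections
  maxConnections: number;
};

interface Connection {
  execute<T>(sql: string, params?: unknown[]): Promise<T>;
  close(): Promise<void>;
}

interface ConnectionPool {
  // Borrow a connection, opening a new one if we're under the limit
  acquire(): Promise<Connection>;
  // Return a borrowed connection so others can reuse it
  release(connection: Connection): void;
  // Close all connections and stop serving new acquires
  end(): Promise<void>;
}
```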
There is not much to explain besides that we want to manage a pool of connections to the database identified by the connection string and to limit the number of open connections. Why?
Imagine a service like a payment gateway that handles transactions for a popular online retailer. During peak shopping times, such as Black Friday, the system experiences a huge increase in payment requests. Each transaction requires a secure connection to the database to verify payment details and update the order status.
Yet, the database has its limits, so we might need to squeeze all that traffic into a limit of, say, 100 concurrent connections. When all 100 connections are occupied processing transactions, new incoming requests must be managed effectively.
We could try to:
- Fail Immediately: The system could reject new requests outright, leading to a poor user experience and lost sales.
- Retry Aggressively: The requests might enter a loop, repeatedly attempting to acquire a connection, which can further strain the system.
The naive implementation of a fail-fast strategy could look as such:
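Here is a minimal sketch of that idea, built on the interface above. `openConnection` is a hypothetical helper standing in for whatever driver call actually opens a physical connection:

```typescript
// Hypothetical helper: opens a single physical connection
declare function openConnection(connectionString: string): Promise<Connection>;

class FailFastPool implements ConnectionPool {
  private readonly available: Connection[] = [];
  private inUse = 0;

  constructor(private readonly options: ConnectionPoolOptions) {}

  async acquire(): Promise<Connection> {
    // Reuse an idle connection when one exists
    const idle = this.available.pop();
    if (idle) {
      this.inUse++;
      return idle;
    }
    // Open a new one only while we're under the limit...
    if (this.inUse < this.options.maxConnections) {
      this.inUse++;
      return openConnection(this.options.connectionString);
    }
    // ...otherwise fail fast instead of waiting
    throw new Error('Connection pool exhausted!');
  }

  release(connection: Connection): void {
    this.inUse--;
    this.available.push(connection);
  }

  async end(): Promise<void> {
    await Promise.all(this.available.map((connection) => connection.close()));
  }
}
```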
Now, while retrying in a loop or even failing immediately can be a temporary solution, for obvious reasons it won’t scale and will result in a subpar experience.
Consider a busy restaurant with limited seating. When all tables are occupied, you may be asked to wait near the entrance. As tables become available, the waiter invites you to be seated. This ensures that the restaurant operates smoothly without overcrowding or turning away customers.
We could do the same with our connection pool implementation. Instead of rejecting new requests or letting them endlessly retry (which could crash your system), they’re placed in a queue and served in order as connections are released.
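To give a taste of the core idea before we dive into the details, here’s a simplified sketch of a queue-based `acquire`, again using the hypothetical `openConnection` helper from above:

```typescript
class QueueingPool {
  private readonly available: Connection[] = [];
  // Requests waiting for a connection, in arrival (FIFO) order
  private readonly waiting: Array<(connection: Connection) => void> = [];
  private inUse = 0;

  constructor(private readonly options: ConnectionPoolOptions) {}

  async acquire(): Promise<Connection> {
    const idle = this.available.pop();
    if (idle) {
      this.inUse++;
      return idle;
    }
    if (this.inUse < this.options.maxConnections) {
      this.inUse++;
      return openConnection(this.options.connectionString);
    }
    // Pool exhausted: instead of failing, wait in line until
    // someone releases a connection
    return new Promise<Connection>((resolve) => this.waiting.push(resolve));
  }

  release(connection: Connection): void {
    // Hand the connection straight to the first waiter...
    const next = this.waiting.shift();
    if (next) {
      next(connection); // the connection stays "in use"
      return;
    }
    // ...or park it as idle when nobody is waiting
    this.inUse--;
    this.available.push(connection);
  }
}
```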
The next part of the article is for paid users. If you’re not one yet, until the end of August you can use a free month’s trial: https://www.architecture-weekly.com/b3b7d64d. You can check it out and decide if you like it and want to stay. I hope that you will!