Last week, we discussed Distributed Locking. Today, we’ll continue with locking, but we’ll approach it differently: with a full backflip.
We’ll discuss how to implement a simple in-process lock. What’s special about it? We’ll use the queue broker we built in Ordering, Grouping and Consistency in Messaging systems, and we’ll explain why. Then, we’ll take it back to a higher level, showing how your business application can benefit from it.
Does it sound intriguing? I hope it does, as we aim to learn and have fun.
A quick recap about locking
Last week, we learned that locking is the simplest way to handle exclusive access to a resource. Using it, the actor says, “Until I finish, no other actor can access this resource!” Others need to wait or fail immediately. Who’s the actor? Think: user, service, node, processor, etc.
Without locks, you can get unpredictable states - like a read model flipping from correct to incorrect or a file partially overwritten by multiple workers. Locks sacrifice a bit of parallelism and performance for the certainty that no two actors update the same resource simultaneously. In many cases, that’s the safest trade-off, especially if data correctness is a key factor.
The basic flow looks like this:
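In pseudocode terms, it’s an acquire, do the work, release cycle. Here’s an illustrative sketch using the API we’ll define in a moment (updateUserProfile is a hypothetical function standing in for your critical section):

const lockId = 'user-123';

await lock.acquire({ lockId }); // 1. Get exclusive access, or wait until the lock is free
try {
  await updateUserProfile();    // 2. Do the protected work; no other actor can interleave
} finally {
  await lock.release({ lockId }); // 3. Always release, even when the work fails
}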
Lock definition
Let’s now define a high-level API for locking. I’ll use TypeScript again because I like it, and I have to choose some language.
It can look as follows:
interface Lock {
  acquire(options: LockOptions): Promise<void>;
  tryAcquire(options: LockOptions): Promise<boolean>;
  release(options: LockOptions): Promise<boolean>;
  withAcquire: <Result = unknown>(
    handle: () => Promise<Result>,
    options: LockOptions,
  ) => Promise<Result>;
}

type LockOptions = { lockId: string };
It has four methods:
acquire - either gets access to the resource or waits if the lock is already held,
tryAcquire - either acquires the lock or returns false if the lock is already held,
release - marks the locked resource as available for others,
withAcquire - a wrapper that runs the provided code between acquire and release calls.
All methods take options with a specific lock identifier. It can represent the locked resource id or any other arbitrary text.
Of course, we could be more creative and add more options, such as timeouts, but let’s keep it focused.
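To make the API concrete, here’s a hedged usage sketch (InProcessLock is the factory we’ll implement below; rebuildReadModel is a hypothetical function I’m using for illustration):

const lock: Lock = InProcessLock();

// withAcquire guarantees the lock is released even if the handler throws
const result = await lock.withAcquire(
  async () => rebuildReadModel('user-123'),
  { lockId: 'user-123' },
);

Notice how withAcquire removes the try/finally boilerplate from the caller; that’s usually the method you’ll reach for in application code.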
Challenges with implementing locking
When implementing locking, we need to guarantee that only one actor can acquire a given lock at a time; the acquisition step itself is vulnerable to race conditions. We must also make it thread-safe, ensuring correctness when multiple threads try to lock the same record.
The last part is simpler in TypeScript, as it runs on the JavaScript event loop and is single-threaded. Node.js authors bet that handling tasks sequentially (if done well) could be faster than spawning and managing multiple threads. Per MDN:
A very interesting property of the event loop model is that JavaScript, unlike a lot of other languages, never blocks. Handling I/O is typically performed via events and callbacks, so when the application is waiting for an IndexedDB query to return or a fetch() request to return, it can still process other things like user input.
So if you’re working in a language with a classical threading model, like C# or Java, be aware that you’d need to use thread-safe constructs like ConcurrentDictionary or ConcurrentHashMap.
Luckily, in Ordering, Grouping and Consistency in Messaging systems, we implemented the Queue Broker capable of handling a single task per group of tasks. We used it to handle idempotency, which is a similar, but slightly different goal.
To recap, our Queue Broker allows multiple tasks to be enqueued and handled asynchronously. It works similarly to Amazon SQS FIFO: it ensures that tasks with the same task group ID are processed in strict order.
Groups are independent, so tasks for one group don’t block tasks in another; subsequent tasks from the same group wait in the queue until their turn. Our Queue Broker uses the single-writer pattern to ensure that scheduling happens sequentially. We can enqueue multiple tasks, but they’re not processed immediately; the background queue processing picks them up. Thanks to that, we can control the concurrency.
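As a reminder of how that looks in practice, here’s a hedged sketch; the broker variable, the enqueue signature, and handlePayment/sendReceipt are my assumptions for illustration, not the exact API from that article:

// Both tasks share the task group 'order-123', so they run strictly one
// after another; tasks enqueued for other groups proceed in parallel
broker.enqueue(async () => handlePayment('order-123'), { taskGroupId: 'order-123' });
broker.enqueue(async () => sendReceipt('order-123'), { taskGroupId: 'order-123' });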
Let’s use it in our case!
Implementing Locks
Let’s use what we’ve learned so far, starting with the Lock setup:
function InProcessLock(): Lock {
  const taskProcessor = new QueueBroker({
    maxActiveTasks: Number.MAX_VALUE,
    maxQueueSize: Number.MAX_VALUE,
  });

  // Map to store ack functions of currently held locks: lockId -> ack()
  const locks = new Map<string, () => void>();

  return {
    // (...) implementation will go here
    // acquire: (...),
    // tryAcquire: (...),
    // release: (...),
    // withAcquire: (...)
  };
}
We defined the InProcessLock function that creates a new Lock. It sets up a queue broker without any limits on parallel processing; the only concurrency limit we want to enforce is per specific lock id.
We’re also keeping the currently held locks in a map, storing for each lock id the ack callback that will release it.
Acquiring the Lock
Now, let’s implement the acquire function:
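Before that, here’s a minimal sketch of one way acquire could work with this setup. It’s not the actual implementation: it assumes the QueueBroker exposes an enqueue method that takes a task group id and hands the handler an ack callback to signal task completion, which are assumptions about its API:

// A sketch, assuming enqueue(handler, { taskGroupId }) and an ack callback
const acquire = (options: LockOptions): Promise<void> =>
  new Promise<void>((resolve) => {
    // Tasks with the same taskGroupId run one at a time, so enqueueing with
    // taskGroupId = lockId serializes all acquire attempts for that lock
    taskProcessor.enqueue(
      async ({ ack }) => {
        // We now hold the lock: remember how to release it...
        locks.set(options.lockId, ack);
        // ...and let the awaiting caller proceed
        resolve();
      },
      { taskGroupId: options.lockId },
    );
  });

Under those assumptions, release would look up the stored ack for the lock id and call it, completing the queued task and letting the next waiting acquire for that lock run.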