
Building an async event processor of every man's dreams
Every time we build an event-driven system, the first thing we want to do is implement the logic right inside the event handler and call it a day. In reality, such a naive approach only works in dev environments, with little to no load at all. In production, things become much more complex.
In the world of high-load distributed systems, where services talk to each other via events, be it Kafka, webhooks, or anything else, the same challenges come up again and again:
- Events arrive concurrently and not necessarily in the order they were sent.
- Duplicate deliveries of the same event are possible.
- Events may and will arrive in bursts, producing beautiful and charming load spikes in your observability tool.
- If something goes wrong on our end, this must be communicated to the other side, so it may retry the event later.
- We may want to store the history of events, so we can analyze and re-process them later.
- Similar events may be batched together to increase throughput.
Almost none of these challenges can be solved synchronously, so a more sophisticated approach is needed. Each requirement has a solution, and put together they form a powerful event processor.
Let's build one using Go and PostgreSQL.
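Before diving into the details, here is a minimal sketch of the persistent queue everything else will sit on. The table name `events`, its columns, and the `status` values are assumptions for illustration, not a prescribed schema:

```go
package queue

import (
	"context"
	"database/sql"
)

// Hypothetical schema: every incoming event is stored with its payload, a
// status, and a retry counter, so duplicates can be rejected and the history
// can be analyzed or re-processed later.
const schema = `
CREATE TABLE IF NOT EXISTS events (
    id          BIGSERIAL PRIMARY KEY,
    external_id TEXT UNIQUE,             -- idempotency key, rejects duplicate deliveries
    topic       TEXT        NOT NULL,
    payload     JSONB       NOT NULL,
    status      TEXT        NOT NULL DEFAULT 'pending', -- pending | processing | done | failed
    attempts    INT         NOT NULL DEFAULT 0,
    received_at TIMESTAMPTZ NOT NULL DEFAULT now()
)`

// Migrate creates the queue table; db is assumed to be opened with any
// PostgreSQL driver (lib/pq, pgx via database/sql, etc.).
func Migrate(ctx context.Context, db *sql.DB) error {
	_, err := db.ExecContext(ctx, schema)
	return err
}
```

A single table is enough here: it serves as the queue, the deduplication store, and the event history at the same time.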
The first line of defense is the HTTP endpoint that accepts events. It does not process anything; it only validates the event and puts it into the PostgreSQL-backed queue, and the status code tells the sender what happened (a sketch of such a handler follows this list):
- The queue has grown beyond a safe size: respond with 429 Too Many Requests, so the sender backs off and retries later.
- The event could not be inserted into the queue: respond with 500 Internal Server Error, which also tells the sender to retry later.
- The event is misshapen: respond with 400 Bad Request, because retrying it will never help.
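Mapped onto code, the ingestion endpoint could look roughly like the sketch below. The `Store` interface, its `QueueDepth` and `Enqueue` methods, and the `maxQueueDepth` threshold are all assumptions made up for illustration:

```go
package queue

import (
	"context"
	"encoding/json"
	"net/http"
)

// maxQueueDepth is an illustrative back-pressure threshold; tune it for your load.
const maxQueueDepth = 100_000

// Store is assumed to be backed by the events table from the schema above.
type Store interface {
	QueueDepth(ctx context.Context) (int, error)
	Enqueue(ctx context.Context, topic string, payload json.RawMessage) error
}

type incomingEvent struct {
	Topic   string          `json:"topic"`
	Payload json.RawMessage `json:"payload"`
}

func IngestHandler(store Store) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// Misshapen events => 400 Bad Request: retrying will never help.
		var ev incomingEvent
		if err := json.NewDecoder(r.Body).Decode(&ev); err != nil || ev.Topic == "" {
			http.Error(w, "malformed event", http.StatusBadRequest)
			return
		}

		// High queue size => 429 Too Many Requests, so the sender backs off.
		// The depth check is best-effort: if it fails, we still try to enqueue.
		depth, err := store.QueueDepth(r.Context())
		if err == nil && depth >= maxQueueDepth {
			http.Error(w, "queue is full, retry later", http.StatusTooManyRequests)
			return
		}

		// Errors putting the event into the queue => 500 Internal Server Error.
		if err := store.Enqueue(r.Context(), ev.Topic, ev.Payload); err != nil {
			http.Error(w, "failed to enqueue event", http.StatusInternalServerError)
			return
		}

		// Accepted for asynchronous processing.
		w.WriteHeader(http.StatusAccepted)
	}
}
```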
On the consumer side we have to prevent parallel execution: several workers pull from the same queue, and no two of them should ever process the same event at the same time. There are two common ways to achieve that: lock the rows a worker is currently processing, or partition the events by some key (an aggregate ID, a tenant, a topic) so every worker owns its own slice of the queue. Advisory locks are best avoided here, as they can survive client crashes; ordinary row locks taken inside a transaction disappear the moment the transaction does.
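One way to implement the locking approach is to let each worker claim a batch of pending rows with `SELECT ... FOR UPDATE SKIP LOCKED`: rows locked by another worker are silently skipped, and if a worker crashes mid-batch, its transaction dies and the locks go with it. A rough sketch, reusing the hypothetical `events` table from above:

```go
package queue

import (
	"context"
	"database/sql"
	"encoding/json"
)

type PendingEvent struct {
	ID      int64
	Topic   string
	Payload json.RawMessage
}

// ClaimBatch locks up to `limit` pending events so that concurrent workers
// never pick up the same row. The caller is expected to process the batch and
// mark the rows done or failed within the same transaction.
func ClaimBatch(ctx context.Context, tx *sql.Tx, limit int) ([]PendingEvent, error) {
	rows, err := tx.QueryContext(ctx, `
		SELECT id, topic, payload
		FROM events
		WHERE status = 'pending'
		ORDER BY id
		LIMIT $1
		FOR UPDATE SKIP LOCKED`, limit)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var batch []PendingEvent
	for rows.Next() {
		var ev PendingEvent
		if err := rows.Scan(&ev.ID, &ev.Topic, &ev.Payload); err != nil {
			return nil, err
		}
		batch = append(batch, ev)
	}
	return batch, rows.Err()
}
```

The database itself acts as the coordinator here, so no external lock manager is needed. Claiming events in batches also helps with the throughput point from the challenge list: similar events that land in one batch can be processed together.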
That's the gist of it: instead of handling events inline, we accept them over HTTP, answer with an honest status code, persist them into a PostgreSQL-backed queue, and let a pool of workers claim and process them without stepping on each other. Out-of-order delivery, duplicates, bursts, retries, history, and batching are all handled by the same design, built with nothing more than Go and PostgreSQL.