I’m trying to figure out the best solution for the problem below. Any help would be great.
So basically I have a service (which can be scaled horizontally) that listens on a queue. Each message received is dispatched as a job and processed concurrently.
The job will, in this order:
- Generate some data based on the message payload
- Cache the data on Redis
- Send the data to another service
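In (simplified, hypothetical) code, the job looks roughly like this — all names are placeholders, and the Redis cache and downstream service are stubbed with in-memory stand-ins just to show the shape of the flow:

```python
import json

cache = {}       # stand-in for Redis
downstream = []  # stand-in for the next service

def generate_data(payload):
    # placeholder for the real data-generation step
    return {"id": payload["id"], "value": payload["value"].upper()}

def handle_message(message):
    payload = json.loads(message)
    data = generate_data(payload)  # 1. generate data from the payload
    cache[data["id"]] = data       # 2. cache it (a SET in real Redis)
    downstream.append(data)        # 3. send it to the next service
```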
My issue is when another message is received for the same logical record (the same table record, but with updated data).
When two or more messages with the same record id are being processed, I need to make sure that the latest version of the data is what ends up cached in Redis and sent to the next service; in other words, a job carrying an old version of the payload must not overwrite the latest one.
I’m thinking about using some distributed locking mechanism, but I’m not sure that’s efficient, especially since I want the latest version to be sent to the next service as quickly as possible.
Maybe there’s some way to cancel (or skip) a job with an outdated payload instead of locking around the whole job? Should I use Redis pub/sub to communicate between the service instances (when scaled out), or is there a better way?
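To make the skip idea concrete, here is a rough sketch of what I mean by version-gating the write: each message carries a monotonically increasing version (or timestamp), and a job only writes and forwards if it is newer than what is already cached. This is pure Python with an in-memory dict and a local lock standing in for Redis (in real Redis the compare-and-set would need a Lua script or WATCH/MULTI to be atomic across instances); all names here are mine, and the forward-to-downstream step would need its own versioning to be fully safe:

```python
import threading

cache = {}                      # record_id -> (version, data); stand-in for Redis
cache_lock = threading.Lock()   # stands in for Redis's atomicity guarantees

def write_if_newer(record_id, version, data):
    """Compare-and-set: write only if this payload is newer than what's cached.

    Returns True if the write happened, False if the payload was stale.
    """
    with cache_lock:
        current = cache.get(record_id)
        if current is not None and current[0] >= version:
            return False        # a newer (or equal) version already landed
        cache[record_id] = (version, data)
        return True

def handle_job(record_id, version, data, send):
    if write_if_newer(record_id, version, data):
        send(data)              # only forward if this job won the race
    # else: drop silently -- a newer job already sent its data
```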