I am trying to find a good approach for authentication and authorization of a user in a microservice-application.
Currently I have the following idea:
In regards to the sequence diagram I have the following questions:
- [step 12] and [step 16] check for a permission, which is necessary on the server side. Where are those permissions supposed to be stored?
- What about the note (after [step 20])? Possibilities:
- Issue an additional signed Token, that contains permissions / roles for the message service. This token can be acquired during [step 13].
- Re-issue existing JWT, include permissions / roles for the message service. This token can be acquired during [step 13].
- Issue a server-side request, that returns information about whether a certain GUI-area should be displayed.
If in 2.1 or 2.2 the client token is modified, the user can access GUI areas that should not be available to him. However, any subsequent service calls will still fail, because neither the modified nor the original token permits access to the corresponding server calls.
In 2.3 the microservice is queried each time the GUI has to decide whether to display a GUI-area or not.
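As an aside on options 2.1/2.2: a client that edits a signed token only invalidates its signature, which is exactly why the subsequent service calls fail. A minimal sketch of that check, using stdlib HMAC signing in place of a real JWT library (secret, claim names, and roles are illustrative):

```python
import hmac, hashlib, json, base64

SECRET = b"server-side-signing-key"  # known only to the auth service

def sign(payload):
    # Encode the payload and append an HMAC over it.
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify(token):
    # Recompute the HMAC; any change to the body breaks the match.
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token is rejected
    return json.loads(base64.urlsafe_b64decode(body))

token = sign({"user": "alice", "roles": ["message:read"]})
assert verify(token)["roles"] == ["message:read"]

# The client edits the payload to grant itself extra roles:
body, sig = token.rsplit(".", 1)
forged_body = base64.urlsafe_b64encode(
    json.dumps({"user": "alice", "roles": ["message:admin"]}).encode()
).decode()
assert verify(forged_body + "." + sig) is None  # server rejects it
```

The forged token may fool the GUI into showing extra areas, but every server-side check rejects it.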
I need advice on securing the interaction between an application App and a microservice RecordApp.
App is the main application, where all users are registered and do some activity. RecordApp is a utility application where App users can store records that are publicly visible in the App (like a sub-Twitter). RecordApp knows nothing about users, but each record (“tweet”) needs to have an author. I need to provide the ability to use RecordApp from the frontend without using the App as a proxy. I have some options to do this:
The first is: the user authenticates against the App server and receives a JWT (signed with App’s private key) stating that the user is named “User1” and can create public records. The user then sends this token to RecordApp, which validates it (using App’s public key), sees that this user can create a record, and creates it. But if some hacker steals the user’s cookie, the hacker can post malicious records under the name “User1”. I could solve this by setting the httpOnly flag on the cookie, but App and RecordApp are on different domains, so that wouldn’t work.
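One common way to limit the damage of a stolen token is to make it short-lived and audience-scoped, so that it only works against RecordApp and only for a few minutes. A sketch of that idea, with stdlib HMAC standing in for the App's real asymmetric key pair (all names are illustrative):

```python
import hmac, hashlib, json, base64, time

SHARED_SECRET = b"app-recordapp-secret"  # stands in for App's RSA key pair

def issue_token(user, audience, ttl_seconds=300):
    # Short TTL limits how long a stolen token stays useful.
    claims = {"sub": user, "aud": audience, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SHARED_SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def validate_token(token, expected_audience):
    body, sig = token.rsplit(".", 1)
    good = hmac.new(SHARED_SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        return None  # forged signature
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["aud"] != expected_audience or claims["exp"] < time.time():
        return None  # wrong service, or token expired
    return claims

t = issue_token("User1", audience="RecordApp")
assert validate_token(t, "RecordApp")["sub"] == "User1"
assert validate_token(t, "SomeOtherService") is None  # audience mismatch
```

The `sub`, `aud`, and `exp` claim names follow the registered JWT claims; a production setup would use a real JWT library with RS256 rather than this HMAC sketch.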
What do I do to secure such microservice?
The second option: as the user passes authentication on the App server, send them a signing key that will be used to sign a JWT containing the record contents. If a hacker is able to steal it via a man-in-the-middle, they can just post messages in the user’s name. But what do I do to protect the key then?
Thanks in advance!
The IT architect at my company has just presented the high-level architecture principles of the new system we plan to work on. Globally, this is a backend system exposing APIs for a desktop application (point of sale) on a private network and a web application on the internet (administration/back-office management).
One of the things that seems weird to me is that all front-end APIs will be decoupled from the real service implementations by a Kafka bus.
Here is a diagram of the general architecture:
The arguments for using the Kafka message broker are:
- to separate the entry point into the system from the service implementations
- to allow easier scaling of components with the pub/sub pattern
- to let the broker act as a persistent message buffer when services are not able to process events as they come in
I think it will add complexity to have all services working in this asynchronous way. What do you think of this kind of architecture?
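The complexity concern is concrete: a front-end API that must return a response over a bus needs correlation IDs and a reply channel, machinery a direct call would not. A minimal in-memory sketch of that request/reply-over-a-bus pattern (the queues stand in for Kafka topics; all names are illustrative):

```python
import queue, uuid, threading

requests = queue.Queue()   # stands in for the request topic on the bus
replies = queue.Queue()    # stands in for a reply topic

def service_worker():
    # Service implementation, decoupled from the API entry point.
    while True:
        msg = requests.get()
        if msg is None:       # shutdown signal
            break
        replies.put({"correlation_id": msg["correlation_id"],
                     "result": msg["payload"].upper()})

def gateway_call(payload):
    # The API layer must tag each request with a correlation ID
    # and match the reply back to the caller.
    cid = str(uuid.uuid4())
    requests.put({"correlation_id": cid, "payload": payload})
    while True:
        reply = replies.get()
        if reply["correlation_id"] == cid:
            return reply["result"]

worker = threading.Thread(target=service_worker)
worker.start()
result = gateway_call("hello")
requests.put(None)            # stop the worker
worker.join()
print(result)  # HELLO
```

The bookkeeping in `gateway_call` is the extra cost of the asynchronous design; in exchange, the bus buffers requests when the service is slow or down.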
I want to implement an expiration functionality for a domain entity. For example:
User queries a Gift Code that should expire somewhere around 30 mins after creation.
I see several approaches to this problem:
- Create a scheduler inside the service which will run every 30 minutes in a separate execution thread and expire all Gift Codes. This approach seems suboptimal when several instances of the service are running: they will all race for the update in the database.
- Create a scheduler service which will ping the GiftCode service every 30 minutes. The request will be sent to a single instance of the GiftCode service, selected by a load balancer/discovery server.
- Expire a Gift Code during its next query/update. This approach is not quite applicable because actions (e.g. email sending) might later be added that need to run upon expiration.
My question is whether there might be a better solution to this problem than the three above.
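For concreteness, option 3 (lazy expiration on access) can be sketched as below, with an in-memory store and an injectable clock standing in for the database and real time. The limitation the question mentions is visible here: the expiration side effect only fires when someone touches the code.

```python
import time

EXPIRY_SECONDS = 30 * 60  # 30 minutes

class GiftCodeStore:
    def __init__(self, clock=time.time):
        self.codes = {}          # code -> created_at timestamp
        self.clock = clock       # injectable clock, so tests need not wait
        self.expired_log = []    # stands in for the "send email" hook

    def create(self, code):
        self.codes[code] = self.clock()

    def get(self, code):
        created = self.codes.get(code)
        if created is None:
            return None
        if self.clock() - created > EXPIRY_SECONDS:
            # Expire lazily, on next access; side effects run only here.
            del self.codes[code]
            self.expired_log.append(code)
            return None
        return code

# Simulated clock to avoid waiting 30 minutes:
now = [0.0]
store = GiftCodeStore(clock=lambda: now[0])
store.create("ABC123")
assert store.get("ABC123") == "ABC123"
now[0] += 31 * 60
assert store.get("ABC123") is None          # expired on access
assert store.expired_log == ["ABC123"]      # side effect ran late
```

A code that is never queried again would never trigger the hook, which is why option 3 alone is insufficient once post-expiration actions are added.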
We are working on an enterprise project following DDD architecture.
At the moment we have a microservice A that handles bookings. System for creating invoices is a 3rd party API.
As each booking arrives in microservice A, it creates and publishes a NewBookingCreated event to a global queue.
Now in order to consume that event, we have different options
A) Introduce a microservice B that will process the NewBookingCreated event, get all the booking information from microservice A, get client information from (say) microservice C, and then call the 3rd-party API to create an invoice.
B) Process the NewBookingCreated event inside microservice A: get client information from (say) microservice C and then call the 3rd-party API to create an invoice (all happening from microservice A).
Basically, we are interested in whether it is a good idea to let microservice A itself process the event and create the invoice.
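Either way, the handler has roughly the same shape; only where it lives differs. A sketch of that shape, with fakes standing in for microservice A, microservice C, and the 3rd-party API (all class and method names are illustrative):

```python
def handle_new_booking_created(event, booking_client, customer_client, invoice_api):
    """Consume a NewBookingCreated event and create an invoice.

    booking_client, customer_client, and invoice_api are illustrative
    interfaces for microservice A, microservice C, and the 3rd-party API.
    """
    booking = booking_client.get_booking(event["booking_id"])
    client = customer_client.get_client(booking["client_id"])
    return invoice_api.create_invoice(booking=booking, client=client)

# Minimal fakes to show the flow end to end:
class FakeBookings:
    def get_booking(self, booking_id):
        return {"id": booking_id, "client_id": 7, "amount": 100}

class FakeCustomers:
    def get_client(self, client_id):
        return {"id": client_id, "name": "ACME"}

class FakeInvoiceApi:
    def create_invoice(self, booking, client):
        return {"invoice_for": client["name"], "amount": booking["amount"]}

result = handle_new_booking_created(
    {"booking_id": 42}, FakeBookings(), FakeCustomers(), FakeInvoiceApi())
assert result == {"invoice_for": "ACME", "amount": 100}
```

In option A the `booking_client` is a real network call to microservice A; in option B it collapses into an in-process lookup, which is the essence of the trade-off.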
Let’s assume the following architecture:
- EventCreatorApp creates an Event, which is logged to the DB and sent to WorkerApp for processing.
- WorkerApp processes the event, handles failures, etc.
EventCreatorApp --> [ DB ] | ---------> WorkerApp
When WorkerApp is done, I want to update the event row in the DB with the outcome of running that event.
My question: Which app should write the outcome to the database?
- If WorkerApp writes the outcome to the DB, the logic is nicely decoupled (one app creates events, one app executes events), but I have two apps writing to the DB. What’s worse, I want to deploy WorkerApp on an AWS Lambda, and that would mean opening a new connection to the DB for each event processed.
- If EventCreatorApp writes the outcome to the DB, I don’t have any DB access problems, but it feels like I’m pushing into EventCreatorApp some logic that doesn’t really belong there, and in general I’m coupling the two apps a bit, since EventCreatorApp would have to wait for a response.
Which solution is best?
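One way to frame the trade-off: if the outcome travels back as a message, EventCreatorApp can remain the sole DB writer without synchronously waiting on WorkerApp. A sketch of that variant, with an in-memory queue and dict standing in for the transport and the events table (all names are illustrative):

```python
import queue

outcomes = queue.Queue()          # stands in for SQS/Kafka/etc.
event_table = {}                  # stands in for the events DB table

def creator_log_event(event_id, payload):
    # EventCreatorApp records the event before dispatching it.
    event_table[event_id] = {"payload": payload, "outcome": None}

def worker_process(event_id, payload):
    # WorkerApp (e.g. a Lambda) never touches the DB; it emits a message.
    outcomes.put({"event_id": event_id, "outcome": f"processed:{payload}"})

def creator_apply_outcomes():
    # EventCreatorApp drains the queue whenever convenient; no blocking wait.
    while not outcomes.empty():
        msg = outcomes.get()
        event_table[msg["event_id"]]["outcome"] = msg["outcome"]

creator_log_event(1, "hello")
worker_process(1, "hello")
creator_apply_outcomes()
assert event_table[1]["outcome"] == "processed:hello"
```

This keeps one DB writer and avoids a per-event Lambda connection, at the cost of eventual (rather than immediate) outcome visibility.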
Say there are two microservices (the example is simplified):
- PickupRequestService: lists pick-up requests of passengers
- DriverService: used by drivers to accept pick-up requests
With a completely decoupled communication method (via message queues), DriverService would make an async call to accept a pick-up request, e.g. PickupRequestService->accept(pickupId). This looks fine; however, for a few seconds it will cause other drivers to see pick-up requests that are already taken.
You could probably eventually send a compensating failure message to those hopeful drivers who accepted an already-taken pick-up request, e.g. “your request to accept the passenger has been rejected…”, but that won’t be a good UX.
My question is: for scenarios similar to this, where a state-changing operation is time-sensitive, is it okay to make synchronous inter-microservice REST calls? E.g. DriverService synchronously calls PickupRequestService->accept(pickupId), so that the state change is instantaneous and no other driver sees stale pick-up requests.
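Whichever transport is chosen, the accept operation itself must be an atomic compare-and-set inside PickupRequestService so that exactly one driver wins. A sketch, with an in-memory dict and a lock standing in for the service's datastore (a real implementation would use a conditional UPDATE or equivalent; all names are illustrative):

```python
import threading

class PickupRequestService:
    def __init__(self):
        self._lock = threading.Lock()
        self._requests = {}  # pickup_id -> driver_id (None while still open)

    def create(self, pickup_id):
        self._requests[pickup_id] = None

    def accept(self, pickup_id, driver_id):
        # Atomic check-and-assign: the first driver wins, the rest get False.
        with self._lock:
            if self._requests.get(pickup_id) is not None:
                return False  # already taken
            self._requests[pickup_id] = driver_id
            return True

svc = PickupRequestService()
svc.create("p1")
assert svc.accept("p1", "driver-A") is True
assert svc.accept("p1", "driver-B") is False  # loser gets an immediate answer
```

With a synchronous call, the losing driver learns the outcome in the response itself; with a queue, the same compare-and-set still decides the winner, but the loser learns later via the compensating message.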
I have a micro-service that sends files to clients on request.
It acquires these files by querying a database where other users have saved details of the file.
The micro-service queries the database periodically; if there are new files, it contacts Artifactory and downloads them to a shared storage that all instances of the micro-service use.
How can I ensure that only one instance queries and downloads files at a time?
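A common pattern for this is a lock row in the shared database: each instance tries to claim the row before polling, and only the winner queries and downloads. A sketch with SQLite standing in for the shared database, where the primary-key constraint arbitrates (table and column names are illustrative; real deployments add a lease expiry so a crashed holder does not block forever):

```python
import sqlite3

def try_acquire_lock(conn, name, owner):
    """Claim a named lock by inserting a row; the UNIQUE constraint arbitrates."""
    try:
        conn.execute("INSERT INTO locks (name, owner) VALUES (?, ?)",
                     (name, owner))
        conn.commit()
        return True
    except sqlite3.IntegrityError:
        return False  # another instance already holds the lock

def release_lock(conn, name, owner):
    conn.execute("DELETE FROM locks WHERE name = ? AND owner = ?",
                 (name, owner))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE locks (name TEXT PRIMARY KEY, owner TEXT)")

assert try_acquire_lock(conn, "file-poller", "instance-1") is True
assert try_acquire_lock(conn, "file-poller", "instance-2") is False  # blocked
release_lock(conn, "file-poller", "instance-1")
assert try_acquire_lock(conn, "file-poller", "instance-2") is True   # now free
```

Since the instances already share a database, no extra coordination infrastructure is needed for this approach.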
Organisation I work for has a single production namespace of microservices A, B, C, D, E, F, G.
Project 1 uses microservices A, B, C, D.
Project 2 uses microservices D, E, F, G.
Therefore Projects 1 and 2 have microservice D in common.
Project 1 wants to update microservice D and considers it part of its project.
Project 2 does not want to update microservice D and considers it part of its project.
Organisation does not (and will not) organise its teams at the microservice level. Organisation will continue to spawn new projects and will consider existing microservices as potential building blocks in a global namespace which it can use as part of its solutions (and to whose maintenance/release/deployment lifecycles it will lay shared claim).
I’m sure this is a common enterprise deployment problem but I am struggling to find the vocabulary to describe it.
Is there a well known name for what I am trying to describe? Is this namespace approach a deployment anti-pattern?
My goal in asking this question is to go away with a searchable term so that I can research the problem in more detail and seek out alternative approaches.