In an event-driven microservices architecture, services typically need to update their domain state and publish an integration event to a service bus at the same time (either both operations complete or neither does). With a relational database this is typically achieved using the outbox pattern: an entry indicating that event X needs to be published is saved in the database as part of the same transaction that contains the domain-state changes. A background process then polls these entries and publishes the events. This means the event will eventually be published, making the system eventually consistent.
However, NoSQL databases do not favor the idea of updating multiple documents in a single transaction, and many of them do not support it without ugly workarounds. Below is a list of potential solutions (some uglier than others):
1. Outbox pattern variation:
Outbox pattern, but instead of keeping a separate collection of documents for the pending events, they are saved as part of the domain entity. Each domain entity encapsulates a collection of events that remain to be published, and a background process polls such entities and publishes the events.
- If the background process publishes the event but fails to remove it from the domain entity, it will re-publish it. This shouldn’t really be a problem if updates are idempotent or if the event handler is able to identify duplicate events.
- Domain entities are polluted with integration events.
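Variation 1 can be sketched as follows (in-memory stand-ins for the document store and the service bus; the entity and event names are made up). The point is that the state change and the pending event are written in a single document update, which is atomic in virtually every NoSQL store, so no multi-document transaction is needed:

```typescript
interface OrderDocument {
  id: string;
  status: string;
  pendingEvents: { type: string; payload: unknown }[];
}

// In-memory stand-ins for the document store and the service bus.
const store = new Map<string, OrderDocument>();
const published: { type: string; payload: unknown }[] = [];

// Domain update: the state change and the event are saved in ONE
// single-document write, so it is atomic without a transaction.
function shipOrder(id: string): void {
  const order = store.get(id)!;
  order.status = "shipped";
  order.pendingEvents.push({ type: "OrderShipped", payload: { id } });
  store.set(id, order); // single-document save
}

// Background relay: publish first, then clear. If clearing fails, the
// event is published again, so consumers must tolerate duplicates.
function relayPendingEvents(): void {
  for (const doc of store.values()) {
    for (const evt of doc.pendingEvents) published.push(evt);
    doc.pendingEvents = [];
    store.set(doc.id, doc);
  }
}
```

The publish-then-clear ordering is what makes the at-least-once delivery (and hence the duplicate-event caveat above) explicit.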
2. Event sourcing:
Event sourcing makes this problem go away, but it is very complex to implement and a big overhead for small microservices.
- Complex, might need complete re-design of the way services work with data.
3. Listening to own events:
The service will only publish an event that it is also subscribed to (it will not update its state as part of the same operation). When the service bus sends the event back for handling, the service will update its domain entity.
- Other microservices may handle the event before the origin microservice. This may cause problems if they assume that the event already happened when in fact it hasn’t.
Are there any other solutions to this problem? Which is the best one?
So I have the following navigation graph:
Fragment A (start) --> Fragment B
So for some situations (Firebase notifications), I need to start Fragment B directly, passing data from the notifications. Now, this works. However, when I press the back button, it results in a crash. Is it because the leading fragment (Fragment A) is not in the stack? If so, is there a way to properly handle this? Basically, I need the backPressed action to launch the start fragment (Fragment A) in a situation where Fragment B is launched directly without passing through Fragment A.
Below is a snippet of my graph:
<fragment
    android:id="@+id/homeFragment"
    android:name="dita.dev.myportal.ui.home.HomeFragment"
    android:label="Home"
    tools:layout="@layout/fragment_home">
    <action
        android:id="@+id/action_homeFragment_to_messageDetailFragment"
        app:destination="@id/messageDetailFragment"
        app:exitAnim="@anim/fade_out_animation" />
</fragment>
<fragment
    android:id="@+id/messageDetailFragment"
    android:name="dita.dev.myportal.ui.messages.details.MessageDetailFragment"
    android:label="Message"
    tools:layout="@layout/fragment_message_detail">
    <argument
        android:name="title"
        app:argType="string" />
    <argument
        android:name="message"
        app:argType="string" />
</fragment>
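One approach worth considering, assuming the Navigation component handles the notification tap (the URI below is a hypothetical placeholder): treat the notification as a deep link into the graph. The library then synthesizes a back stack down from the graph's start destination, so pressing Back from messageDetailFragment should land on homeFragment instead of crashing. Declaratively, that is a `<deepLink>` element added inside the existing `messageDetailFragment`:

```xml
<!-- Added inside the existing messageDetailFragment element;
     the scheme/host are made-up placeholders. -->
<deepLink app:uri="myportal://messages/{title}/{message}" />
```

Alternatively, the notification's PendingIntent can be built with `NavDeepLinkBuilder` (`setGraph(...)`, `setDestination(R.id.messageDetailFragment)`, `setArguments(...)`, `createPendingIntent()`), which produces the same synthesized back stack explicitly.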
I have a NodeJS app that does the following:
- accepts a zip file as input,
- extracts the zip file and takes all the PDF attachments out of it,
- merges them all into one single PDF (the final PDF is what matters) and stores the final PDF persistently on a local drive.
Everything is working fine locally. I am trying to run this app on public-cloud serverless services such as AWS Lambda or Azure Functions, but I am not sure whether serverless can fit such a scenario.
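For what it's worth, the flow maps onto serverless fairly naturally, with two caveats: intermediate files must go to the only writable location (on AWS Lambda that is `/tmp`, which has limited space), and the final PDF must be pushed to durable storage (e.g. S3 or Blob Storage) rather than a local drive, because the container's filesystem is ephemeral. A minimal sketch of the staging logic, with the real unzip/merge replaced by a byte-concatenation stub so it stays self-contained (a real implementation might use libraries such as unzipper and pdf-lib; the handler shape is an assumption):

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Stand-in for real unzip + PDF merge logic (e.g. unzipper + pdf-lib).
// Here each "attachment" is raw bytes and merging is concatenation,
// purely to keep the sketch runnable.
function mergePdfs(pdfBuffers: Buffer[]): Buffer {
  return Buffer.concat(pdfBuffers);
}

// Lambda-style handler: everything is staged under the ephemeral temp
// dir, the only writable path in most serverless runtimes.
function handleZip(attachments: Buffer[]): string {
  const workDir = fs.mkdtempSync(path.join(os.tmpdir(), "pdfmerge-"));
  const merged = mergePdfs(attachments);
  const outPath = path.join(workDir, "merged.pdf");
  fs.writeFileSync(outPath, merged);
  // In a real function you would now upload outPath to S3/Blob Storage
  // and clean up workDir; local files do not survive the invocation.
  return outPath;
}
```

The main sizing question is whether the zip plus the merged PDF fit within the platform's temp-storage and memory limits for a single invocation.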
After some personal research I didn't find any method, or a paper explaining one, for determining which architecture a given piece of shellcode targets. The only obvious way I found would be to disassemble it for various architectures and check for which of those the assembly code makes any sense. But this approach demands that an actual person study the assembly code every time. So is there any other way to be certain that a piece of shellcode targets a certain architecture? Thank you in advance!
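One automatable alternative (used by tools in the spirit of cpu_rec) is statistical: score the byte stream against per-architecture patterns instead of fully disassembling it. The toy sketch below is an illustration, not a robust detector; it checks two well-known signals: the x86 function prologue `push ebp; mov ebp, esp` (bytes 55 89 E5) and the fact that most little-endian A32 ARM instructions carry the "always" condition code 0xE in the top nibble of each 4-byte word:

```typescript
// Toy architecture guesser: counts simple per-architecture byte signals.
function guessArch(code: Buffer): "x86" | "arm32" | "unknown" {
  let x86Score = 0;
  let armScore = 0;

  // x86 signal: the classic prologue "push ebp; mov ebp, esp" = 55 89 E5.
  for (let i = 0; i + 2 < code.length; i++) {
    if (code[i] === 0x55 && code[i + 1] === 0x89 && code[i + 2] === 0xe5) {
      x86Score += 4; // strong signal, weight it heavily
    }
  }

  // ARM (A32, little-endian) signal: the condition field 0b1110
  // ("always") sits in bits 28-31, i.e. the high nibble of every
  // 4th byte when instructions are stored little-endian.
  if (code.length % 4 === 0) {
    for (let i = 3; i < code.length; i += 4) {
      if ((code[i] & 0xf0) === 0xe0) armScore += 1;
    }
  }

  if (x86Score === 0 && armScore === 0) return "unknown";
  return x86Score >= armScore ? "x86" : "arm32";
}
```

A real classifier would use many such features (byte n-gram statistics per architecture), but the principle is the same: no human has to read the disassembly for every candidate.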
I faced a few problems when generating a database schema from business objects with EF Core:
- No support for struct properties
- No support for interface properties
- I need additional columns that should belong to the data layer only.
I don’t want to change my business layer (structs/interfaces to classes) just because of EF, as this goes against clean architecture principles (the framework should serve your application, not the other way around). Also, adding persistence-specific columns to the business layer is out of the question.
I suppose the right thing to do is to add persistence objects to the persistence layer which would represent the entities in the business layer. On top of that, some mappers would have to be used by the business layer to abstract the persistence objects, mapping them to the business entities.
A few questions about this approach:
- Is this a correct thing to do? Are there any alternatives? For example: EF configuration capabilities that would help me to deal with the issues listed above without replicating entities in the persistence layer?
- This feels like a lot of boilerplate code and replication of the same properties. It seems like this code could be auto-generated by some tool; does something like this exist?
- Is there a good example of how such an approach should look?
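The separate-persistence-model approach described above is language-agnostic, so it can be sketched in TypeScript (names are hypothetical; in EF Core terms, `PersonRecord` would be the entity type the DbContext maps, while `Person` stays in the business layer):

```typescript
// Business layer: a value-object ("struct") property that a typical
// ORM cannot map directly.
interface Money { amount: number; currency: string }
interface Person {
  id: string;
  salary: Money;
}

// Persistence layer: a flat record shaped for the database, including
// a persistence-only column (rowVersion) the business layer never sees.
interface PersonRecord {
  id: string;
  salaryAmount: number;
  salaryCurrency: string;
  rowVersion: number;
}

// Mappers keep the translation inside the persistence layer.
function toRecord(p: Person, rowVersion: number): PersonRecord {
  return {
    id: p.id,
    salaryAmount: p.salary.amount,
    salaryCurrency: p.salary.currency,
    rowVersion,
  };
}
function toEntity(r: PersonRecord): Person {
  return { id: r.id, salary: { amount: r.salaryAmount, currency: r.salaryCurrency } };
}
```

For the value-object case specifically, EF Core's owned entity types (`OwnsOne`) and shadow properties can cover some of this without a separate model; the interface-property case usually still needs a mapping layer like the one above.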
This question might be redundant or a possible duplicate, but I am using an NDK library that runs only on ARM devices. Is there a way I could find out what percentage of Android devices my app can support based on CPU architecture, just like these statistics on Android Platform Versions?
Currently we have set up .NET Web API 2 projects as the backend of our application, which is a kind of hybrid solution between monolithic and microservices. Our project scope is getting bigger and we are going to have around 8-10 modules, a few of which may act independently while others have to be connected with each other. Below is our current architecture.
1) We have a separate solution file and separate projects for all background Windows services (separate hosting server).
2) We have a separate solution file for our API projects (web & mobile), which use the BL, DAL and entity layers (separate hosting for web & mobile APIs).
3) We have a separate solution file and API project for our third-party API integrations (separate hosting).
But our module controllers are in a single API project. Now we are thinking of implementing a microservices architecture, but before moving ahead we have a couple of questions that need to be answered to decide whether we are on the right track or not:
1) A few of our modules are interconnected and have to be called together (to update entries in database tables that are related), so if we separate the projects per module, a submit action on one form will go through three different modules. Let's say it is hit 10,000 times a day; considering three API calls, a total of 30K requests/responses will traverse the network. Is that feasible? And what if the second API call fails?
2) The microservices rule of thumb says that databases also need to be separated per module, so how can cross-database transactions be managed? We may end up with data inconsistency.
3) How would a multi-tenant database architecture work in this case, as each module is to be separated into a different database?
Could anyone suggest what needs to be done, or whether our current hybrid solution is also fine?
Any help would be really appreciated .
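Regarding the failure scenario in (1) and the cross-database concern in (2): the usual replacement for a distributed transaction is a saga, where each local step has a compensating action, and if a later step fails the earlier ones are undone. A minimal sketch (the step shape and names are hypothetical):

```typescript
interface SagaStep {
  name: string;
  action: () => void;      // local transaction in one service/database
  compensate: () => void;  // undo action, run if a later step fails
}

// Runs steps in order; on failure, compensates the completed steps
// in reverse order and reports failure to the caller.
function runSaga(steps: SagaStep[]): boolean {
  const done: SagaStep[] = [];
  for (const step of steps) {
    try {
      step.action();
      done.push(step);
    } catch {
      for (const s of done.reverse()) s.compensate();
      return false;
    }
  }
  return true;
}
```

Like the outbox pattern, this trades strict consistency for eventual consistency: the databases can briefly disagree while compensations run, and the business operations must be designed to be undoable.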
Can anyone please tell me about SOA programming models on the server side? I have searched a lot but have been unable to find any useful answer. Thanks in advance.
I have read, from multiple sources, the definitions and explanations of data-centric and service-oriented architectures (SOA). However, none of them talked about a convergence of the two. I am confused about whether these are completely disjoint concepts or whether in some cases they can overlap. Is it necessary that independent services in an SOA communicate with each other directly, or can they exchange data via a database at the centre of it all? If the latter is the case, is it better to term it a data-centric SOA?
The Zen 2 is about to be released this July with several architectural improvements. It’s going to offer PCIe v4.0 support, among other improvements. (Read the rest at http://www.webhostingtalk.com/showthread.php?t=1768177&goto=newpost)