What is the best architecture in a microservices ecosystem for a dynamic page whose data is provided by other services?

How do we achieve loose coupling in the scenario below?

We have a web store that sells a finite set of product types (e.g. Movie, Music, and Book), for which we adopted a microservices architecture. The site has a separate page for each product type (one for movies, one for music, and one for books). We want separate teams, each focused on one of these products with full authority over its business concerns. The problem lies in the design of the home page. Our home page is a dynamic page with different rows, each of which may feature one of these products. For example, one day we may have a “New Movies” row at the top and a “Most Selling Books” row under it, and another day the first row is about music, and so on. Which service, and which team, should have control over the home page? Should we have a separate service (and probably a team, by Conway’s Law) for it?
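As a hypothetical sketch (in Python, with made-up names) of one common answer to this kind of question: a thin home-page service could own only the layout, while each row’s content is still fetched from the product service, and team, that owns it:

```python
# Hypothetical sketch: the home-page service owns only the layout
# (which rows appear, in what order); each row's content still comes
# from the service/team that owns that product type.
ROW_PROVIDERS = {
    "new_movies":         lambda: ["Movie A", "Movie B"],  # movies service
    "most_selling_books": lambda: ["Book X", "Book Y"],    # books service
    "top_music":          lambda: ["Album M"],             # music service
}

# Only the layout changes day to day; the product services are untouched.
todays_layout = ["new_movies", "most_selling_books"]

def render_home_page(layout):
    """Compose the page by asking each owning service for its row."""
    return [(row, ROW_PROVIDERS[row]()) for row in layout]

print(render_home_page(todays_layout))
```

Whether that composition service deserves its own team is exactly the Conway’s-Law trade-off the question raises.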


How to ensure persistence in an event-driven microservice architecture?

Right now we have a set of microservices with a client-facing gateway responsible for authentication and routing, a MySQL database, and RabbitMQ for the message queue. Internal services may queue up requests as necessary, but the gateway never does. What I’m finding is that the time for the gateway to respond is too long. I want to queue messages from the gateway to make the entire system asynchronous and generate faster response times, but I’m not sure how to guarantee persistence so we can handle failure. In the absence of an actual database record, I’m not sure how the system can ensure proper persistence.

In short: how does a fully event-driven architecture guarantee data persistence? It will be easier to elaborate with a concrete example.

We collect earning records and then later allow people to make payments based on their earnings. We have an endpoint to create an earnings record. What it looks like is the following:

client -> gateway -> earnings service

The problem with this is that it’s all synchronous. The earnings service call takes too long to do all the work. It has to write some records, do checks to make sure the data is valid, etc. There are other asynchronous portions but not this part.

If I queue at the gateway then there’s no database record yet. The first record gets written in the earnings service. In our synchronous system that’s fine because we guarantee a record on a 200-level response. If I start queueing at the gateway I now need to ensure we don’t lose messages without a db record. How would I do that?

Is there some sort of way to guarantee persistence in Rabbit? Is a database of messages an option? Our endpoints are idempotent so in a disaster recovery situation we can replay messages without harm. I’m just not sure how to go about doing this. Thanks
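One hedged sketch of the “database of messages” idea raised above (Python with sqlite3 standing in for MySQL; the schema and function names are assumptions): the gateway appends each request to a durable message log before acknowledging the client, so a 200-level response still guarantees a persisted record, and unprocessed rows can be replayed against the idempotent endpoints after a failure:

```python
import json
import sqlite3
import uuid

db = sqlite3.connect(":memory:")  # in production this would be MySQL
db.execute("""CREATE TABLE message_log (
    id TEXT PRIMARY KEY, body TEXT NOT NULL, processed INTEGER DEFAULT 0)""")

def accept_earning(payload):
    """Gateway handler: persist the message before acking the client."""
    msg_id = str(uuid.uuid4())
    with db:  # commit before we return a 200-level response
        db.execute("INSERT INTO message_log (id, body) VALUES (?, ?)",
                   (msg_id, json.dumps(payload)))
    # Publish (msg_id, payload) to RabbitMQ here; a consumer marks the
    # row processed once the earnings service has written its record.
    return msg_id

def replay_unprocessed():
    """Disaster recovery: return anything never marked processed,
    to be re-published. Safe because the endpoints are idempotent."""
    rows = db.execute(
        "SELECT id, body FROM message_log WHERE processed = 0").fetchall()
    return [(r[0], json.loads(r[1])) for r in rows]

accept_earning({"employee": 7, "amount": 120.0})
print(len(replay_unprocessed()))  # 1 message awaiting replay
```

This is the outbox/message-log pattern in miniature; RabbitMQ’s own durable queues and persistent messages complement it but don’t replace the database record the question asks about.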

Why do we still use a Von Neumann Architecture in modern computers?

The Von Neumann architecture was first described in the mid-1940s, in von Neumann’s First Draft of a Report on the EDVAC; its non-stored-program predecessor, ENIAC, was famously first used for research into the feasibility of thermonuclear weapons.

To this day, the Von Neumann architecture is still the primary foundation of the majority of modern computers. I have listened to a few historians and scientists mention that there are likely more efficient architectures, and that Von Neumann himself didn’t believe in its universal capability (unfortunately I cannot remember enough to find a link).

So why do we still use this architecture in the majority of modern computing?

Clean Architecture – Where to put business calculations when entities are auto-generated by EF Core DB-first?

I’m trying to switch to Clean Architecture from a traditional layered architecture approach and trying to figure out where to put business logic. Please consider the scenario below.

Employee class (exists in Core, scaffolded by EF Core DB-first):

class Employee {
    public int Id { get; set; }
    public int TimeWorked { get; set; }
    public int ZoneCode { get; set; }
    public DateTime JoiningDate { get; set; }
    public decimal Reimbursement { get; set; }
}

class SomeMasterData { // Could be a value object
    public int ZoneCode { get; set; }
    public decimal OvertimeAllowedForZoneCode { get; set; }
}

class SomeOtherMasterData { // Could be a value object
    public int OvertimeMultiplier { get; set; } // and whatever other props
}

Business logic related to the employee –

decimal CalculateReimbursement(Employee emp, SomeMasterData masterData, SomeOtherMasterData masterData2) {
    // Use data from Employee (fields like zone code, joining date, etc.) and
    // data from SomeMasterData to calculate the reimbursement amount for the employee.
    // The business logic below is random, created for this question.
    if (emp.ZoneCode == masterData.ZoneCode && emp.JoiningDate > someDate) {
        return masterData.OvertimeAllowedForZoneCode * masterData2.OvertimeMultiplier * emp.TimeWorked;
    }
    else {
        // More business logic based on many entities and value objects
        ...
    }
}

My question is – Considering Clean Architecture, where to put the CalculateReimbursement method? Below are a few options –

  1. In domain services in the core – But is this what domain services are for?
  2. In the Employee class – But that class is auto-generated by EF Core DB-first scaffolding, so I can’t change it. Should I create another Employee partial class with this method?
  3. In some sort of “helper” class within the core – if so, what do I call it?
  4. Somewhere else that makes more sense keeping Clean Architecture in mind?
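To make option 1 concrete, here is a hedged sketch of the domain-service shape (the question’s C# rendered in Python for brevity; all names and the `SOME_DATE` placeholder are assumptions). In C#, the same shape would be a stateless domain-service class in Core, or equally the partial-class approach from option 2:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical Python rendering of option 1: a stateless domain service
# in Core that takes the scaffolded entities/value objects as plain inputs.
@dataclass
class Employee:
    zone_code: int
    joining_date: date
    time_worked: int

@dataclass
class MasterData:
    zone_code: int
    overtime_allowed_for_zone_code: float

@dataclass
class OtherMasterData:
    overtime_multiplier: float

SOME_DATE = date(2020, 1, 1)  # placeholder for 'some date' in the question

def calculate_reimbursement(emp, master, other):
    # Pure function: no I/O, so it is trivially unit-testable and can
    # live in Core without referencing the infrastructure layer.
    if emp.zone_code == master.zone_code and emp.joining_date > SOME_DATE:
        return (master.overtime_allowed_for_zone_code
                * other.overtime_multiplier * emp.time_worked)
    return 0.0  # stand-in for the remaining business rules

emp = Employee(zone_code=3, joining_date=date(2021, 6, 1), time_worked=10)
print(calculate_reimbursement(emp, MasterData(3, 1.5), OtherMasterData(2.0)))
# 30.0
```

The key property, whichever option is chosen, is that the calculation depends only on Core types, never on EF Core itself.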

Docker Minimum viable architecture [on hold]

My company is trying to adopt Docker for new web services development on ASP.NET Core. We use TFS as source control. My job is to come up with the minimum viable architecture for ASP.NET Core with Docker. I have been doing POCs with Windows Server 2016.
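For concreteness, a minimal multi-stage Dockerfile for one such ASP.NET Core service might look like the sketch below (the image tags and `MyService.dll` are assumptions to check against the current Microsoft base images):

```dockerfile
# Assumed base images; verify tags against Microsoft's current registry.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app .
# Kestrel serves the app inside the container; adjust the port as needed.
EXPOSE 8080
ENTRYPOINT ["dotnet", "MyService.dll"]
```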

Here are my thoughts on how we will build our architecture


There is a concern in the company about code being stored on Docker Hub. We could spin up our own registry, but so far the registry is only supported in Linux containers. There is a Windows Server 2016 image [stefanscherer/registry-windows], but I will have a hard time convincing my architect to use any non-standard images. The only options would be installing Docker Desktop for Windows on Windows 10, or using Linux as the host OS, to get the registry feature.

Is this a fair assessment of the registry?

Is it advisable to have a dedicated host for the registry?

Continuous Integration

I’ve been looking at TeamCity for CI. My thought is that we can live without CI for now, since TeamCity is new to us. I don’t know how easy or hard it would be to use TFS for this.

Dedicated Docker Hosts

  • Registry host on [Linux or Windows 10]
  • Development VM – for developing the API and for creating and pushing Docker images
  • Dev, QA, PROD – we will manually pull and run images in containers using the .NET Core default Kestrel web server.

What are your thoughts on this?

Finding percentage memory utilization in a pipelined architecture

I was solving problems from the exercises in the book “Computer Organization and Design” by Patterson. The problem reads like this:

Consider stage latencies:

IF       ID       EX       MEM      WB
250ps    350ps    150ps    300ps    200ps

Consider instructions %:

ALU      BEQ      LW       SW
45%      20%      20%      15%

What is the utilization of the memory?

The solution given was:

The LW and SW instructions use the memory stage of the pipeline. Thus, 20% + 15% = 35%.

However, I was wondering: shouldn’t we also consider the stage latencies? That is, shouldn’t this be $ \frac{300}{250+350+150+300+200}\times \frac{35}{100}$ ?
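For reference, the alternative figure proposed in the question evaluates numerically as follows (this is just the arithmetic, not a claim about which interpretation the book intends):

```python
# Stage latencies from the problem statement.
latencies = {"IF": 250, "ID": 350, "EX": 150, "MEM": 300, "WB": 200}

# Fraction of total (unpipelined) latency spent in the MEM stage: 300/1250.
mem_fraction_of_time = latencies["MEM"] / sum(latencies.values())

# Fraction of instructions that use data memory: LW (20%) + SW (15%).
lw_sw_share = 0.20 + 0.15

print(round(mem_fraction_of_time * lw_sw_share, 4))  # 0.084, i.e. 8.4%
```

So the latency-weighted reading gives 8.4%, versus the book’s instruction-mix-only answer of 35%.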

Best architecture for a web app with highly sensitive data

I have to design a web app that contains medical information. Only the staff of the organization need to access the data, in the office and on the go. A VPN is already in place. On-premises server management is outsourced to a company with limited knowledge of running web servers. Still, the client is very worried about putting any data in the cloud.

What is the best way to architect the app (Angular, Python backend, database) to secure the data?

These are the options I have thought of so far:

  1. Host everything on premises behind a firewall; users will have to use their VPN to log in. Pro: most secure. Con: their hosting will not be as cheap and efficient as something in the cloud.
  2. Host the Angular app in the cloud, and the Python backend and the database on premises. A static IP could be used for the Angular app, and the firewall sitting in front of the backend could filter all traffic that doesn’t come from that IP, to offer security on top of user authentication. Pro: easier deployment for front-end changes, access without using the VPN is possible, less hosting trouble, and the database password would never reach the cloud. Con: hosting the backend on premises will still be an issue for a company with limited experience in this area.
  3. Host the Angular app and the Python backend in the cloud with a static IP. The database is on premises, and the firewall filters all traffic that doesn’t come from the Python backend. Pro: easier deployment, cheaper and more reliable hosting. Con: the database connection password lives in the cloud.

Designing a WPF / MVVM architecture where view behavior changes in different states

As part of my bachelor’s thesis, I’m trying to develop something akin to a painting program. That means I have a toolset, be it selection, drawing, highlighting, etc.

I’ll have a canvas that displays my current model, based on a set of rectangles/spheres/polygons I have already drawn and created.

Some tools require the view to react differently depending on the active tool. Rough examples:

I have a “New Line” tool. The view now displays points you can connect from when you hover over them.

I have a “Selection” tool. When I hover over an element, its entire color changes.

I have a “New Element” tool. When I move my mouse, a shadow of the element follows my cursor until I press the left mouse button.

Now, here’s where I’m a little stuck:

All these tools require wildly different behaviour from the view, and not only that, they also require the view to be dynamic based on calculated properties. I have a few ideas for how I could architect my system, but I would rather get nudged in a direction before I make a huge mistake. Here are my questions and thoughts:

Q: First of all, where would I even put that interactive code? I can’t do it in XAML, since it requires calculation, but putting it in the ViewModel is also not correct, since the ViewModel is not supposed to know about the View. So do I put it in the code-behind of the View itself? That also seems kind of strange to me.

Now to my architectural ideas:

Idea 1: I could create a new View + ViewModel for each tool. When a new tool is selected, the View and ViewModel are simply exchanged in a frame, and all the behaviour is encapsulated inside the new View + ViewModel. However, that seems like it not only tightly couples each View and ViewModel together, it also feels like a lot of boilerplate code.

Idea 2: Each tool has a “Command” class based on an ICommand interface that requires a reference to the View as well as every possible kind of user interaction. The ViewModel then delegates the user input to the current Command, which can manipulate the given View instance to display what it wants. This, however, feels very inflexible, as if I’m just delegating the problem from Idea 1 somewhere else, and incapable of future growth: whenever I want to add a new way for a user to interact, I’d have to go and adjust the interface, and perhaps all underlying Commands if I didn’t provide a default implementation.
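As a language-agnostic illustration of Idea 2 (sketched in Python, with hypothetical names rather than real WPF types), this is essentially the strategy/state pattern: the view model delegates input events to whichever tool object is active, and default no-op handlers in the base class soften the “adjust the interface every time” concern:

```python
from abc import ABC

class Tool(ABC):
    # Default no-op handlers: adding a new interaction type later only
    # touches this base class, not every existing tool.
    def on_hover(self, view, element): pass
    def on_press(self, view, point): pass

class SelectionTool(Tool):
    def on_hover(self, view, element):
        view.highlight(element)          # recolor the hovered element

class NewElementTool(Tool):
    def on_press(self, view, point):
        view.place_element_at(point)     # drop the "shadowed" element

class CanvasViewModel:
    """Delegates raw input events to whichever tool is active."""
    def __init__(self, view, tool):
        self.view, self.tool = view, tool
    def hover(self, element): self.tool.on_hover(self.view, element)
    def press(self, point):   self.tool.on_press(self.view, point)

class RecordingView:
    """Stand-in for the real canvas view (for illustration only)."""
    def __init__(self): self.calls = []
    def highlight(self, element): self.calls.append(("highlight", element))
    def place_element_at(self, point): self.calls.append(("place", point))

view = RecordingView()
vm = CanvasViewModel(view, SelectionTool())
vm.hover("rect1")
vm.tool = NewElementTool()   # switching tools swaps behaviour, not the view
vm.press((10, 20))
print(view.calls)  # [('highlight', 'rect1'), ('place', (10, 20))]
```

In WPF terms, the “view” the tools see would typically be a narrow interface (or attached behaviors) rather than the concrete View, which keeps the ViewModel ignorant of the View proper.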

None of these really satisfies me, and I feel like I’m missing a crucial step in this part of the architecture. I’d gladly appreciate any pointers. Thank you for your time and for reading!

Clean Architecture – What is the difference between Use Cases and Core Services?

I’m trying to apply Clean Architecture to a simple ASP.NET MVC Core app by following Microsoft’s ASP.NET architecture guidelines and their eShopOnWeb sample project.

In the standard Clean Architecture approach, business logic is put into “Use Case” classes in the Core project. In Microsoft’s example, there are no Use Case classes, but it does have Services inside the Core project. Are the services inside Core supposed to be the same as Use Cases? If not, what is their role?
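A hedged sketch of how the distinction is often drawn (in Python, with invented names; this reflects common usage rather than anything the eShopOnWeb sample states): a core service holds a reusable domain rule, while a use case orchestrates one user-facing operation around it:

```python
class BasketService:
    """Core/domain service: a pure business rule, reusable across
    use cases, with no knowledge of HTTP, UI, or persistence."""
    def total(self, items):
        return sum(price * qty for price, qty in items)

class CheckoutUseCase:
    """Use case (interactor): one application-level operation that
    wires the domain rule to an injected outbound port."""
    def __init__(self, basket_service, payment_gateway):
        self.basket_service = basket_service
        self.payment_gateway = payment_gateway
    def execute(self, items):
        amount = self.basket_service.total(items)
        return self.payment_gateway(amount)  # port supplied from outside

charged = []
use_case = CheckoutUseCase(BasketService(),
                           lambda amt: charged.append(amt) or amt)
print(use_case.execute([(10.0, 2), (5.0, 1)]))  # 25.0
```

On this reading, Microsoft’s Core services play both roles at once, which is why the sample has no separate Use Case layer.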

What is the appropriate architecture for accessing a variable in the parent from an element of a child list?

I have a Parent object which looks like this (pseudocode):

class Parent {
    String token;
    Child[] children;
}

It contains a token string and an array of Child objects. My problem is that each of these Child objects needs to access the token string from the Parent class.

My first hunch is to loop through the children and specifically set a reference to the Parent object. Is there a more recommended way to take care of this, or is this kind of coupling unavoidable? I’m working in C# specifically but would be interested in any language-agnostic solutions as well.
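The back-reference hunch described above can be sketched language-agnostically like this (Python for brevity; the names are hypothetical, and in C# the same wiring would typically live in the `children` setter or an `AddChild` method so it can’t be forgotten):

```python
class Child:
    def __init__(self):
        self.parent = None          # back-reference, wired by Parent

    @property
    def token(self):
        # Read the token through the back-reference instead of copying it,
        # so a later change to parent.token is seen by every child.
        return self.parent.token if self.parent else None

class Parent:
    def __init__(self, token, children):
        self.token = token
        self.children = list(children)
        for child in self.children:  # set the back-references once
            child.parent = self

p = Parent("abc123", [Child(), Child()])
print([c.token for c in p.children])  # ['abc123', 'abc123']
```

The coupling is real but contained: only `Parent` knows how the wiring happens, and children never store a stale copy of the token.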