DDD and infrastructure microservices – how should the interface be designed?

We’ve extracted our email sending into an EmailService: a microservice that provides resiliency and abstracts the email logic into an infrastructure service. The question is how the interface to the EmailService should be defined with respect to the information it requires about the [User] domain.

Approach 1:
EmailService exposes an interface that takes all the fields of the [User] domain that it requires.

Approach 2:
The EmailService interface takes only the UserID. The EmailService then queries the UserService using this ID to fetch the fields that it requires.

There are some obvious pros/cons with each approach.
Approach 1 requires the calling service to know everything about a User that the EmailService requires, even if it’s not part of the caller’s domain representation of a User. On the other hand, the contract between the services is explicit and clear.

Approach 2 ensures that the [User] fields are fetched as late as possible, minimising consistency problems. However, this creates an implicit contract with the UserService, which has problems of its own.
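To make the trade-off concrete, here is a minimal sketch of the two interfaces. All names here (SendWelcomeEmail, the particular field list) are hypothetical, just to illustrate the shape of each contract:

```csharp
using System;

// Approach 1: the caller supplies every [User] field the EmailService needs.
// The contract is explicit, but the caller must know all of these fields.
public interface IEmailServiceExplicit
{
    void SendWelcomeEmail(string emailAddress, string displayName, string locale);
}

// Approach 2: the caller passes only the UserID; the EmailService looks the
// rest up via the UserService, creating an implicit contract with it.
public interface IEmailServiceById
{
    void SendWelcomeEmail(Guid userId);
}
```

With Approach 1, any change to what the EmailService needs forces a change to the explicit interface (and every caller); with Approach 2, the same change is invisible to callers but silently widens the dependency on the UserService.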

I’ve been doing a lot of research here and on SO, but I haven’t been able to find anything about this specific interface design problem. Keeping DDD principles in mind, which approach is correct?

Moving On-Premises Infrastructure into Azure/AWS

I am new to Server Fault (coming from Stack Overflow), so please tell me if this is not the right place to ask this general question. Basically, our company is trying to move away from on-premises infrastructure to a cloud infrastructure. We are considering Azure/AWS for this.

Currently we have several virtual machines running on our local server: one with the domain controller, a file server, a database server, and one for our websites with IIS installed. We have already moved our Exchange to Exchange Online with Office 365 and Azure Active Directory.

However, we would also like to move the other servers into the cloud. I thought that we could create virtual machines on Azure, for example, and join all of them to the same Active Directory by connecting them to the same virtual network. Would this even be the right approach?

Now, assuming we set everything up like this: if I have informed myself correctly, we would need a site-to-site connection so that we can access everything on these servers from our on-premises network. But is it even possible to join, from our on-premises network, an Active Directory whose domain controller runs on a virtual machine in Azure?

Another question: how is the performance? Of course it will be slower than having everything in house, but our files are not very large and there aren’t too many requests.

Once again, I am new to this Stack Exchange and I am mainly a programmer. We are a small company, though, and I am trying to modernize our infrastructure a little bit. I am not an expert in networking, so I am asking you experts here hoping to gain some knowledge. Please be nice :)!

Can DDD entity rely upon infrastructure library?

Suppose I want this on my User entity:

user.createNewSecurityToken(); 

That means:

public void createNewSecurityToken() {
    var buffer = new byte[32];
    new RNGCryptoServiceProvider().GetBytes(buffer);  // <--- here's the issue
    this.Token = Convert.ToBase64String(buffer);
}

This is not a DI dependency, just a class from the infrastructure. But does that “break” DDD?

The alternative is this:

user.setSecurityToken(token);   // pass it in (probably from a domain service) 

But that leads to anemic entities.
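One middle ground (sometimes called double dispatch) is to keep the behaviour on the entity but pass the source of randomness in through an interface. The names below (ITokenGenerator, CryptoTokenGenerator) are made up for illustration, not an established pattern name in any library:

```csharp
using System;
using System.Security.Cryptography;

// Hypothetical abstraction over the infrastructure concern.
public interface ITokenGenerator
{
    byte[] NextBytes(int count);
}

// Infrastructure-side implementation using the crypto RNG.
public class CryptoTokenGenerator : ITokenGenerator
{
    public byte[] NextBytes(int count)
    {
        var buffer = new byte[count];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(buffer);
        }
        return buffer;
    }
}

public class User
{
    public string Token { get; private set; }

    // The entity still owns the behaviour (not anemic); only the
    // randomness is supplied from outside at the call site.
    public void CreateNewSecurityToken(ITokenGenerator generator)
    {
        this.Token = Convert.ToBase64String(generator.NextBytes(32));
    }
}
```

The caller (often a domain service or application service) supplies the generator when invoking the behaviour, so the entity keeps its logic without holding a hard reference to infrastructure.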

Which is the preferred DDD approach, and what considerations should I take into account when faced with this sort of design decision?

Infrastructure – different variants. What are the advantages of the virtualization layer?

The infrastructure that I see most often:
First infrastructure

Is the infrastructure below not simpler and better?
Is this the future, or am I lost in the woods?
Second infrastructure

The first infrastructure:
– Do the many layers use more resources?
– Is it harder to see the whole picture, since it requires knowledge of more tools?
– In which layer should the applications be scaled?
– Is it harder to find performance problems?

Considering how computer hardware and I/O devices work, more deliberate allocation of resources should give better performance than VMs (e.g. combining an Intel Xeon and a Pentium 4 and then assigning vCPUs to different machines).

The second infrastructure:
– At the Kubernetes level we can also scale horizontally depending on need, and we can set resource limits for every container.
– The OS, thanks to Linux user accounts, allows for secure administration.
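For example, per-container resources in the second infrastructure can be pinned at the Kubernetes level. The values and names here are purely illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app        # illustrative name
spec:
  containers:
  - name: app
    image: example/app:1.0 # illustrative image
    resources:
      requests:            # guaranteed minimum the scheduler reserves
        cpu: "500m"
        memory: "256Mi"
      limits:              # hard ceiling enforced at runtime
        cpu: "1"
        memory: "512Mi"
```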

I have one remaining doubt about which is better: applications that grow over time and need ever-increasing disk space.

How to design virtualization infrastructure

I have a university assignment that asks me to design the infrastructure for a company that wants to virtualize its servers.

It asks for the number of hosts required, the number of VMs running on each host and what each VM runs, plus CPU, RAM, storage, etc.

It provides a list of the company’s current non-virtualized servers to work from (shown at the end of this post).

I don’t understand how I’m supposed to design the infrastructure without first running tests to determine how much CPU/RAM/storage each server needs. Wouldn’t it just be a complete guess?


  • 2 x Active Directory domain controllers on Windows Server 2008 R2 (2 x Xeon 3.6GHz, 8GB RAM, 140GB HDD). These servers are used for authentication and authorisation;
  • 3 x SQL Server 2003 database servers on Windows Server 2003 (2 x Xeon 2.8GHz, 4GB RAM, 500GB RAID-5 array). These servers are used for database operations for Active Directory, Exchange, SharePoint and their Client Design application;
  • 1 x Exchange 2007 email server on Windows Server 2008 R2 (2 x Xeon 3.6GHz, 8GB RAM, 250GB RAID-1 array);
  • 4 x Windows Server 2003 file and print servers (2 x Xeon 2.8GHz, 4GB RAM, 250GB RAID-1 array);
  • 1 x SharePoint 2007 server on Windows Server 2008 R2 (2 x Xeon 3.6GHz, 16GB RAM, 250GB RAID-1 array);
  • 1 x Client Design and CRM (Customer Relationship Management) application server on Windows Server 2008 R2 (2 x Xeon 3.6GHz, 8GB RAM, 250GB RAID-1 array);
  • 2 x Red Hat Enterprise Linux 5 servers running Apache and Tomcat (2 x Xeon 2.8GHz, 16GB RAM, 140GB HDD).
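For what it’s worth, a first pass at this kind of sizing usually just totals the listed specs and adds headroom, rather than measuring real utilization. A rough sketch of that arithmetic over the list above (the 1.3x headroom factor and the 16-core/128GB host size are my own assumptions, not part of the assignment):

```csharp
using System;
using System.Linq;

class CapacityEstimate
{
    // (count, cores, ramGb, storageGb) taken from the server list above.
    static readonly (int Count, int Cores, int RamGb, int StorageGb)[] Servers =
    {
        (2, 2,  8, 140), // AD domain controllers
        (3, 2,  4, 500), // SQL Server 2003
        (1, 2,  8, 250), // Exchange 2007
        (4, 2,  4, 250), // File and print
        (1, 2, 16, 250), // SharePoint 2007
        (1, 2,  8, 250), // Client Design / CRM
        (2, 2, 16, 140), // RHEL 5, Apache + Tomcat
    };

    static void Main()
    {
        int cores  = Servers.Sum(s => s.Count * s.Cores);
        int ramGb  = Servers.Sum(s => s.Count * s.RamGb);
        int diskGb = Servers.Sum(s => s.Count * s.StorageGb);

        // Assumed headroom and host size -- adjust to the assignment's constraints.
        double headroom = 1.3;
        int hostCores = 16, hostRamGb = 128;

        // Hosts needed is driven by whichever resource runs out first.
        int hosts = (int)Math.Ceiling(Math.Max(
            cores * headroom / hostCores,
            ramGb * headroom / hostRamGb));

        Console.WriteLine($"Totals: {cores} cores, {ramGb} GB RAM, {diskGb} GB storage");
        Console.WriteLine($"Estimated hosts (with {headroom}x headroom): {hosts}");
    }
}
```

That only answers "how big in aggregate"; the assignment presumably also expects you to reason about which VMs to co-locate (e.g. not both domain controllers on the same host), which a totals-only calculation cannot capture.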

Transferring AWS to our own infrastructure

Our company wants to use AWS services but is worried about the future. What happens if Amazon discontinues a service? What if prices rise too much? What if a customer wants to move the services from the cloud into their own infrastructure?

Is it possible to move all the services to a network of our own? Only some? Which ones? How much effort would it be?