Why not compile a designated language directly to microservices?

That’s not serious, ok? So…

There’s so much mess, so many annoying articles and whatever around the m-word, that the question popped up: why not compile some language directly to m-s, to eliminate all the noise once and for all and cover as many other buzzwords as possible along the way? So instead of “oh, how do I shred my odd monolith into sexy m-s?” it’s just: “type. compile. deploy.” (covering m-s, immutability, pure functions, actors, event sourcing, polyglot programming…), right?

To make it easy, let’s assume everything’s super-functional, immutable and trendy and looks somehow like Scala, e.g.

let v = if (a > //php// b) { a - //erlang// b } else { b + //c++// a }

What do we do with this? We create four microservices (one each for “>”, “-” and “+”, and another for the if-statement itself). We need some API gateway to call this somehow, but that’s a mundane detail. We can add annotations (//c++//) to tell the compiler which target language/environment should be used per service, so every software-fundamentalist faction in your company will be happy.
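For illustration, here is a toy sketch of those four “services”, with plain Python functions standing in for deployed services (annotations and the gateway omitted; this is the fantasy, not an implementation):

```python
# Each operator becomes its own "microservice" -- modeled here as a plain
# function; in the fantasy compiler each would sit behind its own endpoint.
def svc_gt(a, b):   # the ">" service (//php// in the annotation scheme)
    return a > b

def svc_sub(a, b):  # the "-" service (//erlang//)
    return a - b

def svc_add(a, b):  # the "+" service (//c++//)
    return b + a

def svc_if(a, b):
    """The if-statement service: routes to "-" or "+" based on ">"."""
    return svc_sub(a, b) if svc_gt(a, b) else svc_add(a, b)

print(svc_if(7, 3))   # -> 4
print(svc_if(2, 5))   # -> 7
```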

Obviously an actor-based approach will be hugely helpful, giving us lock-free message processing, so that the “+” microservice can process any amount of incoming calls and the “if” any amount of propagated results from “>”, “+”, etc. Everything will be fantastically parallel, except for if-statements in the context of recursion.

By persisting all messages passed, we should be able to replay the whole thing and manipulate it at every level, giving debugging a new meaning. Maybe all this service-mesh stuff can also be replaced by digging through the message log.

I’m also tempted to give it Javascript-like no-datatype semantics, so no matter what you feed it, you’ll always get a result, right?

Thoughts appreciated!

Integration database on micro-services

I need some wisdom on this. I’m currently learning about micro-services architecture, so I decided to build a simple project with it. The project is just a simple inventory management application, where the functionality is, well, just to add an Item and information about it. At the current state, I have:

  • Authentication server
  • Item management server

The Authentication server has the responsibility of registering a User, changing its password, and exchanging HTTP Basic auth for a JWT token to be used with the Item management server. The Item management server is a basic CRUD server for Item.

The way I do this: I create a single database, both servers connect to it, and I have another repo that runs migrations on the database to satisfy both servers. Currently I’m using PostgreSQL. Let’s say this is the schema:

MyAppDb:
  - UserTable
    - id ;; UUID
    - username
    - password
  - ItemTable
    - id
    - owner ;; foreign key to UserTable id
    - name

This article states that using a single database for many apps is bad, because your server is actually tightly coupled to the others through the database. If I’m going to apply this, then I have to change how I work with the database:

MyUserDb:
  - UserTable
    - id ;; UUID
    - username
    - password

MyItemDb:
  - ItemTable
    - id
    - owner ;; not sure what the type of this is; should this be a plain UUID?
    - name

So my questions are:

  1. What would be the suitable data type for owner in ItemTable that previously points to user id?
  2. Since I’m no longer using shared databases, for example, if I need to query an Item and I also want to display its owner, does that mean I have to do an HTTP call (assuming I’m using REST) to the Authentication server (assuming the Authentication server is capable of doing this)?
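A sketch of how both answers usually look (endpoint name is hypothetical; not the article’s prescription): `owner` becomes an opaque UUID with no foreign-key constraint, and composing item + owner means a call to the Authentication server.

```python
import json
import urllib.request
import uuid

# Q1: in MyItemDb, `owner` becomes a plain UUID column with no FOREIGN KEY,
# because the user row now lives in a database owned by another service.
# The Item management server can only treat it as an opaque identifier.
item = {
    "id": str(uuid.uuid4()),
    "owner": str(uuid.uuid4()),  # user id issued by the Authentication server
    "name": "hammer",
}

# Q2: yes -- to display the owner's username, the Item management server (or
# an API gateway in front of it) asks the Authentication server over HTTP.
# The URL below is a made-up example.
def fetch_owner(owner_id):
    url = "http://auth-service/users/%s" % owner_id
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```

An alternative to the HTTP call is replicating the needed user fields into the item service via events, which trades an extra request per read for eventual consistency.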


Proof of concept for enterprise level Microservices

I’ve been studying gRPC a lot and went through several presentations on YouTube. I’m presenting two options. The idea is to develop a minimalistic but comprehensive demo in Scala, done the right way for microservices.

At first I thought gRPC was for inter-service communication only, until I saw this video (5:40) (screenshot); it turns out there are libraries for the browser as well. Does that mean REST, or HTTP+JSON, is out of the window?

I’ve read that companies that used REST for inter-service communication are struggling.

I’d appreciate it if someone could even scale this up to enterprise level and suggest more ideas and features to put in; it doesn’t matter if it’s over-engineered, because it’s just a proof of concept.

I’ve also read that I should have a separate database for each microservice, but I’m using a shared one at the moment. I want more enlightenment on connecting the pieces in the middleware.


Is asynchrony always necessary with microservices?

I’m reading about microservices, and a consistent message I’m getting is that microservices should communicate via asynchronous means only. But I don’t see how this can work while also providing feedback to the user.

Taking an e-commerce application as an example, we may have an OrderService and a PaymentService. The client makes a request to the order service to complete the order, which in turn makes a request to the payment service to process the charge.

However, if the request to the payment service from the order service is asynchronous, how do we tell the user whether their order was successful, as the client is expecting a synchronous response from the order service?
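One common answer, sketched here with hypothetical names and in-process queues standing in for a real broker: the order service bridges sync-to-async by publishing a command with a correlation id and awaiting the matching reply before responding to the client.

```python
import asyncio
import uuid

# correlation id -> Future awaiting the payment result (in-memory stand-in
# for a reply queue on a real broker).
pending = {}

async def payment_service(queue):
    """PaymentService: consumes charge commands, publishes a result event."""
    while True:
        corr_id, amount = await queue.get()
        # ... actually charge the card here ...
        pending.pop(corr_id).set_result({"status": "charged", "amount": amount})

async def place_order(queue, amount):
    """OrderService handler: looks synchronous to the client, async inside."""
    corr_id = str(uuid.uuid4())
    fut = asyncio.get_running_loop().create_future()
    pending[corr_id] = fut
    await queue.put((corr_id, amount))
    # Block this one request (not the whole service) until the reply arrives,
    # with a timeout so the client gets an error instead of hanging forever.
    return await asyncio.wait_for(fut, timeout=5.0)

async def main():
    queue = asyncio.Queue()
    worker = asyncio.create_task(payment_service(queue))
    result = await place_order(queue, 42)
    worker.cancel()
    return result
```

The other widespread approach is to accept the order immediately (HTTP 202 with an order id) and let the client poll or receive a push notification once payment settles.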

Data replication with microservices using a messaging broker

I have a microservice based application like so:

User.Microservice – stores user information.

Product.Microservice – stores products that user created.

Order.Microservice – stores product orders.

Stack:

RabbitMQ – for Event based communication

MongoDB – for Data Storage

Each service has a public HTTP REST API.

Each service can publish and consume messages from RabbitMQ.

Here is how everything communicates (diagram of the current stack):

The client makes a request to the API gateway.

Let’s suppose the client makes a POST request to the product service; the product has a user reference and other details about the product.

Now, if I want to populate the product item with user details such as the user’s username, the API gateway has to make a GET request to the user service.

Here comes the data replication part: I have to add the required information about the user to the product service.

How can I replicate user details to the product service after a product is created, and return the response to the POST request along with the user’s username, if it is not present in the product service?

I could use the current API gateway method where the gateway calls the user service, but if the user service goes down I will not have the required user details…

Somehow I need to run bus events before the response from the POST request is returned…
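One way to sketch the replication (hypothetical names; plain callbacks standing in for RabbitMQ consumers): the product service subscribes to user events and keeps a local copy of just the fields it needs, so a POST response can be enriched without calling the user service at all.

```python
# Product service side: a local replica of the user fields it needs,
# kept up to date by consuming user events from the bus.
user_replica = {}  # user id -> {"username": ...}

def on_user_event(event):
    """Consumer callback for UserCreated / UserUpdated events."""
    if event["type"] in ("UserCreated", "UserUpdated"):
        user_replica[event["user_id"]] = {"username": event["username"]}

def create_product(user_id, name):
    """POST /products handler: enrich from the replica, no user-service call."""
    owner = user_replica.get(user_id)
    return {
        "name": name,
        "owner_id": user_id,
        # Degrade gracefully if the UserCreated event has not arrived yet.
        "owner_username": owner["username"] if owner else None,
    }

# The user service publishes; the product service consumes and caches:
on_user_event({"type": "UserCreated", "user_id": "u1", "username": "alice"})
```

With this shape the replica is populated before the product is ever created (the user had to register first), so the POST response can include the username even when the user service is down; the trade-off is eventual consistency on username changes.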

Repository structure for microservices

As part of a larger project, my team is building a microservices API layer. We do not have experience with building microservices so we have been trying to figure out how to go about the project.

The first decision we are working on is if we should use a mono-repository or split each microservice into a separate repository. It seems like there are pros and cons to each option.

A mono-repo will reduce development time, promote code reuse and make refactoring easier. However, the code base is larger, clone times are longer, we could run into merge issues, and deployment complexity could increase.

A multi-repo approach will have smaller code bases, less code to clone, and reduced deployment complexity. However, this approach will increase the development effort, debugging time can increase, and it makes it difficult to share common code.

To sum it up, it seems multi-repo is better when the application is in production while a mono-repo is better for development.

With all that said, is there anything else I should consider when deciding whether to use a mono-repo or a multi-repo approach?

Microservices and shared libraries

We are designing a system based on independent microservices (connected via a RabbitMq bus). The code will (for the first components at least) be written in python (both python2 and python3). We have already a monolith application implementing some of the business logic, which we want to refactor as microservices, and extend. One question that worries me is:

What is the best way to share code between the different microservices? We have common helper functions (data processing, logging, configuration parsing, etc.) that must be used by several microservices.

The microservices themselves are going to be developed as separate projects (git repositories). The common libraries can be developed as a self-contained project too. How do I share these libraries between the microservices?

I see several approaches:

  • copy around the version of the library that is needed for each microservice, and update as needed
  • release the common libraries to an internal PyPI, and list those libraries as dependencies in the microservice’s requirements
  • include the library repository as a git submodule

I would like to read a bit more about suggested approaches, best practices, and past experiences before deciding how to proceed. Do you have any suggestions or links?
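For the internal-PyPI option, a minimal sketch (package name and index URL are hypothetical): the helpers live in their own repo with a setup.py, get released to the internal index, and each microservice pins a version.

```python
# setup.py in the shared-library repo ("mycompany-common" is a made-up name).
from setuptools import setup, find_packages

setup(
    name="mycompany-common",
    version="1.2.0",
    packages=find_packages(),   # picks up the mycompany_common/ package
    python_requires=">=2.7",    # the services target both python2 and python3
)

# Each microservice's requirements.txt then lists, e.g.:
#   --extra-index-url https://pypi.internal.example.com/simple
#   mycompany-common==1.2.0
```

Compared with copying code around or git submodules, this gives each service an explicit, independently upgradable version pin, at the cost of running a release step and hosting an index.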

Connecting microservices with each other

My only skepticism about connecting microservices over REST/HTTP is that there could be a performance drop when using too many of them: with a REST connection, the data always has to pass through an HTTP server first, and latency becomes an issue. Imagine a data process that needs to pass through hundreds of microservices connected to each other via REST. Is there a better way to achieve this without REST?
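A back-of-envelope sketch of the concern (the numbers are illustrative assumptions, not measurements): per-hop overhead multiplies across a synchronous chain.

```python
# If every hop costs a few milliseconds of connection setup, HTTP parsing
# and JSON (de)serialization, a long synchronous chain pays it N times over.
per_hop_ms = 5    # assumed overhead per REST hop
hops = 100        # services the request passes through, in series
total_ms = per_hop_ms * hops
print(total_ms)   # -> 500 ms of overhead before any real work happens
```

Common mitigations are a binary RPC protocol such as gRPC over persistent HTTP/2 connections, or restructuring the chain as an event pipeline over a message broker, so each hop becomes a one-way publish instead of a blocking request/response.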

Checking tenant information in microservices

I am currently trying my hand at a microservices architecture for the first time, and I am looking to put together a multi-tenant application built on this architecture. Tenants are created with their own subdomain, and the tenant owner can create further user accounts linked to that tenant.

I currently have the identity api set up, and was thinking of composing the rest a bit like the following:

(architecture diagram omitted)

The Gateways are intended to be implemented as Backend-For-Frontend and would aggregate data as necessary to satisfy the client request to that gateway.

In the identity API, I use the SaasKit middleware to check the subdomain and get tenant details. I was wondering what would be the best approach to apply this tenant discovery across the rest of the services? I am wary of creating a coupling that would undermine the autonomy of the microservices. Should I do my tenant discovery in the gateways and pass the tenant ID to the microservices when requests are made? Should I hold local copies of tenant information in each service? Or should I use SaasKit in each service and call out to the identity API to get tenant information if it’s not already cached?

EDIT: To add some context on how tenants are created: tenants are created via an API call from a separate system, which provides a JWT created by a central authentication service separate from this one. Users are also created this way, but the users created here are authenticated here rather than by the ‘other’ authentication service.
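The first of those options, sketched with hypothetical names: the gateway/BFF resolves the subdomain once and forwards a tenant-id header, so downstream services never need SaasKit or a call to the identity API. (The in-memory registry here stands in for the identity API plus a cache.)

```python
# Hypothetical tenant registry; in practice this would be looked up from the
# identity API and cached at the gateway.
TENANTS = {"acme": "tenant-001", "globex": "tenant-002"}

def resolve_tenant(host):
    """Map 'acme.myapp.com' -> tenant id, as SaasKit does in the identity API."""
    subdomain = host.split(".", 1)[0]
    return TENANTS.get(subdomain)

def gateway_forward(host, headers):
    """BFF middleware: attach the tenant id before proxying downstream."""
    tenant_id = resolve_tenant(host)
    if tenant_id is None:
        raise LookupError("unknown tenant")
    return dict(headers, **{"X-Tenant-Id": tenant_id})
```

Downstream services then simply trust the `X-Tenant-Id` header (which requires that they only accept traffic from the gateway), keeping them free of any coupling to the identity API.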