What software architecture should I use for this service?

I have the following service structure.

It will become a web application later this year, so for now this is only the project structure.

I have no experience with software architectures like SOA or microservices, so I can't assess the risks of this structure. The main logic of the service is: we must have one frontend with a user dashboard plus some modules (other API services). A user can register/authenticate only against one main API, and only after registering can he access the other APIs. Sometimes the user receives data from module B.Ext or Module1 and aggregates it in the main frontend. How should I build this structure? Which type should I choose: SOA, microservices, or a monolith? And if I use this structure, is it normal to share some user data from the Main API with the others, like the user's API limits or last operations?
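One common pattern for the data-sharing part of the question is to keep the Main API as the single owner of user data and let the other modules fetch what they need over an internal endpoint, rather than duplicating it. A minimal sketch, with all names and endpoints hypothetical:

```python
# Sketch: module services treat the Main API as the source of truth
# for user data (limits, last operations) instead of keeping a copy.
# FakeMainApi stands in for a real HTTP client; the path is made up.

def get_user_limits(user_id, main_api_client):
    """A module asks the Main API for the user's limits on demand."""
    return main_api_client.fetch(f"/internal/users/{user_id}/limits")

class FakeMainApi:
    """Stand-in for the real Main API, for illustration only."""
    def __init__(self, limits):
        self._limits = limits

    def fetch(self, path):
        user_id = path.split("/")[3]
        return self._limits[user_id]

api = FakeMainApi({"42": {"requests_per_day": 1000}})
print(get_user_limits("42", api))  # {'requests_per_day': 1000}
```

The point of the sketch is that only the Main API stores user data; the other modules hold at most a short-lived cache of it, which keeps registration and limits consistent across services.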

I hope for help and advice, and I will be very grateful. Thank you!

Are there any benefits to a distributed computing architecture outside of cryptocurrencies?

I was recently talking to a German start-up in an accelerator programme. They are building a distributed database that they claim will have efficiency benefits over keeping data in a company's own centralized database. They talked about leveraging the computational capacity of different computers in the company to increase efficiency. From what I know about distributed computing, it is generally less efficient than a centralized architecture. Are there scenarios where it provides efficiency benefits, and is this being applied anywhere?

How to handle role-based access in my microservice architecture

We are trying to move from a monolithic application to a microservice architecture fronted by an SPA. One reason is that we want to expose some of our business services to partners; another is to build a better user experience via the SPA.

In the old application I had a web application where I could register an employee (name, surname, employee number, address…), choose their company from a dropdown list, and at the same time create an account for this employee in a backend ERP; think of it as a back-office application. All the data was on a single form, and when the user submitted it, the backend server saved the employee record and, if successful, created an account in the ERP system.

To manage authorization in this application, the URL to register an employee was accessible only if the logged-in user had a role allowing the "RegisterEmployee" function.

Now in my SPA, I need to call:

  • a microservice to verify that the employee is not already registered,
  • a microservice to get the list of known companies,
  • a microservice to create the employee account in our ERP.

I first tried to define the allowed roles in each microservice, but that seems weird because all my microservices could be reused in different scenarios, and they don't necessarily share the same list of roles…

I have read about API gateways lately, and perhaps that is the way to go, but I am not sure how. Does this mean that my microservices do not have to be aware of any authorization management?
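One way the gateway approach is often described: the gateway (or an auth service behind it) authenticates the user and attaches verified claims to the forwarded request, and each microservice checks a fine-grained permission from those claims rather than maintaining its own role list. A rough sketch, with all role and permission names hypothetical:

```python
# Sketch: the gateway maps business roles to fine-grained permissions,
# so each microservice checks permissions, never roles. This keeps the
# services reusable across scenarios with different role lists.

ROLE_PERMISSIONS = {
    "hr-officer": {"employee:read", "employee:create", "erp:create-account"},
    "viewer": {"employee:read"},
}

def gateway_claims(role):
    """The gateway resolves the logged-in user's role once per request."""
    return {"permissions": ROLE_PERMISSIONS.get(role, set())}

def create_employee(claims, employee):
    """A microservice endpoint: it only checks a permission claim."""
    if "employee:create" not in claims["permissions"]:
        raise PermissionError("missing employee:create")
    return {"created": employee["name"]}

claims = gateway_claims("hr-officer")
print(create_employee(claims, {"name": "Ada"}))  # {'created': 'Ada'}
```

In practice the claims would travel as a signed token (e.g. a JWT) so each service can verify them without calling back to the gateway; the sketch above only shows the division of responsibility.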

InfoPath Forms and Site Architecture

We are currently making plans to migrate to a new version of SharePoint (either 2016 On-Prem or Online). We'll be aiming for a flatter design and are wondering whether it's better to store all forms in one location (one sub-site) or scatter them across the departments they belong to.

The first option would give one central location for the workflows (should they exist), and a single place users could go to find their desired form. Additionally, you can link to a form with a URL/hyperlink or web part, so forms could be accessed from anywhere on the intranet.

The second option would reduce clutter in a central location and make it slightly easier to manage who has permissions for each form.

What makes the most sense to pursue? Or does it depend on the environment we are running?

Architecture for a Reddit-esque website

I’m trying to create a website as a forum similar to Reddit, but on a smaller scale (and without comments). The website will allow users to post a link to a forum. They can then view a list of New or Trending links based on upvotes (gathered from an external API).

So basically all I need is a REST API for posting links to one of many long lists, which users can then view with GET requests. There also needs to be a quick way to get the last 7 days of links so that the most popular ones can be computed. I’m accommodating a maximum of about 100,000 links posted daily (about 1 per second) and maybe 10 requests to the Trending and New lists.

My current architecture would be:

  • Store the past n (likely n = 7) days of links in memory, then periodically flush that list to a long-term storage file.
  • To get New links, generally just query memory, but if a subforum is not very active, fetch older links from the file.
  • Periodically update a short in-memory list of Trending links for each subforum, and serve Trending requests from that list.
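The in-memory part of the plan can be sketched as a per-subforum list with a retention window and a periodic flush. A minimal sketch (class and method names are made up; a plain list stands in for the storage file):

```python
import time
from collections import defaultdict

# Sketch: per-subforum link lists kept for the last n days, with a
# periodic flush that moves older links into long-term storage.

class RecentLinks:
    def __init__(self, days=7):
        self.window = days * 86400       # retention window in seconds
        self.links = defaultdict(list)   # subforum -> [(timestamp, url)]
        self.archive = []                # stand-in for the storage file

    def post(self, subforum, url, now=None):
        ts = now if now is not None else time.time()
        self.links[subforum].append((ts, url))

    def new_links(self, subforum):
        """Newest first, served straight from memory."""
        return [u for _, u in sorted(self.links[subforum], reverse=True)]

    def flush(self, now=None):
        """Move links older than the window into long-term storage."""
        cutoff = (now if now is not None else time.time()) - self.window
        for sub in self.links:
            self.archive.extend(i for i in self.links[sub] if i[0] < cutoff)
            self.links[sub] = [i for i in self.links[sub] if i[0] >= cutoff]

store = RecentLinks(days=7)
store.post("python", "http://a.example", now=0)
store.post("python", "http://b.example", now=8 * 86400)
store.flush(now=8 * 86400)
print(store.new_links("python"))   # ['http://b.example']
print(len(store.archive))          # 1
```

At 100,000 links/day (a few MB of data), 7 days fits comfortably in memory on a single node; the main scaling question is durability of the not-yet-flushed links if the process crashes.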

This is mostly built for learning, and it will not reach 100,000 links daily, but I want to build something that can scale to that level so that I am prepared if I ever have to.

Would my architecture above be scalable? Is there a better way of doing this, or any other suggestions? Thanks in advance for your help.

Architecture patterns or SaaS solutions for throttling outgoing messages?

I’m implementing my service on AWS as a set of stateless EC2 nodes controlled by an ASG, backed by an RDS instance. I need to throttle the overall number of outgoing messages my service sends but can’t throttle the number of incoming messages.

The specific outgoing messages in this case are SES messages, but the problem seems pretty general to me (my service sends transactional mail, this is not a bulk-mailing problem).

“Use Guava RateLimiter” is not a sufficient solution: the number of instances in my ASG needs to remain variable, but the overall number of messages sent across the group of nodes needs to stay under the limit. I think there will have to be some shared state here.

I’m thinking I’ll have to implement some kind of work-item queue in the database and then some kind of slot-based throttling system to limit the number of outgoing messages each node processes, so that the overall number of outgoing messages doesn’t exceed my limit.
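The slot idea can be sketched as a shared send budget per time window: each node atomically claims sends from the shared store and skips (or defers) sending when the window's budget is exhausted. In the sketch below a dict plus a lock stands in for the shared state that would really live in RDS or ElastiCache as an atomic increment; all names are hypothetical:

```python
import threading

# Sketch: a per-window send budget shared by all nodes. In production
# the counter would be a row in RDS (or an INCR in Redis) so every node
# in the ASG sees the same count; here a dict plus a lock stands in.

class WindowBudget:
    def __init__(self, limit_per_window):
        self.limit = limit_per_window
        self.counts = {}
        self.lock = threading.Lock()

    def try_claim(self, window_id, n=1):
        """Atomically claim n sends in the given window; False if over budget."""
        with self.lock:
            used = self.counts.get(window_id, 0)
            if used + n > self.limit:
                return False
            self.counts[window_id] = used + n
            return True

budget = WindowBudget(limit_per_window=3)
results = [budget.try_claim(window_id=0) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

A rejected claim would leave the work item in the queue for a later window, which is what makes the approach tolerant of a variable number of nodes.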

My solution doesn’t need to be internet-scale, but I’d like it to handle a few thousand outgoing messages per second. I’m pretty sure the queue/slot-processor system above will easily stretch that far on an RDS instance (and it can be refactored later to work off of ElastiCache or something). Another thing I like about the proposed system is its minimal initial cost at low scale (well, discounting my cost to write the code).

Are there any AWS services, SaaS solutions, or even just libraries that can help me with this? Cost-wise, I expect a full-fledged AWS/SaaS solution to have a cheap introductory tier, and as long as the price doesn’t get crazy as I scale up, it should be fine.

Clean Architecture: Dependency Rule and Libraries/Frameworks

In Clean Architecture by Robert C. Martin, the dependency rule points strictly from the outermost layer/ring to the innermost.

As an example, a dependency injection framework should lie in the outermost layer (frameworks) so that it can be replaced.

However, a DI framework that relies on attributes would clearly break this, as any class requiring those attributes would have a dependency on the framework. Therefore such a library could not be used while following the dependency rule strictly.

I am running into the same problem with utility libraries, e.g. a Math Library or some Rx library providing IObservables/Subjects.

The math library could be wrapped by an adapter to keep it replaceable, which makes sense, but an entity framework that provides the scaffolding for entities (innermost layer) as well as systems (business rules) and maybe even the UI (presenters) simply does not go well with this design.

However, even for the math library, the cost of adding the interfaces for dependency inversion plus an adapter sounds pretty insane.
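For the math-library case, the adapter cost can stay small if the inner layer declares only the operations the business rules actually use, rather than mirroring the whole library surface. A minimal sketch (names hypothetical; `fancy_math` stands in for the third-party library):

```python
# Inner layer: declares only the operations the business rules need.
class VectorOps:
    def dot(self, a, b):
        raise NotImplementedError

# Outer layer: adapter over the third-party library. Only this class
# changes if the library is swapped; the inner layer never imports it.
class FancyMathAdapter(VectorOps):
    def dot(self, a, b):
        # return fancy_math.dot(a, b)  # hypothetical library call
        return sum(x * y for x, y in zip(a, b))  # fallback for the sketch

ops = FancyMathAdapter()
print(ops.dot([1, 2], [3, 4]))  # 11
```

Whether that narrow interface is worth it is exactly the trade-off the question is about; the sketch only shows that the interface can be far smaller than the library it hides.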

Am I missing something, or is this more or less a rule that is commonly broken when trying to implement Clean Architecture?

Windows Service 24/7 consistent with clean architecture

I have to write a Windows service that runs 24/7. The service must process documents from various sources. I have written some code, but I don’t know how to split it into assemblies and folders consistent with Clean Architecture. It is a Windows service, so I don’t have a presentation layer. Can I get some advice? This is a simplified version of my code:

Configuration

public interface IConfiguration
{
    int ThreadCount { get; set; }
    Dictionary&lt;string, string&gt; ConnectionStrings { get; set; }
}

public class Configuration : IConfiguration
{
    public int ThreadCount { get; set; }
    public Dictionary&lt;string, string&gt; ConnectionStrings { get; set; }
}

public interface IDocumentProviderConfiguration
{
    string Name { get; set; }
    string Type { get; set; }
    string StoredProcedure { get; set; }
    string ConnectionString { get; set; }
}

public class DocumentProviderConfiguration : IDocumentProviderConfiguration
{
    public string Name { get; set; }
    public string Type { get; set; }
    public string StoredProcedure { get; set; }
    public string ConnectionString { get; set; }
}

Root class

public interface IService { }

public class Service : IService
{
    private readonly CancellationTokenSource _cancellationTokenSource;
    private readonly CancellationToken _cancellationToken;
    private readonly List&lt;Task&gt; _tasks;

    private readonly IConfiguration _configuration;
    private readonly IFeedingProvider _feedingProvider;
    private readonly Func&lt;Processor&gt; _processorFactory;

    // A processor factory is injected so RunProcessor no longer constructs
    // TestProcessor directly (its constructor needs dependencies).
    public Service(IConfiguration configuration, IFeedingProvider feedingProvider,
                   Func&lt;Processor&gt; processorFactory)
    {
        _configuration = configuration;
        _feedingProvider = feedingProvider;
        _processorFactory = processorFactory;

        _cancellationTokenSource = new CancellationTokenSource();
        _cancellationToken = _cancellationTokenSource.Token;
        _tasks = new List&lt;Task&gt;();
    }

    public void StartProcessing()
    {
        _feedingProvider.Start();

        for (var i = 0; i &lt; _configuration.ThreadCount; i++)
        {
            _tasks.Add(Task.Run(() =&gt; RunProcessor(), _cancellationToken));
        }
    }

    public void StopProcessing()
    {
        _feedingProvider.Stop();
        _cancellationTokenSource.Cancel();
        Task.Factory.ContinueWhenAll(_tasks.ToArray(), result =&gt; { }).Wait();
    }

    public void RunProcessor()
    {
        System.Diagnostics.Debug.Print("started");

        var processor = _processorFactory();
        try
        {
            while (!_cancellationTokenSource.IsCancellationRequested)
            {
                var sleepTime = 1000;
                using (var item = _feedingProvider.Dequeue())
                {
                    if (item != null)
                    {
                        var result = processor.Process(item);
                        //log
                        item.Commit();
                        sleepTime = 0;
                    }

                    Task.Delay(sleepTime, _cancellationToken).Wait(_cancellationToken);
                }
            }
        }
        catch (Exception exception)
        {
            if (!_cancellationToken.IsCancellationRequested)
            {
                //log
            }
        }
    }
}

Feeding part

public interface IItem : IDisposable
{
    Guid Id { get; set; }
    void Commit();
}

public interface IDocumentProvider
{
    IEnumerable&lt;IItem&gt; GetItems();
}

public interface IFeedingProvider
{
    void Start();
    void Stop();
    IItem Dequeue();
}

public class FeedingProvider : IFeedingProvider
{
    private readonly ConcurrentQueue&lt;IItem&gt; queue;
    private readonly List&lt;IDocumentProvider&gt; documentProviders;
    private readonly Timer feedingTimer;

    // The document providers are injected, and the timer is wired to Load
    // (neither was set up in the original, so Load never ran).
    public FeedingProvider(List&lt;IDocumentProvider&gt; documentProviders)
    {
        this.documentProviders = documentProviders;

        queue = new ConcurrentQueue&lt;IItem&gt;();
        feedingTimer = new Timer(2000);
        feedingTimer.Elapsed += (sender, args) =&gt; Load();
    }

    public void Start()
    {
        feedingTimer.Start();
    }

    public void Stop()
    {
        feedingTimer.Stop();
    }

    public IItem Dequeue()
    {
        queue.TryDequeue(out var item);
        return item;
    }

    public void Load()
    {
        foreach (var documentProvider in documentProviders)
        {
            foreach (var item in documentProvider.GetItems())
            {
                queue.Enqueue(item);
            }
        }
    }
}

Process Part

public abstract class Processor
{
    public abstract ProcessingResult Process(IItem item);
}

public class TestProcessor : Processor
{
    private readonly ICustomerProvider _customerProvider;
    private readonly IDocumentRepository _documentRepository;

    public TestProcessor(ICustomerProvider customerProvider, IDocumentRepository documentRepository)
    {
        _customerProvider = customerProvider;
        _documentRepository = documentRepository;
    }

    public override ProcessingResult Process(IItem item)
    {
        // Process the item using _customerProvider and _documentRepository.
        return new ProcessingResult
        {
            IsSuccess = true
        };
    }
}

public class ProcessingResult
{
    public bool IsSuccess { get; set; }
}

DocumentProvider part

public class TestDocumentProvider : IDocumentProvider
{
    private readonly IDocumentProviderConfiguration _documentProviderConfiguration;

    public TestDocumentProvider(IDocumentProviderConfiguration documentProviderConfiguration)
    {
        _documentProviderConfiguration = documentProviderConfiguration;
    }

    public IEnumerable&lt;IItem&gt; GetItems()
    {
        foreach (DataRow row in GetDataTable().Rows)
        {
            yield return new TestDocument(row);
        }
    }

    public DataTable GetDataTable()
    {
        throw new NotImplementedException();
    }
}

public class TestDocument : IItem
{
    public Guid Id { get; set; }

    private readonly DataRow dataRow;

    public TestDocument(DataRow dataRow)
    {
        this.dataRow = dataRow;
    }

    public void Commit()
    {
        throw new NotImplementedException();
    }

    public void Dispose()
    {
        throw new NotImplementedException();
    }
}

Additional parts

public interface IDbProvider
{
    void ExSPNonQuery(string sqlCommandName, List&lt;SqlParameter&gt; sqlParams = null);

    DataTable ExSPDataTable(string sqlCommandName, List&lt;SqlParameter&gt; sqlParams = null);
}

public class DbProvider : IDbProvider
{
    private readonly string connectionString;

    public DbProvider(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public void ExSPNonQuery(string sqlCommandName, List&lt;SqlParameter&gt; sqlParams = null)
    {
        throw new NotImplementedException();
    }

    public DataTable ExSPDataTable(string sqlCommandName, List&lt;SqlParameter&gt; sqlParams = null)
    {
        throw new NotImplementedException();
    }
}

public interface ICustomerRepository
{
    IEnumerable&lt;Customer&gt; GetCustomers();
}

public class CustomerRepository : ICustomerRepository
{
    private readonly IDbProvider dbProvider;

    public CustomerRepository(IDbProvider dbProvider)
    {
        this.dbProvider = dbProvider;
    }

    public IEnumerable&lt;Customer&gt; GetCustomers()
    {
        var customers = dbProvider.ExSPDataTable("dbo.GetCustomers");

        return customers.AsEnumerable().Select(c =&gt; new Customer
        {
            Id = c.Field&lt;int&gt;("Id"),
            Name = c.Field&lt;string&gt;("Name")
        });
    }
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IDocumentRepository
{
    void AddSomethingToDocument();

    void DeleteSomethingFromDocument();
}

public class DocumentRepository : IDocumentRepository
{
    private readonly IDbProvider dbProvider;

    public DocumentRepository(IDbProvider dbProvider)
    {
        this.dbProvider = dbProvider;
    }

    public void AddSomethingToDocument()
    {
        throw new NotImplementedException();
    }

    public void DeleteSomethingFromDocument()
    {
        throw new NotImplementedException();
    }
}

Where to store data for Microservices Architecture?

I currently build applications that are fairly monolithic. I have one or more codebases that compile into a single binary/package and are deployed on a cluster of Docker containers. All of the data is stored in a single MySQL database, a Redis cluster, and possibly a NoSQL database for some of the data.

In this case, the bulk of my data is stored in a MariaDB RDS instance on Amazon Web Services. This works fairly well because RDS handles automated backups and other conveniences.

However, let’s say I want to split a service out into its own “microservice”. Where would it store its data? If I have 5 microservices, spinning up 5 RDS instances and 5 Redis clusters doesn’t seem very cost-effective and looks like a lot of management overhead.

It seems to me that a cluster of Docker containers would be more manageable for a single microservice. For example, using something like docker-compose, you can spin up several Docker images as a single unit very easily. AWS has a similar concept called “Task Definitions” (to my knowledge), which you can launch on AWS Fargate or ECS. However, Fargate does not allow persistent storage, so you are basically forced to put your database in something like RDS.
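The docker-compose idea can be sketched as one compose file per microservice that bundles the app with its own datastores. This is only a config sketch; the service names, images, and versions are placeholders:

```yaml
# Sketch: one compose file per microservice, bundling the app with
# its own database and cache. All names and images are hypothetical.
version: "3"
services:
  orders-api:
    image: myorg/orders-api:latest
    depends_on: [orders-db, orders-cache]
  orders-db:
    image: mariadb:10
    volumes:
      - orders-data:/var/lib/mysql   # persistent volume; this is the part Fargate lacks
  orders-cache:
    image: redis:6
volumes:
  orders-data:
```

The same layout maps roughly onto an ECS task definition, with the persistent-storage caveat mentioned above for Fargate.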

I suppose this is a fairly open-ended question, and it might pertain to DevOps more than software engineering proper. How would someone design a microservice to be easily deployed in the cloud while remaining easily maintained as a separate but packaged unit of sub-services (app server, databases, cache, etc.)? Using docker-compose and Amazon Task Definitions seems like the best way to keep consistency between development/staging/production environments, but it does have limitations, such as the lack of persistent storage on Fargate.

I’m just looking for examples of how someone might achieve this, to help my understanding.