How to better scale reads on a website: CQRS vs. a single domain model with a Redis cache

The company I’m working for is rewriting a legacy web application, and we are evaluating the best architecture to get the job done.

The business domain is quite simple: there are a few entities, a few relationships, and simple business rules.

We expect a low number of concurrent writes, but a huge number of reads: the expected read workload is much greater than the expected write workload.

The web application is meant to allow a small number of editors to create entities called blogs, which are composed of posts. There are different types of posts: textual posts, photos, links to YouTube videos, links to Twitter posts, and so on.

The expected workflow is that an editor, who is following a live sport event, is in charge of editing a dedicated blog, which is basically a stream of posts for the event: the editor creates one post for each interesting fact of the event. The main concern is having each post available to the mobile and web clients as soon as possible, so that people all around the world can follow the event.

We have evaluated two possible architectures:

  • a CQRS architecture with two different databases, one for the command stack and one for the query stack
  • a simpler architecture with one domain model supporting both writes and reads, and one database

The main advantage of the CQRS approach is the possibility to distribute content to the clients in an optimized way: dedicated read models keep reads from the read-model database simple and fast. This way the read and write sides can be scaled independently, exploiting the difference between the write and read workloads highlighted above.
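To make the idea of a dedicated read model concrete, here is a hypothetical sketch of a denormalized document for one live blog (the field names and values are illustrative assumptions, not part of the actual design): the read database stores exactly what clients need to render the stream, so a single document lookup serves each request.

```python
# Hypothetical denormalized read model for one live blog.
# All data a client needs is in one document, already sorted for display.
read_model = {
    "blog_id": "world-cup-final",      # assumed identifier
    "title": "World Cup Final - Live",
    "posts": [
        {"type": "text",  "body": "Kick-off!"},
        {"type": "tweet", "url": "https://twitter.com/..."},
        {"type": "video", "url": "https://youtube.com/..."},
    ],
}

# Serving a read is then a single lookup plus serialization,
# e.g. fetching the two most recent posts:
latest_posts = read_model["posts"][-2:]
print(len(latest_posts))  # → 2
```

The write side would rebuild or patch this document whenever an editor publishes a post; the read side never joins or aggregates at request time.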

The main advantage of the single domain model approach is a much simpler architecture. In this scenario the commands from the editors are processed synchronously, and when a command is successfully processed the editor knows for sure that their work is saved in the database and available to clients. There is no eventual consistency between write model and read model, no need to handle the asynchronous update of the read model from the perspective of the back-office UI users (the editors), and no messaging system involved in mission-critical tasks.

In my opinion, considering our requirements, the best way to go is the single domain model architecture, scaling reads with an aggressive caching strategy. The idea is to use Redis as a cache to limit database access, and to try to update the cache layer each time we write something to the database, using a streaming approach (we will probably use MongoDB, and our first idea is to exploit its change stream feature).
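A minimal sketch of that cache-update strategy might look like the following. A plain dict stands in for Redis so the example is self-contained, and the events mimic the shape of MongoDB change stream documents (in production they would come from `collection.watch()`, with `full_document='updateLookup'` needed if update events should carry the full document); the key scheme and handler name are assumptions for illustration.

```python
cache = {}  # stand-in for Redis (in production: redis.Redis() and SET/DEL)

def apply_change(event):
    """Mirror one database change into the cache."""
    key = "post:%s" % event["documentKey"]["_id"]
    op = event["operationType"]
    if op in ("insert", "update", "replace"):
        cache[key] = event["fullDocument"]   # write-through on any write
    elif op == "delete":
        cache.pop(key, None)                 # invalidate on delete

# Simulated stream: an editor creates a post, then deletes it.
apply_change({"operationType": "insert",
              "documentKey": {"_id": 1},
              "fullDocument": {"body": "Goal!"}})
apply_change({"operationType": "delete",
              "documentKey": {"_id": 1}})
print(sorted(cache))  # → []
```

Reads would then hit the cache first and fall back to the database only on a miss, which is what keeps the read path cheap under the expected read-heavy workload.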

Do you think that a properly sized database and a wise caching strategy with Redis could be enough to handle our read needs? Or, conversely, in a scenario where the read load is much greater than the write load, is the best way to go a CQRS architecture (even at the cost of greater overall complexity)?

Better Traffic Sites

Over the last 2-5 years I have seen other small search engines driving more traffic, and answer sites with better marketing efforts, than Yahoo. The same is the case with Google: both companies will run into losses in their efforts to acquire small companies in their niches.
For example, Google keeps acquiring small chat apps and promoting them through the official staff it hires.
Orkut was shut down; the same will happen with both these companies, and the Internet will be a better place again.

Internal pages ranking over the homepage: How to optimise to rank better at Google?


We have experienced a shift in the SERPs for more than a year, with internal pages ranking above the website homepage. Previously, homepages used to rank for a primary keyword like "SEO". Now internal pages are ranking for the primary keyword "SEO": Google is picking these "what is ABC" pages over the homepage. All our competitors' sites are ranking with internal pages about "what is (primary keyword)". We have the same kind of internal "what is…." page, but it is not ranking; only our homepage is. Moreover, we dropped more than 15 positions after this shift in the SERPs. How should we diagnose this?


Better approach to fill in the details in a form?

There are two UX patterns that I have noticed in many applications when it comes to filling in details.

  1. Creating a new entity or editing an existing one through a sliding panel: clicking on "New" brings out the sliding panel.

  2. Opening a new page to enter the details. This is seen mainly in places where there is a need for tabs or there are too many entries to provide.

What are the scenarios where one would be preferable over the other? Is it recommended to use the new-page pattern if there are more than a certain number of fields to be entered?

Naming conventions for .NET C# unit test projects for better sorting

I currently use the following naming scheme for my unit test projects: if I have a project “MyApp”, I will have

-- MyApp
-- MyApp.Tests

I see this is quite a common practice.

Now, the problem is: if I then have another project, MyApp.Common, then in Solution Explorer my projects are sorted as

-- MyApp
-- MyApp.Common
-- MyApp.Common.Tests
-- MyApp.Tests

So MyApp.Tests is no longer “next to” the main project it is testing. As the projects grow, the tests can become mixed up all over the place.

I know it is not a big thing, but I'm just wondering if anyone else has encountered this and thought of any other way?

Thanks in advance for any suggestions.

Using Socket.IO for player input handling, is it better to send input as a parameter or have multiple input handlers?

I am building a multiplayer game using a Node.js server with Socket.IO for gameplay communication.

While developing player input, I wondered:

Would it be more efficient if I had one event handler for a given category of input, say movement, with the specific movement passed as a parameter; or would it be better if I handled every type of movement input as its own event?

For instance, let’s say the player can move up, down, left, or right. We are handling player input from a client, delivered as a Socket.IO event, on the server side.

Option 1: One parameterized event handler

// io is the Socket.IO server instance, e.g.:
// const io = require('socket.io')(server);
io.on('connection', (socket) => {
    socket.on('move', (direction) => {
        switch (direction) {
            case 'up':
                // handle player movement up
                break;
            case 'down':
                // handle player movement down
                break;
            case 'left':
                // handle player movement left
                break;
            case 'right':
                // handle player movement right
                break;
            default:
                break;
        }
    });
});

Option 2: Events for each type of movement

// io is the Socket.IO server instance, e.g.:
// const io = require('socket.io')(server);
io.on('connection', (socket) => {
    socket.on('move_up', () => {
        // handle player movement up
    });

    socket.on('move_down', () => {
        // handle player movement down
    });

    socket.on('move_left', () => {
        // handle player movement left
    });

    socket.on('move_right', () => {
        // handle player movement right
    });
});

The first option reduces the number of event names the server must try to match. However, the payload for each event is larger and additional processing by the server is required.

The second option reduces the payload over the TCP connection. It also saves the server from having to match on the direction parameter in addition to the event name (in this case ‘move’). However, there would be many more event handlers.

Ultimately I’m interested in which option provides quicker server-side processing or reduces load on the server, especially when scaling the number of players. Should I optimize for as few events as possible for the server to match, or for as little payload as possible per event?

[ Movies ] Open Question : Is Robert De Niro better than Kevin Spacey?

Better person, obviously: Spacey is evil scum, while De Niro seems a good guy. But as an actor, is De Niro better? I’d rather not see Spacey’s face again in any film or TV show; however, in House of Cards, the character of Frank Underwood is a very good character: he is so evil, yet you can’t help rooting for him. My question is: has De Niro ever played an evil character that you can’t help rooting for? I have seen many films of his where he plays a gangster, but he has some redeemable qualities; for example, in Goodfellas (1990) he chastises Tommy for killing the waiter, and he tells Henry off about his drug use. Has De Niro ever played an evil villain before?

Better way of accessing the stateful variable objects in the given class hierarchy

I already asked this question at Stack Overflow, together with a serialization-related part. Since the design-related part received no answers or comments, I’d like to have a review of it here.

The problem: I have a complicated class hierarchy in which the classes are similar to each other but every class contains a bunch of more or less complicated stateful variables. To give you an impression, please have a look at this minimal working example:

class OptVar:
    """
    Complicated stateful variable
    """
    def __init__(self, name="", **kwargs):
        self.name = name
        self.parameters = kwargs


class OptVarContainer:
    """
    Class which contains several OptVar objects and nested OptVarContainer
    classes. Is responsible for OptVar management of its sub-OptVarContainers
    with their respective OptVar objects.
    """
    def __init__(self, name="", **kwargs):
        self.name = name
        for (key, value_dict) in kwargs.items():
            setattr(self, key, OptVar(name=key, **value_dict))

    def getAllOptVars(self):

        def addOptVarToDict(var, dictOfOptVars=None, idlist=None,
                            reducedkeystring=""):
            """
            Accumulates optimizable variables in var and its linked objects.
            Ignores ring-links and double links.

            @param var: object to evaluate (object)
            @param dictOfOptVars: optimizable variables found so far (dict of objects)
            @param idlist: ids of objects already evaluated (list of int)
            @param reducedkeystring: used to generate the dict key (string)
            """
            # avoid the mutable-default-argument pitfall
            if dictOfOptVars is None:
                dictOfOptVars = {}
            if idlist is None:
                idlist = []

            if id(var) not in idlist:
                idlist.append(id(var))

                if isinstance(var, OptVarContainer):
                    for (k, v) in var.__dict__.items():
                        newreducedkeystring = reducedkeystring + var.name + "."
                        dictOfOptVars, idlist = addOptVarToDict(
                            v, dictOfOptVars, idlist, newreducedkeystring)
                elif isinstance(var, OptVar):
                    newreducedkeystring = reducedkeystring + var.name
                    dictOfOptVars[newreducedkeystring] = var

            return dictOfOptVars, idlist

        (dict_opt_vars, idlist) = addOptVarToDict(self, reducedkeystring="")
        return dict_opt_vars


class C(OptVarContainer):
    """
    Specific implementation of class OptVarContainer
    """
    def __init__(self, name=""):
        super(C, self).__init__(name=name,
                **{"my_c_a": {"c1": 1, "c2": 2},
                   "my_c_b": {"c3": 3, "c4": 4}})


class B(OptVarContainer):
    """
    Specific implementation of class OptVarContainer
    """
    def __init__(self, name=""):
        super(B, self).__init__(name=name, **{"b": {"1": 1, "2": 2}})
        self.c_obj = C(name="cname")


class A(OptVarContainer):
    """
    Specific implementation of class OptVarContainer
    """
    def __init__(self, name=""):
        super(A, self).__init__(name=name,
                **{"a1": {"1": 1, "2": 2},
                   "a2": {"a": "a", "b": "b"}})
        self.b_obj = B(name="bname")


def main():
    # creating OptVarContainer with some nested OptVarContainers
    my_a_obj = A(name="aname")

    # It is intended behaviour to access the OptVar objects via
    # scoping within the class hierarchy.
    print(my_a_obj.b_obj.b.parameters)
    my_a_obj.b_obj.b.parameters["2"] = 3
    print(my_a_obj.b_obj.b.parameters)
    print(my_a_obj.b_obj.c_obj.my_c_a.parameters["c1"])
    my_a_obj.b_obj.c_obj.my_c_a.parameters["c1"] = 6
    print(my_a_obj.b_obj.c_obj.my_c_a.parameters)

    # This construction is quite ugly:
    # a) due to the dict: order is not fixed
    # b) due to recursion (and maybe lexical sorting) it is not fast
    # c) goal: hand over these optvars into some numerical optimization
    #    via a numpy array => should be fast
    optvarsdict = my_a_obj.getAllOptVars()
    print(optvarsdict)


if __name__ == "__main__":
    main()

The question is now: how best to access the OptVar objects? I have already thought about using some kind of pool object, with a proxy within the class hierarchy holding a link to the pool. This pool should not be a singleton, since it should be possible to manage more than one pool at a time. Up to now, access to the OptVar variables within my own code is done via the getAllOptVars function, which is quite ugly and only considered temporary. Is there a better alternative to this function?
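A minimal sketch of the pool idea mentioned above might look like the following (the names OptVarPool and register are hypothetical, not from the original code): each OptVar registers itself with an explicitly passed-in pool at construction time, so collecting them later needs no recursion, isinstance, or id queries.

```python
class OptVarPool:
    """Explicit, non-singleton registry of OptVar objects."""
    def __init__(self):
        self._vars = {}  # qualified name -> OptVar

    def register(self, qualified_name, optvar):
        self._vars[qualified_name] = optvar

    def as_list(self):
        # stable, sorted snapshot that could be handed to an optimizer
        return [self._vars[k] for k in sorted(self._vars)]


class OptVar:
    """Stateful variable that announces itself to a pool."""
    def __init__(self, pool, qualified_name, **kwargs):
        self.name = qualified_name
        self.parameters = kwargs
        pool.register(qualified_name, self)  # registration at construction


# Usage: the containers would pass their pool (and name prefix) down.
pool = OptVarPool()
a = OptVar(pool, "aname.a1", x=1)
b = OptVar(pool, "aname.bname.b", y=2)
print([v.name for v in pool.as_list()])  # → ['aname.a1', 'aname.bname.b']
```

The cost of this approach is that the qualified name must be threaded through the constructors, but traversal of the hierarchy disappears entirely, and since the pool is an ordinary object, several pools can coexist.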

To summarize and clarify my goals:

  1. The class hierarchy is OK and reflects the model context of our problem.
  2. The OptVars belong to each object in the hierarchy and should stay there, also due to the model context.
  3. The access and post-processing of the OptVars (i.e. collecting them from the hierarchy and handing them over to an optimizer as a numpy array) is not considered optimal. There I need some suggestions for doing better (i.e. getting rid of the isinstance and id queries).
  4. Nice to have: decoupling serialization of the OptVars, and version management, from the object hierarchy.

I am aware that no unique solution exists for this design problem, but I need further input on it.

The full context of this question is given here, but after some discussions I separated out the design part.