Domain-Driven Design – updating part of an aggregate

I’m playing around with DDD in a node.js project and struggling with updates to child entities from the aggregate root.

For the sake of example, let’s say I have two domain objects where Event is my aggregate root:

    class Event {
      _id
      _date
      _comments = []
    }

    class Comment {
      _id
      _name
      _content
    }

I understand that DDD dictates that an EventRepository is in charge of saving any changes to the AR and child entities and hides all the persistence details. So I should call something like EventRepository.save(event) at this point and be done.

Let’s say that an event currently has 3 comments and a user has edited the content of the second comment. The implication now is that my repository needs to fetch my event and comments and compare with my domain objects to figure out which need to be persisted. That’s doable but seems inefficient when the reality is I know that only one comment needs to be updated. It would also be simple to add another repository for comments but now I’ve veered away from pure DDD. I’ve seen discussions of the Unit of Work pattern but I’m not sure how this ties into my repository. It would also be simple enough to just have some sort of _isDirty on all my domain objects but now I’ve polluted them with persistence information.

I’m not sure it’s relevant to the design pattern I’m struggling with but I’m doing this POC with objection.js and Postgres. Appreciate any feedback.
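
To make the setup concrete, here is a minimal sketch of what I imagine the repository could look like if I let objection.js do the change detection, assuming Event and Comment are mapped to objection.js models with a comments relation (the model and property names below are just illustrative):

    const { transaction } = require('objection');

    // Illustrative repository: EventModel is assumed to be an objection.js Model
    // with a HasManyRelation named 'comments'.
    class EventRepository {
      constructor(EventModel) {
        this.EventModel = EventModel;
      }

      // Persist the whole aggregate. upsertGraph compares the supplied graph with
      // what is currently in the database: changed comments are updated, new ones
      // inserted, and (by default) comments missing from the graph are deleted.
      async save(event) {
        const graph = {
          id: event.id,
          date: event.date,
          comments: event.comments.map(c => ({ id: c.id, name: c.name, content: c.content })),
        };
        return transaction(this.EventModel.knex(), trx =>
          this.EventModel.query(trx).upsertGraph(graph)
        );
      }
    }

The trade-off is that the repository still sends the full comments array to the persistence layer, but the dirty-checking lives there rather than in the domain objects.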

Product discovery: the most forgotten part of e-commerce

Hey guys,

Since I'm working in digital marketing, I've noticed that a huge effort goes into acquisition and sales funnel optimization. However, the product discovery part is kind of forgotten.
I wrote an article explaining my point: https://medium.com/@yanis.sif_29734/product-discovery-the-most-forgotten-part-of-e-commerce-f48c9136f901

I'd like to know if you actually share this view? I find it incredible to spend a lot of money on making people come to one's website…

How would you move forward after completing the planning, requirements modelling and analysis part of the development process?

I am following RUP/USDP practice. I have made a use case model, high-level and expanded use cases, interaction diagrams, and a class diagram.

Question: Explain how you would move forward from this part of the development process.

Github gives me the number of commits on my branch. Can I use this as part of the argument to git rebase -i?

I am not, and will never be, a git expert, so kindly be gentle.

I have a PR branch which has—Github tells me—let’s say 12 commits on it.

While I’ve been working on this PR, I’ve merged master (the branch it diverged from originally) into it a couple of times. I am planning on squashing all commits down to one before this PR is accepted and merged.

I know that I am going to use git rebase -i to do this. The general area of this question is how to come up with the argument to supply to this command.

Given the merges from master into my PR branch and so on, I don't think git merge-base gives me the right SHA to supply as an argument to git rebase -i. It will, in fact, give me the most recent commit that “came over” from master, instead of the earliest point at which my branch “started”.

I have, of course, read this excellent answer. Its “shell magic” answer works, but I don’t fully understand it, and I don’t like relying on magic I don’t understand.

My question, therefore, is: From my PR branch, can I, instead, use this comparatively simple command: git rebase -i HEAD~12, where 12 is whatever Github tells me is the number of commits on my branch?

Is this number of commits OK to supply after the tilde (~)? Is this a reliable, relatively brain-dead-easy thing to do? Or will I trip myself up on yet another inscrutable git option or convention?
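
For concreteness, here is roughly what each of these commands does on my branch (assuming the base branch is called master; git rev-list is just one local way to cross-check the count the PR page reports):

    git merge-base master HEAD          # most recent common ancestor; after merging master in, this is the last master commit that came over
    git rev-list --count master..HEAD   # commits reachable from my branch but not from master, merge commits included
    git rebase -i HEAD~12               # walks back 12 steps along first parents from HEAD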

Keto Buzz: Burn Your fatness from the different part of body

Keto Buzz reviews: Weight loss was top notch in my book. Let's face it, there is a lot to learn. This is just one of the several checks and balances. It's true that not everybody has this kind of support system for weight loss. The basics behind planning your weight loss Formula are just like planning your weight loss Tips. This is something that my step-mother repeats often, "One man's loss is another man's gain."

Click here…

Balancing function call overhead and testability in code that is part of a deep learning model's training loop

I am currently implementing the transformer architecture for sequence-to-sequence problems. A key part of the model is the attention mechanism, which is basically a matrix multiplication followed by a masking operation and a softmax function. My initial thought was to wrap these 3 steps in a function that looks like this:

    def attention(self, matrix_1, matrix_2, mask=None, trans_1=False, trans_2=False):
        att_stage_1 = F.matmul(matrix_1, matrix_2, transa=trans_1, transb=trans_2) * self.scale_score
        att_stage_2 = F.where(mask, att_stage_1, self.np.ones(att_stage_1.shape, 'f') * (-1e9))
        return F.softmax(att_stage_2, axis=3)

I want to write unit tests for this function to check whether the output is what I expect. The problem, however, is that this function, as it stands, performs 3 separate operations: matmul, masking and softmax. I would prefer to verify that each of these operations produces correct output, but as it is I can only check the final result. This leads me to a design where I would wrap each of these 3 operations in a separate, dedicated function and test them individually. What concerns me, however, is that the overhead of Python function calls, in a function that is called on every forward pass of the training loop, may be unnecessary.
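
Concretely, the split I have in mind looks roughly like this (same Chainer functions as above; the class and helper names are just placeholders):

    import chainer.functions as F

    class AttentionBlock:
        # Illustrative container; assumes self.scale_score, self.np and the
        # same 4-D score layout as the original snippet.

        def scaled_scores(self, matrix_1, matrix_2, trans_1=False, trans_2=False):
            # step 1: (optionally transposed) matrix product, scaled
            return F.matmul(matrix_1, matrix_2, transa=trans_1, transb=trans_2) * self.scale_score

        def apply_mask(self, scores, mask):
            # step 2: push masked-out positions towards -inf before the softmax
            neg_inf = self.np.ones(scores.shape, 'f') * (-1e9)
            return F.where(mask, scores, neg_inf)

        def attention(self, matrix_1, matrix_2, mask=None, trans_1=False, trans_2=False):
            # step 3: softmax over the last (key) axis
            scores = self.scaled_scores(matrix_1, matrix_2, trans_1, trans_2)
            if mask is not None:
                scores = self.apply_mask(scores, mask)
            return F.softmax(scores, axis=3)

Relative to the original, the composed attention method adds only two extra Python-level calls per forward pass, while each helper can be exercised with small fixed matrices in a unit test.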

Thus, the question is: what would be the correct way to balance design and reliability against performance in this scenario? Maybe I am missing some obvious approach here.