Is it problematic to have a dependency between objects of the same layer in a layered software architecture?

Considering a medium-to-large piece of software with an n-layer architecture and dependency injection, I am comfortable saying that an object belonging to a layer can depend on objects from lower layers but never on objects from higher layers.

But I’m not sure what to think about objects that depend on other objects of the same layer.

As an example, let's assume an application with three layers and several objects, as in the image below. Obviously top-down dependencies (green arrows) are OK and bottom-up ones (red arrow) are not, but what about a dependency inside the same layer (yellow arrow)?

[Image: diagram of three layers, with green top-down arrows, a red bottom-up arrow, and a yellow arrow between two objects in the same layer]

Excluding circular dependencies, I'm curious what other issues could arise and to what extent the layered architecture is violated in this case.
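For concreteness, here is a minimal sketch of the yellow-arrow situation, using made-up class names and Python-style constructor injection (none of this comes from the question itself): two business-layer services, one injected into the other, while both depend only downward on a data-layer repository.

# Hypothetical example of a same-layer dependency (the "yellow arrow").
# All class names are illustrative, not taken from any real code base.

class OrderRepository:               # data layer
    def find(self, order_id):
        ...

class PricingService:                # business layer
    def price_for(self, order):
        ...

class OrderService:                  # business layer
    def __init__(self, repository: OrderRepository, pricing: PricingService):
        self._repository = repository   # top-down dependency (green arrow)
        self._pricing = pricing         # same-layer dependency (yellow arrow)

    def total(self, order_id):
        order = self._repository.find(order_id)
        return self._pricing.price_for(order)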

Optimized pulling items out of arrays in nested objects

I found a couple of similar questions, but they were mostly outdated. Here's the problem: create a flat array of the unique values held in several arrays contained in several nested objects.

const data = [
  { id: 1, items: ['one', 'two', 'three'] },
  { id: 2, items: ['two', 'one', 'four'] },
]

Expected result:

const result = ['one', 'two', 'three', 'four']

My current solution:

function () {
  const result = []
  data.forEach(obj => obj.items.forEach(item => result.push(item)))
  return _.uniq(result)
}

Lodash is allowed.

Any suggestions are more than welcome.

Keras throwing an error when trying to implement a custom loss function saying Tensor objects are not iterable

I am trying to build a custom loss function for my CycleGAN implementation, where I am trying to translate images from one domain to another. I am using Keras with the TensorFlow backend for this. I have defined my custom loss function as follows:

def newloss(yTrue, yPred):
    normim_original = findSeg(yTrue)
    normim_reconstructed = findSeg(yPred)

    orientations_original = findOri(yTrue)
    orientations_reconstructed = findOri(yPred)

    frequency_original = findFreq(yTrue)
    frequency_reconstructed = findFreq(yPred)

    ridge_original = findRid(yTrue)
    ridge_reconstructed = findRid(yPred)

    return k.sum(1 * mse(ridge_original, ridge_reconstructed),
                 0.1 * mse(frequency_original, frequency_reconstructed),
                 0.1 * mse(orientations_original, orientations_reconstructed),
                 0.1 * mse(normim_original, normim_reconstructed))

The functions findSeg, findOri, findFreq and findRid have been defined elsewhere and each takes a single image as input. I am fairly sure my yTrain and yTrue (not sure about yPred) are mostly of class Generator, so within the implementation I have code to convert the Generator to a list. But when I try to use this loss function, Keras throws an error saying “Tensor objects are only iterable when eager execution is enabled. Please use tf.map_fn”. I tried using both eager execution and tf.map_fn, but both threw errors again. I am not sure what is to be done here. I have set up my model compilation in this manner:

self.combined = Model(inputs=[img_A, img_B],
                      outputs=[valid_A, valid_B,
                               reconstr_A, reconstr_B,
                               img_A_id, img_B_id])
self.combined.compile(loss=['mse', 'mse',
                            newloss, newloss,
                            'mae', 'mae'],
                      loss_weights=[1, 1,
                                    self.lambda_cycle, self.lambda_cycle,
                                    self.lambda_id, self.lambda_id],
                      optimizer=optimizer)

I am not very experienced with deep learning, so I might have made some novice mistakes.
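For reference, this is a minimal sketch of how a per-image function is usually lifted over a batch tensor with tf.map_fn inside a Keras loss. It assumes each of the findX functions takes and returns a float32 tensor for a single image; the wiring is only illustrative, not a tested fix for the code above.

import tensorflow as tf
from keras import backend as K

def mapped_mse(fn, y_true, y_pred):
    # Apply the per-image function to every image in the batch,
    # then compare the two batched results with mean squared error.
    t = tf.map_fn(fn, y_true, dtype=tf.float32)
    p = tf.map_fn(fn, y_pred, dtype=tf.float32)
    return K.mean(K.square(t - p))

def newloss_sketch(y_true, y_pred):
    # Weighted sum of the four per-feature terms, mirroring the
    # weights used above (1, 0.1, 0.1, 0.1).
    return (1.0 * mapped_mse(findRid, y_true, y_pred)
            + 0.1 * mapped_mse(findFreq, y_true, y_pred)
            + 0.1 * mapped_mse(findOri, y_true, y_pred)
            + 0.1 * mapped_mse(findSeg, y_true, y_pred))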

Accessible functors not preserving lots of presentable objects

Let $F\colon \mathcal{C}\to \mathcal{D}$ be an accessible functor between locally presentable categories. By Theorem 2.19 in Adámek and Rosický, Locally Presentable and Accessible Categories, there exist arbitrarily large regular cardinals $\lambda$ such that $F$ preserves $\lambda$-presentable objects. It is tempting to expect that $F$ should preserve $\lambda$-presentable objects for all sufficiently large $\lambda$, but that is not what the theorem says. However, I do not know a counterexample showing that the stronger claim fails. (For instance, this question asks about this property when $F$ is the pullback functor, and has no answer yet in the general case.)

What is an example of an accessible functor $F$ between locally presentable categories for which there exist arbitrarily large regular cardinals $\mu$ such that $F$ does not preserve $\mu$-presentable objects?

$\mu$-presentable object as $\mu$-small colimit of $\lambda$-presentable objects

Remark 1.30 of Adámek and Rosický, Locally Presentable and Accessible Categories claims that in any locally $\lambda$-presentable category, each $\mu$-presentable object can be written as a $\mu$-small colimit of $\lambda$-presentable objects. I've also seen this stated in the literature without any reference given, suggesting it is considered “well-known to experts”.

However, as Mike Shulman pointed out in a comment on the answer https://mathoverflow.net/a/306129, it is unclear how the argument on pages 35 to 37 of Makkai and Paré cited in Remark 1.30 proves the claim. Not only is it unclear how to apply Lemma 2.5.2 of MP, but the category $\mathbf{K}$ constructed in its proof, which is the indexing category for the colimit produced by the lemma, has size which is not obviously bounded in terms of the sizes of the input diagrams, because it involves arbitrary morphisms between the given objects, not just ones that appear in the given diagrams.

Does anyone know how the claim of Remark 1.30 is to be proved? Alternatively, is there another, perhaps entirely different, proof in the literature?

How to write/read dataframe objects to memory or disk efficiently?

I'm running a for loop over all the rows of a pandas dataframe. For each row it calculates the Euclidean distance from that point to all the other points in the dataframe, then passes on to the next point and does the same thing, and so on.

The thing is that I need to store the value counts of the distances to plot a histogram later, and I'm storing them in another pandas dataframe. The problem is that as this second dataframe gets bigger, I will run out of memory at some point. Also, as the dataframe grows, repeating the same loop gets slower, since the dataframe becomes heavier and harder to handle in memory.

This is the original piece of code I was using:

counts = pd.DataFrame()

for index, row in df.iterrows():
    dist = pd.Series(np.sqrt((row.xx - df.xx)**2 + (row.yy - df.yy)**2 + (row.tt - df.tt)**2))
    counter = pd.Series(dist.value_counts(sort=True)).reset_index().rename(columns={'index': 'values', 0: 'counts'})
    counts = counts.append(counter)

The original df has a shape of (695556, 3), so the expected result is a dataframe with up to 695556**2 rows and 2 columns, containing all the pairwise distance values computed from the 3 coordinate columns, and their counts. The problem is that this is impossible to fit into my 16 GB of RAM.

So I was trying this instead:

for index, row in df.iterrows():
    counts = pd.DataFrame()
    dist = pd.Series(np.sqrt((row.xx - df.xx)**2 + (row.yy - df.yy)**2 + (row.tt - df.tt)**2))
    counter = pd.Series(dist.value_counts(sort=True)).reset_index().rename(columns={'index': 'values', 0: 'counts'})
    counts = counts.append(counter)
    counts.to_csv('counts/count_' + str(index) + '.csv')
    del counts

In this version, instead of keeping the counts dataframe in memory, I write a CSV on each iteration. The idea is to put it all together later, once the loop finishes. This code runs faster than the first one, since the time per iteration no longer grows as the dataframe grows in size. It is still slow, though, because it has to write a CSV each time, not to mention that it will be even slower when I have to read all of those CSVs back into a single dataframe.

Can anyone show me how I could optimize this code to achieve the same results in a faster and more memory-efficient way? I'm also open to other implementations, like Spark, Dask, or whatever else achieves the same result: a dataframe containing the value counts for all the distances, but more manageable in terms of time and memory.
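For what it's worth, here is a minimal sketch (not a tested solution) of the kind of chunked NumPy approach one might compare against: distances are computed one block of rows at a time and merged into a single running tally, so there is no growing dataframe and no per-iteration CSV. It assumes the distance values repeat often enough for the tally to stay small; if they are mostly unique floats, aggregating into fixed histogram bins (e.g. with np.histogram) would be the safer choice.

import numpy as np
import pandas as pd
from collections import Counter

def distance_value_counts(df, chunk_size=500):
    # Coordinates as a single (n, 3) float array.
    pts = df[['xx', 'yy', 'tt']].values.astype(np.float64)
    totals = Counter()
    for start in range(0, len(pts), chunk_size):
        chunk = pts[start:start + chunk_size]
        # Squared distances from each chunk point to all points, built
        # one coordinate at a time so peak memory stays at a few
        # (chunk_size, n) arrays.
        d2 = (chunk[:, 0, None] - pts[None, :, 0]) ** 2
        d2 += (chunk[:, 1, None] - pts[None, :, 1]) ** 2
        d2 += (chunk[:, 2, None] - pts[None, :, 2]) ** 2
        values, cnts = np.unique(np.sqrt(d2), return_counts=True)
        totals.update(dict(zip(values.tolist(), cnts.tolist())))
    return pd.DataFrame({'values': list(totals.keys()),
                         'counts': list(totals.values())})

# counts_df = distance_value_counts(df)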

Thank you very much in advance

How to differentiate between colliding objects in Box2D?

I need to differentiate between different points of contact in my racing game.

I currently have staticSensors which determine whether the car is on the track or not, and I have created a second sensor to act as the ‘finish line’, but I am not sure how to differentiate contact between the ‘checkeredFlag’ and all of the other sensors.

Any help would be appreciated! Here is my code:

#include "contactListener.h"

void ContactListener::BeginContact(b2Contact* contact)
{
    b2Body* bodyA = contact->GetFixtureA()->GetBody();
    b2Body* bodyB = contact->GetFixtureB()->GetBody();

    bool isSensorA = contact->GetFixtureA()->IsSensor();
    bool isSensorB = contact->GetFixtureB()->IsSensor();

    Wheel* wheel = (Wheel*)bodyA->GetUserData();
    CheckeredFlag* flag = (CheckeredFlag*)bodyB->GetUserData();

    if (isSensorA)
    {
        StaticSensor* sensor = static_cast<StaticSensor*>(bodyA->GetUserData());
        sensor->action();
        Wheel* wheel = static_cast<Wheel*>(bodyB->GetUserData());
        wheel->offRoad();
        return;
    }

    if (isSensorB)
    {
        StaticSensor* sensor = static_cast<StaticSensor*>(bodyB->GetUserData());
        sensor->action();
        Wheel* wheel = static_cast<Wheel*>(bodyA->GetUserData());
        wheel->offRoad();
        return;
    }

    if (typeid(CheckeredFlag).name() == typeid(bodyA).name())
    {
        if (typeid(Wheel).name() == typeid(bodyB).name())
        {
            flag->action();
            return;
        }
    }
}

Ultra white background on white transparent objects

I am in the process of creating an online shop that will sell plexiglass (acrylic) products. The material we use is crystal clear, just like glass.

Now, what I need is a completely white background seen through the glass. If the material were not transparent, I could have used the lasso tool in PS to cut out the object and place it on a white background, but my "slide" paper is visible through the glass.

How do professional photographers solve this problem?

Missing bottom level item when a node is linked to two menu objects

I have been trying to create a custom menu block which shows a number of menu items with images on the current level.

I am having an issue where the active trail array doesn't return a menu item for the active node on level 4 of my menu. Here is an example of how I get the active trail:

$menu_tree = \Drupal::menuTree();
$menu_parent_tree = \Drupal::menuTree();
$menu_name = 'main';

// Build the typical default set of menu tree parameters.
$parameters = $menu_tree->getCurrentRouteMenuTreeParameters($menu_name);

foreach ($parameters->activeTrail as $active_menu_link) {
  if ($active_menu_link !== '') {
    $active_menu_id = explode(':', $active_menu_link);
    $entity = \Drupal::entityManager()->loadEntityByUuid($active_menu_id[0], $active_menu_id[1]);

    kint($entity->getTitle());
  }
}

It seems that the issue is mostly related to the fact that the same node is linked to both a child menu item and a parent menu item. If I remove this link then the function works as expected. This is my menu structure.

[Image: menu structure for the site]

The two menu items highlighted in red are both linked to the same node, since I want it to act as an overview page for the category. It could be that there's something else I can do to structure the menu differently; I am open to any alternatives for how I can get around this.