Creating Loss Ports For Multiple Output Neural Net

I am making a multi-classification neural net for a set of data. I have created the net, but I think I need to specify a loss port for each classification.

Here are the labels for the classification and the encoder & decoders.

labels = {"Dark Colour", "Light Colour", "Mixture"};
sublabels = {"Blue", "Yellow", "Mauve"};
labeldec = NetDecoder[{"Class", labels}];
sublabdec = NetDecoder[{"Class", sublabels}];
bothdec = NetDecoder[{"Class", Flatten@{labels, sublabels}}];

enc = NetEncoder[{"Class", {"Dark Colour", "Light Colour", "Mixture",
     "Blue", "Yellow", "Mauve"}}]

Here is the net:

SNNnet[inputno_, outputno_, dropoutrate_, nlayers_, class_: True] :=
 Module[{nhidden, linin, linout, bias},
  nhidden = Flatten[{Table[{(nlayers*100) - i},
      {i, 0, (nlayers*100), 100}]}];
  linin = Flatten[{inputno, nhidden[[;; -2]]}];
  linout = Flatten[{nhidden[[1 ;; -2]], outputno}];
  NetChain[
   Join[
    Table[
     NetChain[
      {BatchNormalizationLayer[],
       LinearLayer[linout[[i]], "Input" -> linin[[i]]],
       ElementwiseLayer["SELU"],
       DropoutLayer[dropoutrate]}],
     {i, Length[nhidden] - 1}],
    {LinearLayer[outputno],
     If[class, SoftmaxLayer[], Nothing]}]]]

net = NetInitialize@SNNnet[4, 6, 0.01, 8, True];

Here are the nodes that are used for the NetGraph function:

nodes = Association["net" -> net, "l1" -> LinearLayer[3],
   "sm1" -> SoftmaxLayer[], "l2" -> LinearLayer[3],
   "sm2" -> SoftmaxLayer[],
   "myloss1" -> CrossEntropyLossLayer["Index", "Target" -> enc],
   "myloss2" -> CrossEntropyLossLayer["Index", "Target" -> enc]];

Here is what I want the NetGraph to do:

connectivity = {NetPort["Data"] ->
    "net" -> "l1" -> "sm1" -> NetPort["Label"],
   "sm1" -> NetPort["myloss1", "Input"],
   NetPort[sublabels] -> NetPort["myloss1", "Target"],
   "myloss1" -> NetPort["Loss1"],
   "net" -> "l2" -> "sm2" -> NetPort["Sublabel"],
   "myloss2" -> NetPort["Loss2"],
   "sm2" -> NetPort["myloss2", "Input"],
   NetPort[labels] -> NetPort["myloss2", "Target"]};

The data will diverge at "net" for each classification, pass through the subsequent linear and softmax layers, and arrive at the relevant NetPort. The problem I'm having is with the loss ports, which branch off at each softmax layer.

When I run this code:

NetGraph[nodes, connectivity, "Label" -> labeldec,
 "Sublabel" -> sublabdec]

I receive the error message: NetGraph::invedgesrc: NetPort[{Blue,Yellow,Mauve}] is not a valid source for NetPort[{myloss1,Target}].

Could anyone tell me why this is occurring?

Thanks for reading.

Detecting conservation, loss, or gain in a crafting game with items and recipes

Suppose we’re designing a game like Minecraft where we have lots of items $ i_1,i_2,\dots,i_n\in I$ and a bunch of recipes $ r_1,r_2,\dots,r_m\in R$ . Recipes are functions $ r:(I\times\mathbb{N})^n\rightarrow I\times\mathbb{N}$ ; that is, they take some items with non-negative integer weights and produce an integer quantity of another item.

For example, the recipe for cake in Minecraft is:

3 milk + 3 wheat + 2 sugar + 1 egg $ \rightarrow$ 1 cake

… and the recipe for torches is:

1 stick + 1 coal $ \rightarrow$ 4 torches

Some recipes could even be reversible, for example: 9 diamonds $ \leftrightarrow$ 1 diamond block

If there’s some combination of recipes we can repeatedly apply to get more of the items that we started with then the game is poorly balanced and this can be exploited by players. It’s more desirable that we design the game with recipes that conserve items or possibly lose some items (thermodynamic entropy in the real world – you can’t easily un-burn the toast).
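The recipes above can be encoded as net-change vectors (outputs minus inputs per item). As a minimal sketch — the item list, the helper names, and the "unit valuation" (treating every item as equally valuable) are my assumptions, not part of the question — a single recipe can then be classified by the sign of its net total:

```python
# Sketch: encode each recipe as a net-change vector over a fixed item
# list, then classify a single recipe under a unit valuation (every
# item worth 1). This only detects per-recipe imbalance relative to
# that one valuation; combinations of recipes need a stronger check.
ITEMS = ["milk", "wheat", "sugar", "egg", "cake", "stick", "coal", "torch"]

def net_vector(consumed, produced):
    """Net item change of one recipe application: produced minus consumed."""
    v = [0] * len(ITEMS)
    for item, qty in consumed.items():
        v[ITEMS.index(item)] -= qty
    for item, qty in produced.items():
        v[ITEMS.index(item)] += qty
    return v

cake = net_vector({"milk": 3, "wheat": 3, "sugar": 2, "egg": 1}, {"cake": 1})
torches = net_vector({"stick": 1, "coal": 1}, {"torch": 4})

def classify(recipe):
    total = sum(recipe)  # net items created by one application
    if total > 0:
        return "gains items"
    if total < 0:
        return "loses items"
    return "conserves items"

print(classify(cake))     # loses items (9 inputs become 1 cake)
print(classify(torches))  # gains items (2 inputs become 4 torches)
```

Of course, "every item counts as 1" is a crude valuation — a cake is presumably worth more than an egg — so a real balance check would have to search over item valuations, not just sum the rows.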

Is there an efficient algorithm that can decide if a set of recipes will:

  • conserve items?
  • lose items to inefficiency?
  • gain items?

Is there an efficient algorithm that can find the problematic recipes if a game is imbalanced?

My first thoughts are that there is a graph structure / maximum flow problem here but it’s very complex, and that it resembles a knapsack problem. Or maybe it could be formulated as a SAT problem – this is how I’m considering coding it at the moment, but something more efficient might exist.

We could encode recipes in a matrix $ \mathbf{R}\in\mathbb{Z}^{m \times n}$ where rows correspond to recipes and columns correspond to items. Column entries are negative if an item is consumed by a recipe, positive if it’s produced by the recipe, and zero if it’s unused. Similar to a well-known matrix method for graph cycle detection, we could raise $ \mathbf{R}$ to some high power and sum each row to see if item totals keep going up, stay balanced, or go negative. However, I’m not confident this always works.
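Rather than powering up the matrix, one direct (if exponential) check is to search over non-negative combinations of the rows of $\mathbf{R}$ for an "exploit": use counts whose combined net effect loses nothing and gains something. This is only an illustrative brute-force sketch — the bound, the function name, and the toy recipes (a diamond-block recipe with an imbalanced reverse) are my own assumptions; a linear program over $\mathbf{R}$ would scale far better:

```python
# Brute-force search for a nonnegative combination of recipe
# applications whose net effect is >= 0 on every item and > 0 on at
# least one item (i.e. a duplication exploit). Exponential in the
# number of recipes, so only usable for tiny recipe sets.
from itertools import product

def find_exploit(recipes, max_uses=5):
    """recipes: list of net-change vectors (rows of R).
    Returns the use counts of a gaining combination, or None."""
    n_items = len(recipes[0])
    for counts in product(range(max_uses + 1), repeat=len(recipes)):
        if not any(counts):
            continue  # skip the empty combination
        net = [sum(c * r[i] for c, r in zip(counts, recipes))
               for i in range(n_items)]
        if all(x >= 0 for x in net) and any(x > 0 for x in net):
            return counts
    return None

# Items: (diamond, diamond block). The reverse recipe is deliberately
# imbalanced: 1 block yields 10 diamonds instead of 9.
bad_pair = [[-9, 1], [10, -1]]
print(find_exploit(bad_pair))  # (1, 1): one craft + one uncraft nets a diamond

# With the reverse balanced at 9 diamonds, no combination gains.
good_pair = [[-9, 1], [9, -1]]
print(find_exploit(good_pair))  # None
```

The same feasibility question ("does there exist a non-negative combination with net gain?") is exactly what linear programming decides in polynomial time, which is why an LP formulation over $\mathbf{R}$ is probably the efficient algorithm being asked for.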

Any discussion, code, or recommended reading is very appreciated.

Converting an .rtf file to a .nb file without data loss

I unfortunately had a notebook develop a syntax error, and I would have lost weeks of work. To work around this, I was able to save my old notebook as a .rtf file without excessive data loss. When I convert this .rtf to .nb by saving the file as .nb, Mathematica is able to open the notebook almost as I left it. The problem is that this results in the loss of all the special characters, of which I have many. Is there any way to preserve the special characters while creating the new .nb?

What happens when Aid ends on a target who has suffered maximum hp loss?

The spell aid temporarily raises a target’s hit points and maximum hit points by 5:

Each target’s hit point maximum and current hit points increase by 5 for the duration.

A vampire’s bite, for instance, causes necrotic damage with a rider that reduces the victim’s maximum hit points:

The target’s hit point maximum is reduced by an amount equal to the necrotic damage taken

If a character under the effect of aid is bitten by a vampire (or subject to any other effect that reduces their maximum hit points), what happens to their maximum hit point total when aid subsequently expires or is dispelled? I can see two interpretations:

  1. The character’s maximum hit points drop by 5 again, so ultimately their new maximum hit point total is whatever it would normally be less the maximum hit point reduction they suffered

  2. The maximum hit point loss suffered reduces the increase granted by aid first, and maximum hit points already lost this way aren’t lost again when aid ends, so the character is up to 5 points better off than they would have been without aid

Essentially, aid might or might not act as a buffer against maximum HP loss after it expires, but I’m not sure which interpretation is best supported by the rules.

Is there a term for the psychological issue of “code loss” for programmers?

(Note: I wanted to post this to the “Psychology” category, but it had no matching tags at all.)

I am a programmer. I have just deleted a huge amount of code which I painstakingly researched, thought about, coded, then improved and fixed as bugs popped up for a long time.

All of that code, which took me a ridiculous amount of time, effort and general “mind work”, has now been replaced by a very small number of lines which basically leverage PHP’s built-in “ICU” features to properly output numbers, money sums and date/time in the correct manner for every combination of language, locale, currency and timezone imaginable.

Previously, I did not know that this already existed, so I basically replicated a lot of it myself, and I now realize how far from perfect it was. But still, I did it, and that code had in my mind “hardened” or “settled” as “gold code” which I never thought I would touch again…

Basically, I mourn my now useless, superseded, obsolete code chunks. I’m annoyed at myself for doing all that unnecessary work, and it took a lot of mental wrestling to finally convince myself to go through with it.

Is this common among programmers, and does it have an established term? Such as “code loss” or “code mourning”?

Basically, even though I have really improved my application/library/framework to an extreme degree, it still feels like I’ve “lost all that work” because the number of lines was slashed so much in one go. It’s not a nice feeling.

Dominate the Weight Loss Niche With 100% Hand-Crafted Content

Do You Want To Dominate the Weight Loss Niche With 100% Hand-Crafted Content? The weightlossstayfit.com website is complete information on Weight Loss.


What preventive measures can be taken to get through this scenario with minimal data loss?

I am a SQL Server DBA and came across a weird scenario. Our cluster had 10 nodes, 5 primary and 5 secondary, and each node hosted a SQL role. For example, Ams1pd11 to Ams1pd15 were primary, and on the Ams3 side, pd11 to pd15 were secondary. In this scenario the whole cluster behaved abnormally: 2 nodes went down and the availability groups on all nodes became inaccessible, leading to a multi-customer outage.

Here is an explanation of the real-time scenario as it unfolded while I was on shift.

Ams1pd12 went down while it was hosting primary role A, so role A automatically failed over to the best available node, and the cluster chose Ams1pd11.

But Ams1pd11 already had one role on it, e.g. B, so after Ams1pd12 went down, Ams1pd11 was hosting both A and B.

Having both primary roles on one node was a risk, so to balance the load I failed role B over to one of the secondary nodes on the Ams3 side. The cluster was then balanced, and I was just about to investigate why Ams1pd12 had gone down.

But suddenly the Ams1pd11 node also went down, and this time the role did not fail over; it was stuck there.

So now 2 nodes out of 10 were down and one role was stuck, so the customers on that role were impacted.

We were troubleshooting with Microsoft and noticed that although the other nodes showed as up, the availability groups on all the nodes were stuck: they would not open and were hung in an expanding state.

In this way all the nodes on that cluster were impacted, and because of this our backups stopped. There was data loss.

The one stuck role, and the customers on that node, faced only 15 minutes of data loss, because the service went down for us and for them at the same time.

The weird thing about the nodes that were showing as up but whose AG groups were inaccessible was that already logged-in users were still able to change and modify data. Only new connections were refused; old connections were still active.

So although the issue started and backups stopped at 7 am, most of the customers were able to access the database until 6 pm, which meant 11 hours of data loss.

It took a lot of effort to restore the databases manually. The online nodes were easy to recover, as we just had to attach the databases after migrating the data and log files, but for the stuck role’s databases we had to recover them manually.

Please suggest the best strategies to follow in this type of disaster so that we can achieve a speedy recovery.

Of diets and workouts, weight loss and HEALTH

Hi, guys!

I'm pretty new here on the forum and I wanted to get to know everyone better. I'm a personal trainer and group fitness instructor, but I've had a crazy couple of years and ended up "off the wagon". So now I'm trying to lose 70 pounds.

At first I was discouraged, but it's actually been amazing getting to really feel what my overweight clients felt. I think it's going to make me an even better health and fitness professional! So I'm embracing the process.

Anybody else out…
