Balancing function call overhead and testability in code that is part of a deep learning model training loop

I am currently implementing the transformer architecture for sequence-to-sequence problems. A key part of the model is the attention mechanism, which is basically a matrix multiplication followed by a masking operation and a softmax. My initial thought was to wrap these 3 steps in a function that looks like this:

    def attention(self, matrix_1, matrix_2, mask=None, trans_1=False, trans_2=False):
        # scaled dot-product scores: matmul with optional transposes, then scale
        att_stage_1 = F.matmul(matrix_1, matrix_2, transa=trans_1, transb=trans_2) * self.scale_score
        # push masked-out positions to a large negative value so softmax zeroes them
        att_stage_2 = F.where(mask, att_stage_1, self.np.ones(att_stage_1.shape, 'f') * (-1e9))
        return F.softmax(att_stage_2, axis=3)

I want to write unit tests for this function to check whether the output is what I expect it to be. The problem, however, is that this function, as it stands, performs 3 separate operations: matmul, masking, and softmax. I would prefer to verify that each of these operations produces correct output, but as written I can only check the final result. This leads me to a design where I would wrap each of these 3 operations in a separate, dedicated function and test them individually (a sketch of such a split is below). What concerns me, however, is that the overhead of the extra Python function calls in code that runs on every forward pass of the training loop may be unnecessary.
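For reference, here is a minimal sketch of the split I have in mind, reusing the Chainer calls from the version above; the helper names are mine, not an established API:

    def scaled_scores(self, matrix_1, matrix_2, trans_1=False, trans_2=False):
        # step 1: matmul with optional transposes, scaled by self.scale_score
        return F.matmul(matrix_1, matrix_2, transa=trans_1, transb=trans_2) * self.scale_score

    def apply_mask(self, scores, mask):
        # step 2: push masked-out positions to a large negative value
        return F.where(mask, scores, self.np.ones(scores.shape, 'f') * (-1e9))

    def attention(self, matrix_1, matrix_2, mask=None, trans_1=False, trans_2=False):
        # step 3: compose the pieces and normalize
        att_stage_1 = self.scaled_scores(matrix_1, matrix_2, trans_1, trans_2)
        att_stage_2 = self.apply_mask(att_stage_1, mask)
        return F.softmax(att_stage_2, axis=3)

My working assumption is that two extra Python-level calls per forward pass are negligible next to the matmul itself, but I have not profiled this.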

Thus, the question is: what would be the correct approach to balancing design and reliability against performance in this scenario? Maybe I am missing some obvious approach here.

Trying to merge chunks to trigger better balancing after 50% of the data was deleted by the developers

Trying to merge chunks using the following command:

    db.adminCommand({
        mergeChunks: "HTMLDumps.HTMLRepository",
        bounds: [
            { "ShardMapId" : 2, "DomainId" : 62 },
            { "ShardMapId" : 2, "DomainId" : 162 }
        ]
    })

I get the following error when trying to run the above command to merge any of the available consecutive chunks on a shard:

    {
        "ok" : 0,
        "errmsg" : "Failed to commit chunk merge :: caused by :: DuplicateKey: chunk operation commit failed: version 32|6||5ba8d864bba4ff264edf0bd9 doesn't exist in namespace: HTMLDumps.HTMLRepository. Unable to save chunk ops. Command: { applyOps: [ { op: \"u\", b: false, ns: \"config.chunks\", o: { _id: \"HTMLDumps.HTMLRepository-ShardMapId_2.0DomainId_62.0\", ns: \"HTMLDumps.HTMLRepository\", min: { ShardMapId: 2.0, DomainId: 62.0 }, max: { ShardMapId: 2, DomainId: 162 }, shard: \"shard0000\", lastmod: Timestamp(32, 6), lastmodEpoch: ObjectId('5ba8d864bba4ff264edf0bd9') }, o2: { _id: \"HTMLDumps.HTMLRepository-ShardMapId_2.0DomainId_62.0\" } }, { op: \"d\", ns: \"config.chunks\", o: { _id: \"HTMLDumps.HTMLRepository-ShardMapId_2DomainId_109\" } } ], preCondition: [ { ns: \"config.chunks\", q: { query: { ns: \"HTMLDumps.HTMLRepository\", min: { ShardMapId: 2.0, DomainId: 62.0 }, max: { ShardMapId: 2, DomainId: 109 } }, orderby: { lastmod: -1 } }, res: { lastmodEpoch: ObjectId('5ba8d864bba4ff264edf0bd9'), shard: \"shard0000\" } }, { ns: \"config.chunks\", q: { query: { ns: \"HTMLDumps.HTMLRepository\", min: { ShardMapId: 2, DomainId: 109 }, max: { ShardMapId: 2, DomainId: 162 } }, orderby: { lastmod: -1 } }, res: { lastmodEpoch: ObjectId('5ba8d864bba4ff264edf0bd9'), shard: \"shard0000\" } } ], writeConcern: { w: 0, wtimeout: 0 } }. Result: { applied: 1, code: 11000, codeName: \"DuplicateKey\", errmsg: \"E11000 duplicate key error collection: config.chunks index: ns_1_min_1 dup key: { : \"HTMLDumps.HTMLRepository\", : { ShardMapId: 2.0, DomainId: 62.0 } }\", results: [ false ], ok: 0.0, operationTime: Timestamp(1554112692, 1), $gleStats: { lastOpTime: { ts: Timestamp(1554112692, 1), t: 13 }, electionId: ObjectId('7fffffff000000000000000d') }, $clusterTime: { clusterTime: Timestamp(1554112692, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } :: caused by :: E11000 duplicate key error collection: config.chunks index: ns_1_min_1 dup key: { : \"HTMLDumps.HTMLRepository\", : { ShardMapId: 2.0, DomainId: 62.0 } }",
        "code" : 11000,
        "codeName" : "DuplicateKey",
        "operationTime" : Timestamp(1554112687, 1),
        "$clusterTime" : {
            "clusterTime" : Timestamp(1554112687, 1),
            "signature" : {
                "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                "keyId" : NumberLong(0)
            }
        }
    }

This happens regardless of which chunks I select. My main reason for doing this is to achieve true data balancing, not just balanced chunk counts. Recently the developers deleted 90% of the data from these chunks, which moved the distribution from the earlier 60/40 to 90/10. I hope that merging/removing the empty chunks will let the data balance back to something close to 60/40.
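To find which chunks are actually empty before merging, I was planning something like this (untested; assumes it runs through mongos and that the shard key is { ShardMapId: 1, DomainId: 1 }):

    // Sketch: measure every chunk of the namespace with dataSize
    // and print the ones that contain no documents.
    db.getSiblingDB("config").chunks.find(
        { ns: "HTMLDumps.HTMLRepository" }
    ).forEach(function (chunk) {
        var size = db.getSiblingDB("HTMLDumps").runCommand({
            dataSize: "HTMLDumps.HTMLRepository",
            keyPattern: { ShardMapId: 1, DomainId: 1 },
            min: chunk.min,
            max: chunk.max
        });
        if (size.numObjects === 0) {
            print("empty chunk: " + tojson(chunk.min) + " -> " + tojson(chunk.max));
        }
    });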

Applications for Service Discovery outside of Client-Side Load Balancing

I’ve been told that service discovery and client-side load balancing are two distinct concepts, however:

  1. I don’t see what you would use service discovery for outside of client-side load balancing; and
  2. I don’t see how you could implement auto-scale-enabled client-side load balancing without service discovery!

My understanding of service discovery is that you have some kind of client/agent running on each of your nodes that all use a consensus service (Consul, ZooKeeper, Eureka, etc.) to communicate the IPs of the healthy/active instances of all the backing services/resources that your nodes depend on. So if a 5-node Service A talks to a 10-node Service B, and one of those 10 Service B nodes goes “down”, the consensus service will alert all 5 Service A nodes not to talk to that particular Service B instance (IP). To me, this is client-side load balancing.
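To make my mental model concrete, here is the kind of lookup I imagine each Service A node doing, assuming a local Consul agent on its default HTTP port (the service name "service-b" is made up):

    # Sketch: ask the local Consul agent for the healthy instances of
    # "service-b" via its health API (stdlib only; port 8500 assumed).
    import json
    import urllib.request

    def healthy_instances(service="service-b"):
        url = "http://127.0.0.1:8500/v1/health/service/%s?passing" % service
        with urllib.request.urlopen(url) as resp:
            entries = json.load(resp)
        # Service.Address may be empty, in which case the node address applies.
        return [
            (e["Service"]["Address"] or e["Node"]["Address"], e["Service"]["Port"])
            for e in entries
        ]

    # A client-side load balancer would then pick one of these, e.g. round-robin.
    print(healthy_instances())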

My understanding of client-side load balancing is that each node of Service A makes the decision as to which Service B node it talks to. The advantage of this, as opposed to a classic centralized load balancer sitting in front of all Service B nodes, is that there is no longer a single point of failure (SPoF) should that centralized load balancer go down. But the only way (that I can see!) to implement this and enable auto-scaling of both services is to use service discovery.

So I ask: how are these concepts really different if you can’t have one without the other? Or is there a whole universe of functionality that you get with service discovery that has nothing to do with client-side load balancing?!

HAProxy Load Balancing for Multiple URI Paths on the Same Server

I have an application “process-engine” that can run more than one instance in the same container (in this case WildFly 10). The application’s clients use a standard URL like the following to start a process:

https://myserver.mydomain.com:8443/engine-rest/engine/default/process-definition/key/myProcess/start

However, the “second” process engine must be accessed like this:

https://myserver.mydomain.com:8443/engine-rest/engine/engine_2/process-definition/key/myProcess/start

Note the change of the word “default” to “engine_2”.

I want HAProxy to load balance across the two different URL paths, while the client only ever uses the “default” URL. In other words, I want half of the requests to go to the “default” path and the other half to go to the “engine_2” path.

There are no ACLs, proxies, etc. involved here. While I have no idea if such a configuration would work, conceptually I’m thinking about something like this in the “backend” section of the configuration:

    server myserver 192.168.1.50:8443
    server myserver 192.168.1.50:8443
    reqrep ^(https:\/\/[a-zA-Z0-9\-]*\.mydomain\.com:[0-9]{2,5}\/engine-rest\/engine\/)([a-zA-Z0-9_]*)\/(.*) engine_2/

In the conceptual example above, requests would be balanced across both server lines, but the URI path would be modified prior to sending them to the second server.
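To make the idea more concrete, here is an untested sketch of the direction I am imagining: two backends pointing at the same server, one of which rewrites the path, with a rough 50/50 split in the frontend. The names, the certificate path, and the use of rand, regsub, and http-request set-path (HAProxy 1.7+) are guesses on my part, not a known-good config:

    frontend fe_engine
        bind :8443 ssl crt /etc/haproxy/myserver.pem
        # roughly 50% of requests get the rewritten path
        use_backend be_engine2 if { rand(100) lt 50 }
        default_backend be_default

    backend be_default
        server wildfly1 192.168.1.50:8443 ssl verify none

    backend be_engine2
        # rewrite /engine-rest/engine/default/... to /engine-rest/engine/engine_2/...
        http-request set-path %[path,regsub(/engine/default/,/engine/engine_2/)]
        server wildfly2 192.168.1.50:8443 ssl verify none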

Thank you.

Does load balancing multiple WAN connections improve anonymity?

I would like to understand the advantages and disadvantages of load balancing outbound connections for anonymity.

Scenario 1: My router (ip A) > VPN router (ip B) > VPN router (ip C) > web host

Scenario 2: My router (ip A) > 3 load balanced VPN client connections (ips B, C, D) > 3 separate connections exiting VPN routers (ips E, F, G) > web host

To further my curiosity (sorry): what if scenario 2 were 3 connections to the same VPN server, but the outgoing VPN/source IPs to the web host were, obviously, different?

One issue I identified is that in scenario 2 you have a bigger connection fingerprint/pattern, which matters more when visiting obscure sites than very popular ones.

This is assuming the user is OK with the latency and with authentication or SSL issues, etc.

nginx load balancing localhost

I have two nginx servers serving images: server Foo and server Bar. I want to set up load balancing such that every other request server Foo receives is redirected to server Bar. I read up on load balancing in the nginx documentation, and it seems I should define an upstream section on server Foo like this:

    upstream imgserver {
        server localhost;
        server server-bar.com;
    }

    location / {
        proxy_pass http://imgserver;
    }

Now, I suspect that this configuration will result in only server Bar serving the images, since whenever server Foo receives a request it will try to proxy it again. Is that correct? If so, how do I set this up correctly?

Do I need to use another port for Foo redirection? Or maybe add a custom header on redirection?
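One idea I had for the “another port” option: give Foo an internal vhost that serves the files directly, so the upstream entry for Foo cannot loop back into the proxy block. A rough, untested sketch, where port 8080 and the image root are placeholders of mine:

    upstream imgserver {
        server 127.0.0.1:8080;   # Foo itself, via the internal vhost below
        server server-bar.com;
    }

    server {
        listen 80;               # public entry point on Foo
        location / {
            proxy_pass http://imgserver;
        }
    }

    server {
        listen 127.0.0.1:8080;   # internal vhost: serves files, never proxies
        root /srv/images;
    }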

Supporting role: Group efficiency vs Table “fun balancing”

So, I am in need of advice on this one. Here’s how things are:

I’m playing a cleric in a 4-player group. 2 of the other players play a paladin and a warlock/bladesinger melee multiclass. The paladin is lvl 5 and the rest of us are lvl 4. The table uses stat rolling, and while we all rolled better-than-average stats, the paladin rolled really, really well.

Now I find myself facing this dilemma: I could be casting warding bond, shield of faith, and whatever other spells I have on the paladin (I am playing Order domain, which grants allies reaction attacks when I target them with spells). That is clearly the most efficient way to go about it: it puts a 21 AC monster on the field, resistant to all damage, destroying every melee opponent.

Or I could be casting my concentration buffs on the more mobile character who rolled average stats, so he doesn’t feel left out.

We have played only 1 session so far, and the only encounter we had was pretty hard. I was forced to use all my slots, and at one point the multiclass character was knocked unconscious by 2 hits from hidden opponents. The 4th character, a rogue, failed a hold person save and ate 2 full fireballs; the paladin and I barely saved him.

What would you guys do in this situation? I understand this is a moral question, unlike the mechanical questions usually asked here, so I apologise if I have unknowingly violated some rule on the issue.

Thanks in advance!

Help balancing a dragon fight for an 8th-level party

When my 5 players go up to 8th level, they will be encountering their first dragon. I need help building an encounter that will feel tense without a TPK.

Using Kobold Fight Club to build a deadly encounter, I came up with the following:

  • 1x Young Silver Dragon (CR 9)

  • 3x Magma Mephit (CR 1/2)

But is a CR 9 too weak, and should I be looking at a higher CR? I know the DMG warns against using CRs higher than your average party level.

I know a lot can change based on terrain, but I just wanted to get a ball-park idea of what you all think.

Load balancing Availability groups with MSSQL Standard

So I have the following scenario, and it looks to be working fantastically, but I just want to get some input on the configuration. Is it smart? Are there any issues I am not thinking of?

We have MSSQL Standard, and as such you can only have 1 DB per AG (we have 20 databases) and no read-only secondaries. Basically, the primary server does ALL the lifting, with the secondary doing a lot less. So essentially you are paying for resources on node 2 that sit at 10% workload while node 1 is at 70-80% workload. Both nodes are fully licensed with regards to MSSQL cores.

What I have done to address this is split the database primaries up, so about 50% of the databases are primary on node1, while the other 50% are primary on node2.
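For reference, redistributing a primary is just a manual failover per AG, run from the node that should become primary, something like this (the AG name is made up):

    -- Sketch: run on node2 to make it the primary for this AG.
    -- Assumes a synchronous-commit secondary in the SYNCHRONIZED state.
    ALTER AVAILABILITY GROUP [AG_Database07] FAILOVER;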

The results:

The applications all connect fine to either node via their respective listeners. If a failover occurs, only the databases on the failing node are affected, and they fail over to the other node (we have tested this fairly well).

Each node can now take a share of the load, essentially load balancing. It is a manual process to set it up this way, and when deploying new DBs and groups they go to the lighter node, but that is a small price to pay for “more” hardware punch without much cost (licenses we already have, plus a bit of admin).

What are you guys’ thoughts on this?