Load balancing with iptables not working

I have an Ubuntu server (19.04 / 5.0.0-15-generic) with three interfaces.

eth0: LAN  (192.168.10.253/24)
eth1: WAN1 (172.29.13.201/24, gateway .253)
eth2: WAN2 (172.29.14.201/24, gateway .253)

When I run the following script on Debian (9.9 / 5.0.0-15-generic), the load balancer works: the two 20/20 links combine into a 20/20 + 20/20 = 40/40 connection.

However, when I run this same script on Ubuntu, the load balancing itself works and traffic is NATed out over eth1 and eth2, but the return traffic never reaches the client: I see the returning traffic on eth1 and eth2, but not on eth0.

I use the following script to set up the iptables rules and the IP routes.

    #!/bin/bash

    echo 1 >| /proc/sys/net/ipv4/ip_forward
    echo 0 >| /proc/sys/net/ipv4/conf/all/rp_filter

    # flush all iptables entries
    iptables -t filter -F
    iptables -t filter -X
    iptables -t nat -F
    iptables -t nat -X
    iptables -t mangle -F
    iptables -t mangle -X
    iptables -t filter -P INPUT ACCEPT
    iptables -t filter -P OUTPUT ACCEPT
    iptables -t filter -P FORWARD ACCEPT

    # initialise chains that will do the work and log the packets
    iptables -t mangle -N CONNMARK1
    iptables -t mangle -A CONNMARK1 -j MARK --set-mark 1
    iptables -t mangle -A CONNMARK1 -j CONNMARK --save-mark
    iptables -t mangle -A CONNMARK1 -j LOG --log-prefix 'iptables-mark1: ' --log-level info

    iptables -t mangle -N CONNMARK2
    iptables -t mangle -A CONNMARK2 -j MARK --set-mark 2
    iptables -t mangle -A CONNMARK2 -j CONNMARK --save-mark
    iptables -t mangle -A CONNMARK2 -j LOG --log-prefix 'iptables-mark2: ' --log-level info

    iptables -t mangle -N RESTOREMARK
    iptables -t mangle -A RESTOREMARK -j CONNMARK --restore-mark
    iptables -t mangle -A RESTOREMARK -j LOG --log-prefix 'restore-mark: ' --log-level info

    iptables -t nat -N SNAT1
    iptables -t nat -A SNAT1 -j LOG --log-prefix 'snat-to-172.29.13.201: ' --log-level info
    iptables -t nat -A SNAT1 -j SNAT --to-source 172.29.13.201

    iptables -t nat -N SNAT2
    iptables -t nat -A SNAT2 -j LOG --log-prefix 'snat-to-172.29.14.201: ' --log-level info
    iptables -t nat -A SNAT2 -j SNAT --to-source 172.29.14.201

    # iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
    # iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE

    # restore the fwmark on packets that belong to an existing connection
    iptables -t mangle -A PREROUTING -i eth0 \
        -m state --state ESTABLISHED,RELATED -j RESTOREMARK

    # if the mark is zero it means the packet does not belong to an existing connection
    iptables -t mangle -A PREROUTING -m state --state NEW \
        -m statistic --mode nth --every 2 --packet 0 -j CONNMARK1
    iptables -t mangle -A PREROUTING -m state --state NEW \
        -m statistic --mode nth --every 2 --packet 1 -j CONNMARK2

    iptables -t nat -A POSTROUTING -o eth1 -j SNAT1
    iptables -t nat -A POSTROUTING -o eth2 -j SNAT2

    if ! grep -q '^51' /etc/iproute2/rt_tables
    then
        echo '51     rt_link1' >> /etc/iproute2/rt_tables
    fi

    if ! grep -q '^52' /etc/iproute2/rt_tables
    then
        echo '52     rt_link2' >> /etc/iproute2/rt_tables
    fi

    ip route flush table rt_link1 2>/dev/null
    ip route add 172.29.13.0/24 dev eth1 src 172.29.13.201 table rt_link1
    ip route add default via 172.29.13.253 table rt_link1

    ip route flush table rt_link2 2>/dev/null
    ip route add 172.29.14.0/24 dev eth2 src 172.29.14.201 table rt_link2
    ip route add default via 172.29.14.253 table rt_link2

    ip rule del from all fwmark 0x1 lookup rt_link1 2>/dev/null
    ip rule del from all fwmark 0x2 lookup rt_link2 2>/dev/null
    ip rule del from all fwmark 0x2 2>/dev/null
    ip rule del from all fwmark 0x1 2>/dev/null

    ip rule add fwmark 1 table rt_link1
    ip rule add fwmark 2 table rt_link2

    ip route flush cache

Traffic load balancing for OpenVPN

I want to make an OpenVPN setup that connects one client to multiple servers.

I found a way:

    remote-random
    remote server1
    remote server2

However, this OpenVPN mechanism only randomly selects one server from the list each time the tunnel is established. Once a server is selected, the tunnel does not change.

What I want is load balancing of the traffic itself: for example, the first HTTP request is sent through server1, and the second HTTP request may be sent through server2. Is there any way to do this with OpenVPN?

Thanks.

Balancing homebrew magic items for my players

Warning for my players: Ayla, Lothar, Okazaki and Agmar: GTFO spoilers ahead!!

So I created some magic items and I want to know if I made them roughly equal. I'm aware these are probably broken; what I want to know is whether they are about the same in terms of power, practicality, theme and fun overall. All of them require attunement.

For the barbarian (Totem Warrior, a basic brute who loves smashing) I made the Maul of the Conqueror: it absorbs enemy strength. If he lands the killing blow, he can use the slain enemy's Strength score instead of his own until he finishes a long rest. If he kills a giant-kin, the Strength becomes permanent. That's the simplest one.

Next is the monk (Open Hand, loves to pick losing fights), who gets the Grit of the Underdog (fist weapon). This one evolves in stages:

+1 to attack rolls and damage; you score a critical hit on rolls of 19-20. Additionally, when you score a crit, or your opponent scores a crit on you, you regain 1 ki point. If the number of ki points exceeds your monk level, you lose all excess points after a short or long rest.

+2: you can spend 1 ki point to heal yourself; spend 1 Hit Die and add your Constitution modifier. You can use this even when unconscious.

+3: spend 5 ki points; for the next minute you can add your opponent's Strength bonus to your damage rolls.

- Since he tends to save his ki points, he rarely ends up spending them.

For the Swashbuckler rogue (likes fancy stuff, plans to multiclass into fighter to get a fighting style and Riposte as a battle maneuver): basically Cabal's Ruin (I couldn't think of a cool armor that boosts riposte, so I copied Matt T.T), but with +1/+2/+3 to AC to improve his chances to riposte:

+1 Cabal’s Ruin has 4 charges, and regains 1d4 expended charges daily at dawn. When the wearer is targeted by an enemy’s spell, they can use their reaction to have the cloak swallow part of the spell. The cloak gains a number of charges equal to the spell level of the triggering spell. The wearer is still subject to whatever effects the spell would normally inflict on them. The ability cannot be used again until they finish a short or long rest. When the wearer hits with an attack, they can choose to expend any number of charges from the cloak, dealing an additional 1d6 lightning damage per charge expended, which is inflicted on the target of that attack. If the attack strikes multiple targets, the wearer must choose one target from the group to be subject to this damage.

+2 Cabal’s Ruin grants the wearer advantage on all saving throws against spells and other magical effects. The cloak’s maximum charges becomes 6, and it regains 1d4+2 charges daily at dawn. When using the cloak’s ability to swallow a spell, the wearer gains resistance to the damage of that spell.

+3 The cloak’s maximum charges becomes 10, and it regains 1d6+4 charges daily at dawn.

And lastly, for the Beast Master ranger: she has a wolf and kinda feels underpowered. Hers is the most complicated and not 100% done. Link of the Devourer, a two-piece item: a collar and a bracelet that must be equipped by the companion and the ranger respectively.

Base form – when you cast a spell with a range of touch, your companion can deliver the spell as if it had cast the spell. Your companion must be within 100 feet of you, and it must use its reaction to deliver the spell when you cast it. If the spell requires an attack roll, you use your attack modifier for the roll.

Evolved form – the bond with your companion evolves beyond the natural. When either you or your companion takes damage, you can choose which one of you takes the damage. Damage must be transferred whole and cannot be split between the two.

Ascended form – your link is complete: you are always aware of how far away your companion is and in which general direction. While your companion is within 100 feet of you, you can communicate with it telepathically. Additionally, as an action, you can see through your companion's eyes and hear what it hears until the start of your next turn, gaining the benefits of any special senses the companion has. During this time, you are deaf and blind with regard to your own senses.

Now for the complicated part: based on what the companion devours, it gains traits of its prey, and the traits also apply to you. You can switch between unlocked forms during a short rest.

Feral form: devour a Beast to unlock. Use the dire wolf stat block for the companion; the ranger gets +10 ft. speed and temporary hit points equal to her character level.

Fey form: devour a Fey to unlock. Use the blink dog stat block for the companion; the ranger gets +2 to Charisma and can cast invisibility without spending a spell slot once per short rest.

Dragon form: devour a Dragon to unlock. Apply the dragon template from the Monster Manual to the companion; the ranger gets +1 to AC, resistance to the elemental damage type matching the dragon's color, and can add 1d6 of the same elemental damage to damage rolls.

Fiend form: not finished

Aberration form: not finished

Celestial form: not finished

So that is basically it. Any ideas whether some of them are more powerful than the others, whether they follow the theme, and any ideas for the 3 remaining forms?

How to compose load balancing and circuit breaking for external data source

So I have this issue. My website uses data that is scraped from a different site – sports results. This data can update at relatively random intervals, but I do not care if my data is a bit stale – it does not have to be instant, but it should update on some regular basis.

At the same time, I cannot just cache the responses from the external site: I process them and import them into a graph database so that I can do other analytics over them.

I would like to have a system like this:

    interface IDataSource {
        public function getData(): array;
    }

    class ExternalDataSource implements IDataSource {
        // gets data from the external website - the ultimate source of truth
    }

    class InternalDataSource implements IDataSource {
        // gets data from my own graph database
    }

    class InternalImportDecorator implements IDataSource {
        private $external;

        public function __construct(ExternalDataSource $external) {
            $this->external = $external;
        }

        public function getData(): array
        {
            $data = $this->external->getData();
            // import the data into my internal DB
            return $data;
        }
    }

    class CompositeDataSource implements IDataSource {
        private $external;
        private $internal;

        public function __construct(ExternalDataSource $external, InternalDataSource $internal)
        {
            $this->external = new InternalImportDecorator($external);
            $this->internal = $internal;
        }

        public function getData(): array // HERE I NEED HELP
        {
            if (rand(0, 100) < 95) { // in 95% of the cases, go for the internal DB - like a weighted load balancer somewhat
                // here I need something like "chain of responsibility" in case the internal DB is not yet populated
            } else { // go for the external data source, so that I can update my internal data
                // what if the external data source is not available? I need a circuit breaker with fallback to internal
                // what if I fall back to internal and the internal DB has not yet been populated
            }
        }
    }

I have a general idea about the code and the composition; I just need help with one method implementation. Or maybe just some nomenclature: what is this situation properly called, so that I can google it myself?
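For nomenclature: what is described here is usually called a weighted (or probabilistic) load balancer in front of a circuit breaker with a fallback chain, so "circuit breaker pattern", "fallback" and "chain of responsibility" are the terms to search for. Below is a minimal sketch of just the decision logic, written in Python for brevity; the class and function names are illustrative, not an existing library:

    import random
    import time

    class EmptySource(Exception):
        """Raised when a source has no data yet (e.g. internal DB not populated)."""

    class CircuitBreaker:
        """Minimal breaker: opens after `threshold` consecutive failures,
        then allows another attempt once `cooldown` seconds have passed."""
        def __init__(self, threshold=3, cooldown=60.0):
            self.threshold, self.cooldown = threshold, cooldown
            self.failures, self.opened_at = 0, None

        def closed(self):
            return self.opened_at is None or time.time() - self.opened_at >= self.cooldown

        def success(self):
            self.failures, self.opened_at = 0, None

        def failure(self):
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.time()

    def get_data(internal, external, breaker, internal_weight=0.95):
        """Weighted pick plus fallback chain: try the preferred source first,
        fall through to the other one if it fails or is empty."""
        prefer_internal = random.random() < internal_weight
        order = [internal, external] if prefer_internal else [external, internal]
        last_error = None
        for source in order:
            if source is external and not breaker.closed():
                continue  # breaker is open: skip the flaky external site
            try:
                data = source()  # a source is any zero-argument callable
                if not data:
                    raise EmptySource("source returned no data")
                if source is external:
                    breaker.success()
                return data
            except Exception as err:
                if source is external:
                    breaker.failure()
                last_error = err
        raise last_error  # every source in the chain failed

The same shape fits CompositeDataSource::getData() in the PHP above, with the two IDataSource instances playing the role of the callables.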

$n$-coin balancing problem

Rank weights of coins with a balance scale

I want to generalize the above problem to $n$ coins.

i.e.,

Using a balance scale, sort $n$ coins in order.

Slightly generalizing the above post [in that post, they didn't consider the equal-weight case]: when we use the balance scale, there are three possible outcomes. [Let $a, b$ be the two coins; then $a = b$, $a > b$, or $a < b$.]

Making decision trees for $n = 5$, I obtain that 7 weighings suffice.

And by a similar computation [with more effort] I figured out that for $n = 6$, 10 weighings are enough.

How about the generalization to $n$?

At this moment I have no idea how to generalize to $n$.
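For a lower bound, here is a standard counting argument (a sanity check, not a full answer). Each weighing has three outcomes ($<$, $=$, $>$), and since ties are allowed, a sorting strategy must distinguish every weak ordering of the $n$ coins; these are counted by the ordered Bell (Fubini) numbers $a(n) = \sum_{k=0}^{n} k! \, S(n,k)$, where $S(n,k)$ are Stirling numbers of the second kind. A strategy with $w$ weighings distinguishes at most $3^w$ outcomes, so

$$ w \ \ge\ \left\lceil \log_3 a(n) \right\rceil. $$

For example, $a(5) = 541$ gives $w \ge 6$, consistent with the 7 weighings found above, and $a(6) = 4683$ gives $w \ge 8$, consistent with the 10 above.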

Searching the internet, I found one particular arXiv paper,
1409.0250, but its analysis does not match mine even for $n = 5$. [For example, I thought Section 4 of that paper covered the same case as mine, but it seems not…]

Balancing function call overhead and testability in code that is part of a deep learning model training loop

I am currently implementing the transformer architecture for sequence-to-sequence problems. A key part of the model is the attention mechanism, which is basically a matrix multiplication followed by a masking operation and a softmax. My initial thought was to wrap these 3 steps in a function that looks like this:

    def attention(self, matrix_1, matrix_2, mask=None, trans_1=False, trans_2=False):
        att_stage_1 = F.matmul(matrix_1, matrix_2, transa=trans_1, transb=trans_2) * self.scale_score
        att_stage_2 = F.where(mask, att_stage_1, self.np.ones(att_stage_1.shape, 'f') * (-1e9))
        return F.softmax(att_stage_2, axis=3)

I want to write unit tests for this function to check whether the output is what I expect it to be. The problem, however, is that this function, as written, performs 3 separate operations: matmul, masking and softmax. I would prefer to verify that each of these operations produces correct output, but as it stands I can only check the final result. This leads me to a design where I would wrap each of the 3 operations in a separate, dedicated function and test them separately, along the lines of the sketch below. What concerns me, however, is that the overhead of Python function calls in a function that is invoked on every forward pass of the training loop may be unnecessary.
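For illustration, here is a minimal sketch of that split, using plain NumPy stand-ins for the Chainer calls above (the function names and the tiny asserts are mine, purely illustrative):

    import numpy as np

    def scaled_matmul(m1, m2, scale, trans_1=False, trans_2=False):
        """Step 1: (optionally transposed) matrix product, scaled."""
        a = np.swapaxes(m1, -1, -2) if trans_1 else m1
        b = np.swapaxes(m2, -1, -2) if trans_2 else m2
        return (a @ b) * scale

    def apply_mask(scores, mask, fill=-1e9):
        """Step 2: keep scores where mask is True, push the rest towards -inf."""
        return np.where(mask, scores, fill)

    def softmax(x, axis=-1):
        """Step 3: numerically stable softmax along `axis`."""
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention(m1, m2, mask, scale, trans_1=False, trans_2=False):
        """Composition of the three independently testable steps."""
        return softmax(apply_mask(scaled_matmul(m1, m2, scale, trans_1, trans_2), mask))

    # Each step now gets its own focused unit test, e.g.:
    scores = np.array([[1.0, 2.0]])
    assert np.allclose(softmax(scores).sum(axis=-1), 1.0)
    assert apply_mask(scores, np.array([[True, False]]))[0, 1] == -1e9

With this split, each helper has a focused test, and the composed function only adds three extra Python-level calls per forward pass, which is typically dwarfed by the cost of the matmul itself.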

Thus, the question is: what would be the correct approach to balancing design and reliability vs. performance in this scenario? Maybe I am missing some obvious approach here.

Trying to merge chunks to trigger better balancing after 50% of the data was deleted by the developers

Trying to merge chunks using the following command:

    db.adminCommand(
      {
        mergeChunks: "HTMLDumps.HTMLRepository",
        bounds: [ { "ShardMapId" : 2, "DomainId" : 62 },
                  { "ShardMapId" : 2, "DomainId" : 162 } ]
      }
    )

I get the following error when running the above command to merge any of the consecutive chunks available on a shard:

    {
        "ok" : 0,
        "errmsg" : "Failed to commit chunk merge :: caused by :: DuplicateKey: chunk operation commit failed: version 32|6||5ba8d864bba4ff264edf0bd9 doesn't exist in namespace: HTMLDumps.HTMLRepository. Unable to save chunk ops. Command: { applyOps: [ { op: \"u\", b: false, ns: \"config.chunks\", o: { _id: \"HTMLDumps.HTMLRepository-ShardMapId_2.0DomainId_62.0\", ns: \"HTMLDumps.HTMLRepository\", min: { ShardMapId: 2.0, DomainId: 62.0 }, max: { ShardMapId: 2, DomainId: 162 }, shard: \"shard0000\", lastmod: Timestamp(32, 6), lastmodEpoch: ObjectId('5ba8d864bba4ff264edf0bd9') }, o2: { _id: \"HTMLDumps.HTMLRepository-ShardMapId_2.0DomainId_62.0\" } }, { op: \"d\", ns: \"config.chunks\", o: { _id: \"HTMLDumps.HTMLRepository-ShardMapId_2DomainId_109\" } } ], preCondition: [ { ns: \"config.chunks\", q: { query: { ns: \"HTMLDumps.HTMLRepository\", min: { ShardMapId: 2.0, DomainId: 62.0 }, max: { ShardMapId: 2, DomainId: 109 } }, orderby: { lastmod: -1 } }, res: { lastmodEpoch: ObjectId('5ba8d864bba4ff264edf0bd9'), shard: \"shard0000\" } }, { ns: \"config.chunks\", q: { query: { ns: \"HTMLDumps.HTMLRepository\", min: { ShardMapId: 2, DomainId: 109 }, max: { ShardMapId: 2, DomainId: 162 } }, orderby: { lastmod: -1 } }, res: { lastmodEpoch: ObjectId('5ba8d864bba4ff264edf0bd9'), shard: \"shard0000\" } } ], writeConcern: { w: 0, wtimeout: 0 } }. Result: { applied: 1, code: 11000, codeName: \"DuplicateKey\", errmsg: \"E11000 duplicate key error collection: config.chunks index: ns_1_min_1 dup key: { : \"HTMLDumps.HTMLRepository\", : { ShardMapId: 2.0, DomainId: 62.0 } }\", results: [ false ], ok: 0.0, operationTime: Timestamp(1554112692, 1), $gleStats: { lastOpTime: { ts: Timestamp(1554112692, 1), t: 13 }, electionId: ObjectId('7fffffff000000000000000d') }, $clusterTime: { clusterTime: Timestamp(1554112692, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } :: caused by :: E11000 duplicate key error collection: config.chunks index: ns_1_min_1 dup key: { : \"HTMLDumps.HTMLRepository\", : { ShardMapId: 2.0, DomainId: 62.0 } }",
        "code" : 11000,
        "codeName" : "DuplicateKey",
        "operationTime" : Timestamp(1554112687, 1),
        "$clusterTime" : {
            "clusterTime" : Timestamp(1554112687, 1),
            "signature" : {
                "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                "keyId" : NumberLong(0)
            }
        }
    }

This happens regardless of which chunks I select. My main reason for doing this is to achieve true balancing of the data, not just of the chunk counts. Recently the developers deleted 90% of the data from these chunks, which moved the distribution from roughly 60/40 to 90/10. I hope to merge/remove the empty chunks so the balancer can bring the data distribution back as close to 60/40 as possible.
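One detail worth checking, since the error above mixes numeric types (min: { ShardMapId: 2.0, DomainId: 62.0 } vs. max: { ShardMapId: 2, DomainId: 162 }): the bounds passed to mergeChunks have to reproduce the boundaries of existing chunks in config.chunks exactly, including the BSON numeric type. A quick sketch for listing the actual stored boundaries, using pymongo against a mongos (the connection string is a placeholder):

    from pymongo import MongoClient

    # Placeholder URI: point this at a mongos for the cluster.
    client = MongoClient("mongodb://mongos-host:27017")

    # Chunk metadata lives in the config database. The min/max values
    # printed here are the exact (typed) boundaries that the mergeChunks
    # bounds parameter must reproduce.
    for chunk in client["config"]["chunks"].find(
            {"ns": "HTMLDumps.HTMLRepository"}).sort("min", 1):
        print(chunk["shard"], chunk["min"], chunk["max"],
              type(chunk["min"]["DomainId"]))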

Applications for Service Discovery outside of Client-Side Load Balancing

I’ve been told that service discovery and client-side load balancing are two distinct concepts, however:

  1. I don’t see what you would use service discovery for outside of client-side load balancing; and
  2. I don’t see how you could implement auto-scale-enabled client-side load balancing without service discovery!

My understanding of service discovery is that you have some kind of client/agent running on each of your nodes that all use a consensus service (Consul, ZooKeeper, Eureka, etc.) to communicate the IPs of the healthy/active instances of all the backing services/resources that your nodes depend on. So if a 5-node Service A talks to a 10-node Service B, and one of those 10 Service B nodes goes “down”, the consensus service will alert all 5 Service A nodes not to talk to that particular Service B instance (IP). To me, this is client-side load balancing.

My understanding of client-side load balancing is that each node of Service A makes the decision as to which Service B node it talks to. The advantage of this, as opposed to a classic centralized load balancer sitting in front of all Service B nodes, is that there is no single point of failure (SPoF): no central load balancer whose failure takes everything down. But the only way (that I can see!) to implement this while enabling auto-scaling of both services is to use service discovery, as in the sketch below.
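To make the coupling concrete, here is a toy sketch of what would run inside each Service A node (get_healthy_instances is a hypothetical stand-in for a Consul/ZooKeeper/Eureka client, and the IPs are demo data):

    import random

    def get_healthy_instances(service_name):
        """Hypothetical discovery lookup: in reality this would query the
        consensus service (Consul, ZooKeeper, Eureka, ...) for healthy IPs."""
        return ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # demo data

    class ClientSideBalancer:
        """Runs inside each Service A node and picks which Service B
        instance to call, based on the discovery service's current view."""
        def __init__(self, service_name):
            self.service_name = service_name

        def pick(self):
            instances = get_healthy_instances(self.service_name)  # fresh view
            return random.choice(instances)  # or round-robin, least-loaded, ...

    balancer = ClientSideBalancer("service-b")
    print("sending request to", balancer.pick())

Here the pick() policy is the client-side load-balancing half, and the get_healthy_instances view is the service-discovery half; my question amounts to asking whether that view has any consumers other than pick().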

So I ask: how are these concepts really different if you can’t have one without the other? Or is there a whole universe of functionality that you get with service discovery that has nothing to do with client-side load balancing?!