What happens when identical overlapping effects have their end-condition met?

The Dungeon Master’s Guide errata (direct download) added the “Combining Game Effects” section which states (emphasis mine):

Different game features can affect a target at the same time. But when two or more game features have the same name, only the effects of one of them—the most potent one—apply while the durations of the effects overlap. […]

So with overlapping things, one instance will be “active” (will have effects) and the other will be “inactive” (won’t have effects). That said, I think my question will make more sense with examples:

  1. The Fire Elemental’s Fire Form trait:

    […] The first time it enters a creature’s space on a turn, that creature takes 5 (1d10) fire damage and catches fire; until someone takes an action to douse the fire, the creature takes 5 (1d10) fire damage at the start of each of its turns.

    If a creature is under the effects of multiple instances of Fire Form, and somebody uses their action to douse the fire, are both instances removed or only one?

  2. The booming blade spell:

    […] If the target willingly moves before then, it immediately takes 1d8 thunder damage, and the spell ends. […]

    If a creature is under the effects of multiple instances of booming blade, and they move, are both instances removed or only one?

  3. The hold monster spell:

    […] At the end of each of its turns, the target can make another Wisdom saving throw. On a success, the spell ends on the target. […]

    If a creature is under the effects of multiple instances of hold monster, and they succeed on a save, are both instances removed or only one?

Can an “inactive” effect end when its end-condition is met (letting them end simultaneously), or do these sorts of things always end one at a time? Or perhaps the answer is something in-between?


There is also the following related question:

  • Can multiple creatures grapple a single target?

Both answers there support that if you are grappled by multiple creatures and you make a check to remove a grapple, you only remove one of the grapples. It’s not a perfectly analogous situation, but it is somewhat similar.

Identical records in IIS logs

I am not very knowledgeable about IIS 7, so I thought this was the right place to ask.

While inspecting the web server logs, I came across several instances of separate records that look just the same. For example:

    #Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status time-taken
    2020-04-21 00:00:10 ABC.128.138.15 GET /MY/API/HERE departmentId=&prodLineId=&prodUnitId=&puGroupId=&FLId=null&startTime=2019-09-08T21:00:00.000Z&endTime=2019-09-09T21:00:00.000Z&status=null&itemType=&itemComponent=&howFound=&priority=&foundBy=&fixedBy=&flList= 80 - ABC.128.138.15 Apache-HttpClient/4.5.6+(Java/1.8.0_92) - 200 0 0 1453
    2020-04-21 00:00:10 ABC.128.138.15 GET /MY/API/HERE departmentId=&prodLineId=&prodUnitId=&puGroupId=&FLId=null&startTime=2019-09-08T21:00:00.000Z&endTime=2019-09-09T21:00:00.000Z&status=null&itemType=&itemComponent=&howFound=&priority=&foundBy=&fixedBy=&flList= 80 - ABC.128.138.15 Apache-HttpClient/4.5.6+(Java/1.8.0_92) - 200 0 0 1453

The question is – do those records correspond to two actual separate requests, or was it just one request that somehow got duplicated in the log? This is not an isolated occurrence (there are hundreds more). For the record, these are all GET requests coming from the same source (an Apache Tomcat-based application that resides on the same web server and invokes APIs in different application pools).
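To gauge how widespread the duplication is, one option would be to aggregate the logs with Microsoft’s Log Parser 2.2 (a sketch of my own, not from the original setup; u_ex*.log is a placeholder for the actual log file pattern):

    LogParser -i:IISW3C "SELECT date, time, cs-uri-stem, c-ip, COUNT(*) AS hits FROM u_ex*.log GROUP BY date, time, cs-uri-stem, c-ip HAVING COUNT(*) > 1"

Every returned row with hits greater than 1 is a same-second, same-URI, same-client combination like the pair shown above.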

Thanks in advance,

Gabriel

Is multiplying hashes a valid way to ensure two sets of data are identical (but in arbitrary order)?

Let’s say “User A” has a set of data like below. Each entry has been hashed (sha256) to ensure integrity within a single entry. You can’t modify data of a single entry without also modifying the corresponding hash:

[ { data: "000000", hash: "91b4d142823f7d20c5f08df69122de43f35f057a988d9619f6d3138485c9a203" },  { data: "111111", hash: "bcb15f821479b4d5772bd0ca866c00ad5f926e3580720659cc80d39c9d09802a" },  { data: "345345", hash: "dbd3b3fcc3286d927ec214c5648fbb226353a239789750f51430b1e6e9d91f4f" },  ] 

And “User B” has the same data but in a slightly different order. Hashes are the same of course:

[ { data: "345345", hash: "dbd3b3fcc3286d927ec214c5648fbb226353a239789750f51430b1e6e9d91f4f" },  { data: "111111", hash: "bcb15f821479b4d5772bd0ca866c00ad5f926e3580720659cc80d39c9d09802a" },  { data: "000000", hash: "91b4d142823f7d20c5f08df69122de43f35f057a988d9619f6d3138485c9a203" },  ] 

I want to allow both users to verify they have exactly the same set of data, ignoring sort order. If, as an extreme example, a hacker were able to replace User B’s files with otherwise valid-looking data, the users should be able to compare a hash of their entire datasets and detect a mismatch.

I was thinking of calculating a “total hash” which the users could compare to verify their datasets match. It should be next to impossible to fabricate a valid-looking dataset that results in the same “total hash”. But since the order can change, it’s a bit tricky.

I might have a possible solution, but I’m not sure if it’s secure enough. Is it, actually, secure at all?

My idea is to convert each sha256 hash to an integer (a JavaScript BigInt) and multiply them together modulo a large number, to get a total hash of similar length:

    var entries = [
      { data: "345345", hash: "dbd3b3fcc3286d927ec214c5648fbb226353a239789750f51430b1e6e9d91f4f" },
      { data: "111111", hash: "bcb15f821479b4d5772bd0ca866c00ad5f926e3580720659cc80d39c9d09802a" },
      { data: "000000", hash: "91b4d142823f7d20c5f08df69122de43f35f057a988d9619f6d3138485c9a203" }
    ];

    var hashsize = BigInt("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");
    var totalhash = BigInt(1); // arbitrary starting point

    for (var i = 0; i < entries.length; i++) {
      var entryhash = BigInt("0x" + entries[i].hash);
      totalhash = (totalhash * entryhash) % hashsize;
    }

    totalhash = totalhash.toString(16); // convert from BigInt back to a hex string

This should result in the same hash for both User A and User B, unless one of them has tampered with the data, right? How hard would it be to create a slightly different but valid-looking dataset that results in the same total checksum? Or is there a better way to accomplish this (without sorting!)?

Aurora PostgreSQL database using a slower query plan than a normal PostgreSQL for an identical query?

Following the migration of an application and its database from a classical PostgreSQL database to an Amazon Aurora RDS PostgreSQL database (both running version 9.6), we have found that a specific query runs much slower (around 10 times slower) on Aurora than on PostgreSQL.

Both databases have the same configuration, be it for the hardware or the pg_conf.

The query itself is fairly simple. It is generated from our backend written in Java and using jOOQ for writing the queries:

with "all_acp_ids"("acp_id") as (     select acp_id from temp_table_de3398bacb6c4e8ca8b37be227eac089 )  select distinct "public"."f1_folio_milestones"."acp_id",      coalesce("public"."sa_milestone_overrides"."team",      "public"."f1_folio_milestones"."team_responsible")  from "public"."f1_folio_milestones"  left outer join      "public"."sa_milestone_overrides" on (         "public"."f1_folio_milestones"."milestone" = "public"."sa_milestone_overrides"."milestone"          and "public"."f1_folio_milestones"."view" = "public"."sa_milestone_overrides"."view"          and "public"."f1_folio_milestones"."acp_id" = "public"."sa_milestone_overrides"."acp_id" ) where "public"."f1_folio_milestones"."acp_id" in (     select "all_acp_ids"."acp_id" from "all_acp_ids" ) 

Here temp_table_de3398bacb6c4e8ca8b37be227eac089 is a single-column table, while f1_folio_milestones (17 million rows) and sa_milestone_overrides (around 1 million rows) are similarly designed tables with indexes on all the columns used in the LEFT OUTER JOIN.

When we run it on the normal PostgreSQL database, it generates the following query plan:

    Unique  (cost=4802622.20..4868822.51 rows=8826708 width=43) (actual time=483.928..483.930 rows=1 loops=1)
      CTE all_acp_ids
        ->  Seq Scan on temp_table_de3398bacb6c4e8ca8b37be227eac089  (cost=0.00..23.60 rows=1360 width=32) (actual time=0.004..0.005 rows=1 loops=1)
      ->  Sort  (cost=4802598.60..4824665.37 rows=8826708 width=43) (actual time=483.927..483.927 rows=4 loops=1)
            Sort Key: f1_folio_milestones.acp_id, (COALESCE(sa_milestone_overrides.team, f1_folio_milestones.team_responsible))
            Sort Method: quicksort  Memory: 25kB
            ->  Hash Left Join  (cost=46051.06..3590338.34 rows=8826708 width=43) (actual time=483.905..483.917 rows=4 loops=1)
                  Hash Cond: ((f1_folio_milestones.milestone = sa_milestone_overrides.milestone) AND (f1_folio_milestones.view = (sa_milestone_overrides.view)::text) AND (f1_folio_milestones.acp_id = (sa_milestone_overrides.acp_id)::text))
                  ->  Nested Loop  (cost=31.16..2572.60 rows=8826708 width=37) (actual time=0.029..0.038 rows=4 loops=1)
                        ->  HashAggregate  (cost=30.60..32.60 rows=200 width=32) (actual time=0.009..0.010 rows=1 loops=1)
                              Group Key: all_acp_ids.acp_id
                              ->  CTE Scan on all_acp_ids  (cost=0.00..27.20 rows=1360 width=32) (actual time=0.006..0.007 rows=1 loops=1)
                        ->  Index Scan using f1_folio_milestones_acp_id_idx on f1_folio_milestones  (cost=0.56..12.65 rows=5 width=37) (actual time=0.018..0.025 rows=4 loops=1)
                              Index Cond: (acp_id = all_acp_ids.acp_id)
                  ->  Hash  (cost=28726.78..28726.78 rows=988178 width=34) (actual time=480.423..480.423 rows=987355 loops=1)
                        Buckets: 1048576  Batches: 1  Memory Usage: 72580kB
                        ->  Seq Scan on sa_milestone_overrides  (cost=0.00..28726.78 rows=988178 width=34) (actual time=0.004..189.641 rows=987355 loops=1)
    Planning time: 3.561 ms
    Execution time: 489.223 ms

And it goes pretty smoothly as one can see — less than a second for the query. But on the Aurora instance, this happens:

    Unique  (cost=2632927.29..2699194.83 rows=8835672 width=43) (actual time=4577.348..4577.350 rows=1 loops=1)
      CTE all_acp_ids
        ->  Seq Scan on temp_table_de3398bacb6c4e8ca8b37be227eac089  (cost=0.00..23.60 rows=1360 width=32) (actual time=0.001..0.001 rows=1 loops=1)
      ->  Sort  (cost=2632903.69..2654992.87 rows=8835672 width=43) (actual time=4577.348..4577.348 rows=4 loops=1)
            Sort Key: f1_folio_milestones.acp_id, (COALESCE(sa_milestone_overrides.team, f1_folio_milestones.team_responsible))
            Sort Method: quicksort  Memory: 25kB
            ->  Merge Left Join  (cost=1321097.58..1419347.08 rows=8835672 width=43) (actual time=4488.369..4577.330 rows=4 loops=1)
                  Merge Cond: ((f1_folio_milestones.view = (sa_milestone_overrides.view)::text) AND (f1_folio_milestones.milestone = sa_milestone_overrides.milestone) AND (f1_folio_milestones.acp_id = (sa_milestone_overrides.acp_id)::text))
                  ->  Sort  (cost=1194151.06..1216240.24 rows=8835672 width=37) (actual time=0.039..0.040 rows=4 loops=1)
                        Sort Key: f1_folio_milestones.view, f1_folio_milestones.milestone, f1_folio_milestones.acp_id
                        Sort Method: quicksort  Memory: 25kB
                        ->  Nested Loop  (cost=31.16..2166.95 rows=8835672 width=37) (actual time=0.022..0.028 rows=4 loops=1)
                              ->  HashAggregate  (cost=30.60..32.60 rows=200 width=32) (actual time=0.006..0.006 rows=1 loops=1)
                                    Group Key: all_acp_ids.acp_id
                                    ->  CTE Scan on all_acp_ids  (cost=0.00..27.20 rows=1360 width=32) (actual time=0.003..0.004 rows=1 loops=1)
                              ->  Index Scan using f1_folio_milestones_acp_id_idx on f1_folio_milestones  (cost=0.56..10.63 rows=4 width=37) (actual time=0.011..0.015 rows=4 loops=1)
                                    Index Cond: (acp_id = all_acp_ids.acp_id)
                  ->  Sort  (cost=126946.52..129413.75 rows=986892 width=34) (actual time=4462.727..4526.822 rows=448136 loops=1)
                        Sort Key: sa_milestone_overrides.view, sa_milestone_overrides.milestone, sa_milestone_overrides.acp_id
                        Sort Method: quicksort  Memory: 106092kB
                        ->  Seq Scan on sa_milestone_overrides  (cost=0.00..28688.92 rows=986892 width=34) (actual time=0.003..164.348 rows=986867 loops=1)
    Planning time: 1.394 ms
    Execution time: 4583.295 ms

It effectively has a lower overall cost, but takes almost 10 times as long as before!

Disabling merge joins makes Aurora revert to a hash join, which gives the expected execution time, but permanently disabling them is not an option. Curiously though, disabling nested loops gives an even better result while still using a merge join (the session-level toggles for reproducing this are sketched after the plan below)…

    Unique  (cost=3610230.74..3676431.05 rows=8826708 width=43) (actual time=2.465..2.466 rows=1 loops=1)
      CTE all_acp_ids
        ->  Seq Scan on temp_table_de3398bacb6c4e8ca8b37be227eac089  (cost=0.00..23.60 rows=1360 width=32) (actual time=0.004..0.004 rows=1 loops=1)
      ->  Sort  (cost=3610207.14..3632273.91 rows=8826708 width=43) (actual time=2.464..2.464 rows=4 loops=1)
            Sort Key: f1_folio_milestones.acp_id, (COALESCE(sa_milestone_overrides.team, f1_folio_milestones.team_responsible))
            Sort Method: quicksort  Memory: 25kB
            ->  Merge Left Join  (cost=59.48..2397946.87 rows=8826708 width=43) (actual time=2.450..2.455 rows=4 loops=1)
                  Merge Cond: (f1_folio_milestones.acp_id = (sa_milestone_overrides.acp_id)::text)
                  Join Filter: ((f1_folio_milestones.milestone = sa_milestone_overrides.milestone) AND (f1_folio_milestones.view = (sa_milestone_overrides.view)::text))
                  ->  Merge Join  (cost=40.81..2267461.88 rows=8826708 width=37) (actual time=2.312..2.317 rows=4 loops=1)
                        Merge Cond: (f1_folio_milestones.acp_id = all_acp_ids.acp_id)
                        ->  Index Scan using f1_folio_milestones_acp_id_idx on f1_folio_milestones  (cost=0.56..2223273.29 rows=17653416 width=37) (actual time=0.020..2.020 rows=1952 loops=1)
                        ->  Sort  (cost=40.24..40.74 rows=200 width=32) (actual time=0.011..0.012 rows=1 loops=1)
                              Sort Key: all_acp_ids.acp_id
                              Sort Method: quicksort  Memory: 25kB
                              ->  HashAggregate  (cost=30.60..32.60 rows=200 width=32) (actual time=0.008..0.008 rows=1 loops=1)
                                    Group Key: all_acp_ids.acp_id
                                    ->  CTE Scan on all_acp_ids  (cost=0.00..27.20 rows=1360 width=32) (actual time=0.005..0.005 rows=1 loops=1)
                  ->  Materialize  (cost=0.42..62167.38 rows=987968 width=34) (actual time=0.021..0.101 rows=199 loops=1)
                        ->  Index Scan using sa_milestone_overrides_acp_id_index on sa_milestone_overrides  (cost=0.42..59697.46 rows=987968 width=34) (actual time=0.019..0.078 rows=199 loops=1)
    Planning time: 5.500 ms
    Execution time: 2.516 ms
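For reference, a minimal sketch of how these join methods can be toggled per session, using PostgreSQL’s standard planner settings (ordinary GUCs, not Aurora-specific, and not necessarily the exact commands we ran):

    -- Reproduce the hash-join plan by disabling merge joins for this session:
    SET enable_mergejoin = off;

    -- ...or reproduce the fast merge-join plan by disabling nested loops:
    SET enable_nestloop = off;

    -- Run EXPLAIN ANALYZE on the query here, then restore the defaults:
    RESET enable_mergejoin;
    RESET enable_nestloop;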

We have asked the AWS support team and they are still looking into the issue, but we are wondering what could cause it. What could explain such a difference in behaviour?

While looking at some of the documentation for the database, I read that Aurora favors cost over time — and hence it uses the query plan that has the lowest cost.

But as we can see, it’s far from being optimal given its response time… Is there a threshold or a setting that could make the database use a more expensive — but faster — query plan?

Collision detection when falling: two identical cases?

I am looking for a conceptual solution to my problem. It’s a simple platformer-like game where the player can move horizontally during free fall.

Consider these two cases: [image: diagram of the two collision cases, the player falling onto the top of the box vs. falling into its left edge]

In the first case, from a gameplay point of view, the player should land on top of the box; in the second case the player hits the left edge and should therefore fall down.

However, from my code’s point of view (the “real behaviour”), both collision detection cases are identical. I am not sure how to separate them.

In both cases the vertical velocity is positive (falling down) and the player is moving with some fixed positive horizontal velocity (moving right).

From a collision standpoint the two cases are identical, I think. How can I tell whether I should put the player on top of the box or let them fall?
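One common way to separate the cases, sketched below (my illustration, not from the original post): keep the player’s position from the previous frame and resolve according to the side of approach. All field names (prevY, vx, vy, onGround, …) are placeholders for whatever the engine actually uses.

    // Sketch: decide the collision response from where the player came from.
    // Assumes y grows downward (vy > 0 means falling), and that player.prevY
    // holds the player's top y-coordinate on the previous frame.
    function resolveCollision(player, box) {
      var feetWereAbove = player.prevY + player.height <= box.y;
      if (feetWereAbove && player.vy > 0) {
        // Case 1: the feet were above the box's top edge last frame, so the
        // player crossed the top surface this frame: land on it.
        player.y = box.y - player.height;
        player.vy = 0;
        player.onGround = true;
      } else {
        // Case 2: the player was already below the top edge when the overlap
        // began: treat it as a side hit, push out horizontally, keep falling.
        player.x = player.vx > 0 ? box.x - player.width : box.x + box.width;
      }
    }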

Should a retail web-site always return the same items for identical searches made by different users?

We supply a large number of products for purchase through our web-site. There is a new initiative to apply a third-party AI product that hijacks searches and returns products based on both the search term and predictions drawn from the user’s browsing history and from other people’s search history with successfully processed sales. If I now search for a keyword, I will get a different set of products than someone else searching for the same term. If I pass the URL of my search to a friend to compare products, we will have different lists, so we cannot discuss them. I also find that the list changes day-to-day on my own machine, due to the AI’s suggestions.

Is this a design “no-no”?

Should the AI be solely used for recommendations and not for the core search results?

Is there any guideline to cite that makes suggestions on this?

I have also put this on: Software Recommendations

Running CMake through GUI – After identical reinstall of Ubuntu, cmake-gui throws an error and does not launch

I’ve been reinstalling Ubuntu on my server a bunch of times, as I’ve inadvertently mucked something up every time, and found it easier to just do a fresh install.

Now, for some reason, after doing a fresh install with Ubuntu 19.04 minimal, as provided by my server host Hetzner, the cmake-gui command does not run.

The server itself does not come with a GUI, so I installed LXDE on it, and I can run GUI applications just fine while connecting remotely through X2Go. I’ve installed cmake and cmake-gui, yet for some reason cmake-gui throws the following error: https://i.imgur.com/ifLX1Zn.png

From the few relevant Google results, it seems to have something to do with display drivers not interfacing well with the X server, the suggested solution being to downgrade the kernel. I never had to do anything like that in prior Ubuntu installs, so I assume that’s not the issue at all.
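Since the screenshot isn’t reproduced here, a generic note (my suggestion, not from the original post): cmake-gui is a Qt application, and Qt can print verbose plugin-loading diagnostics that usually pinpoint this kind of launch failure:

    # Qt prints detailed plugin-loading diagnostics to the terminal; on
    # headless or X-forwarded setups this typically names the failing
    # platform plugin (e.g. "xcb") and any missing shared libraries.
    QT_DEBUG_PLUGINS=1 cmake-gui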

So how do I resolve this?

Not possible to change monitor brightness for one of 2 identical monitors

At work I have a computer with two identical Samsung monitors connected. The monitors have buttons to control brightness. On one of them I can change the brightness normally. On the other, when I open the menu using the monitor’s buttons, it tells me the option is not available.
So I tried this answer, but it only changes the brightness relative to the brightness set on the monitor itself. So I can only make it darker, not brighter.
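For context, software-side brightness tools scale the pixel values relative to whatever the backlight is physically set to, which is why they cannot go brighter than the monitor’s own setting. A hypothetical example with xrandr (DP-1 is a placeholder output name; I am assuming, not asserting, that the linked answer works this way):

    # Scales the video signal in software; it cannot raise the physical backlight.
    xrandr --output DP-1 --brightness 0.7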
I think this is not a problem with the monitor, but rather with Ubuntu and its settings. Unfortunately, I don’t have sudo rights on the machine, though.
Does anyone have an idea why the monitors are behaving like this?