Are there any instances of rules for Tabaxi that say they only need 4 hours of rest?

One of my players has just started playing a tabaxi. She is sure she read a rule saying that, as a racial trait, tabaxi only need 4 hours for a long rest. However, neither of us can find any rule stating this. She is happy to play without it, but I just wanted to confirm: have such long rest rules ever existed for the tabaxi?

I have checked D&D Beyond, and I have the official rules for the tabaxi in physical form; both confirm that for 5e it is 8 hours.

What I am asking is whether there is a source she may have read it in: an older edition of D&D, Pathfinder, an unofficial rule, or UA material that never progressed any further. She accepts it is possible she either misread something or just imagined reading it somewhere.

Retaining earliest instances of duplicate entries from a table

We have a situation where duplicate entries have crept into a table of ours with more than 60 million entries (duplicate here means that all fields, except the AUTO_INCREMENT index field, have the same value). We suspect that there are about 2 million duplicate entries in the table. We would like to delete these duplicates such that the earliest instance of each duplicated entry is retained.

Let me explain with an illustrative table:

CREATE TABLE people (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  name VARCHAR(40) NOT NULL DEFAULT '',
  age INT NOT NULL DEFAULT 0,
  phrase VARCHAR(40) NOT NULL DEFAULT '',
  PRIMARY KEY (id)
);

INSERT INTO people(name, age, phrase) VALUES
('John Doe', 25, 'qwert'),
('William Smith', 19, 'yuiop'),
('Peter Jones', 19, 'yuiop'),
('Ronnie Arbuckle', 32, 'asdfg'),
('Ronnie Arbuckle', 32, 'asdfg'),
('Mary Evans', 18, 'hjklp'),
('Mary Evans', 18, 'hjklpd'),
('John Doe', 25, 'qwert');

SELECT * FROM people;
+----+-----------------+-----+--------+
| id | name            | age | phrase |
+----+-----------------+-----+--------+
|  1 | John Doe        |  25 | qwert  |
|  2 | William Smith   |  19 | yuiop  |
|  3 | Peter Jones     |  19 | yuiop  |
|  4 | Ronnie Arbuckle |  32 | asdfg  |
|  5 | Ronnie Arbuckle |  32 | asdfg  |
|  6 | Mary Evans      |  18 | hjklp  |
|  7 | Mary Evans      |  18 | hjklpd |
|  8 | John Doe        |  25 | qwert  |
+----+-----------------+-----+--------+

We would like to remove duplicate entries so that we get the following output:

SELECT * FROM people;
+----+-----------------+-----+--------+
| id | name            | age | phrase |
+----+-----------------+-----+--------+
|  1 | John Doe        |  25 | qwert  |
|  2 | William Smith   |  19 | yuiop  |
|  3 | Peter Jones     |  19 | yuiop  |
|  4 | Ronnie Arbuckle |  32 | asdfg  |
|  6 | Mary Evans      |  18 | hjklp  |
|  7 | Mary Evans      |  18 | hjklpd |
+----+-----------------+-----+--------+

On smaller tables the following approach would work:

CREATE TABLE people_uniq LIKE people;

INSERT INTO people_uniq
SELECT * FROM people GROUP BY name, age, phrase;

DROP TABLE people;

RENAME TABLE people_uniq TO people;

SELECT * FROM people;
+----+-----------------+-----+--------+
| id | name            | age | phrase |
+----+-----------------+-----+--------+
|  1 | John Doe        |  25 | qwert  |
|  2 | William Smith   |  19 | yuiop  |
|  3 | Peter Jones     |  19 | yuiop  |
|  4 | Ronnie Arbuckle |  32 | asdfg  |
|  6 | Mary Evans      |  18 | hjklp  |
|  7 | Mary Evans      |  18 | hjklpd |
+----+-----------------+-----+--------+

Kindly suggest a solution that would scale to a table with tens of millions of entries and many more columns. We are using MySQL version 5.6.49.
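
For illustration, the kind of statement we have been considering is a multi-table DELETE that keeps the smallest id in each group of identical rows. This is only a sketch shown against the illustrative table; we assume it would need a supporting composite index on (name, age, phrase), and probably batching by id range, to be viable against 60 million rows:

-- Delete every row for which an identical row with a smaller id exists,
-- thereby retaining the earliest instance of each duplicate group.
-- Assumes an index on (name, age, phrase) to avoid a quadratic self-join.
DELETE p2
FROM people p1
JOIN people p2
  ON  p2.name   = p1.name
  AND p2.age    = p1.age
  AND p2.phrase = p1.phrase
  AND p2.id     > p1.id;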

Aggregate Multiple Instances of Each Row Without Multiple Seq Scans

I am trying to perform some mathematical operations in PostgreSQL that involve calculating multiple values from each row and then aggregating, without requiring multiple Seq Scans over the whole table. Performance is critical for my application, so I want this to run as efficiently as possible on large data sets. Is there anything I can do to get PostgreSQL to use only a single Seq Scan?

Here’s a simplified example:

Given this test data set:

postgres=> CREATE TABLE values (value int); postgres=> INSERT INTO values (value) SELECT * from generate_series(-500000,500000); postgres=> SELECT * FROM values;   value ---------  -500000  -499999  -499998  -499997  -499996 ...  499996  499997  499998  499999  500000 

I want to run a query that counts two instances of each row: once by the value column and once by abs(value). I'm currently accomplishing this with a CROSS JOIN:

SELECT
  CASE idx
  WHEN 0 THEN value
  WHEN 1 THEN abs(value)
  END,
  COUNT(value)
FROM values
CROSS JOIN LATERAL unnest(ARRAY[0,1]) idx
GROUP BY 1;

Here’s the EXPLAIN ANALYZE result for this query. Notice the loops=2 in the Seq Scan line:

postgres=> EXPLAIN ANALYZE SELECT ....
                                                          QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=82194.40..82201.40 rows=400 width=12) (actual time=997.448..1214.576 rows=1000001 loops=1)
   Group Key: CASE idx.idx WHEN 0 THEN "values".value WHEN 1 THEN abs("values".value) ELSE NULL::integer END
   ->  Nested Loop  (cost=0.00..70910.65 rows=2256750 width=8) (actual time=0.024..390.070 rows=2000002 loops=1)
         ->  Function Scan on unnest idx  (cost=0.00..0.02 rows=2 width=4) (actual time=0.005..0.007 rows=2 loops=1)
         ->  Seq Scan on "values"  (cost=0.00..15708.75 rows=1128375 width=4) (actual time=0.012..82.584 rows=1000001 loops=2)
 Planning Time: 0.073 ms
 Execution Time: 1254.362 ms

I compared this to the case of using only one instance of each row rather than two. The one-instance query performs a single Seq Scan and runs ~50% faster (as expected):

postgres=> EXPLAIN ANALYZE SELECT
postgres->   value,
postgres->   COUNT(value)
postgres-> FROM values
postgres-> GROUP BY 1;
                                                       QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=21350.62..21352.62 rows=200 width=12) (actual time=444.381..662.952 rows=1000001 loops=1)
   Group Key: value
   ->  Seq Scan on "values"  (cost=0.00..15708.75 rows=1128375 width=4) (actual time=0.015..84.494 rows=1000001 loops=1)
 Planning Time: 0.044 ms
 Execution Time: 702.806 ms
(5 rows)

I want to scale this up to a much larger data set, so performance is critical. Are there any optimizations that would cause my original query to run with only one Seq Scan? I've tried tweaking query plan settings (enable_nestloop, work_mem, etc.); see my other attempts below.

Other Attempts

Here are some other approaches I tried:

  1. Using UNION still performs 2 Seq Scans:
SELECT
  value,
  COUNT(value)
FROM (
  SELECT value FROM values
  UNION
  SELECT abs(value) AS value FROM values
) tbl
GROUP BY 1;
postgres=> EXPLAIN ANALYZE ...
                                                                  QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=130150.31..130152.31 rows=200 width=12) (actual time=1402.221..1513.000 rows=1000001 loops=1)
   Group Key: "values".value
   ->  HashAggregate  (cost=73731.56..96299.06 rows=2256750 width=4) (actual time=892.904..1112.867 rows=1000001 loops=1)
         Group Key: "values".value
         ->  Append  (cost=0.00..68089.69 rows=2256750 width=4) (actual time=0.025..343.921 rows=2000002 loops=1)
               ->  Seq Scan on "values"  (cost=0.00..15708.75 rows=1128375 width=4) (actual time=0.024..86.299 rows=1000001 loops=1)
               ->  Seq Scan on "values" values_1  (cost=0.00..18529.69 rows=1128375 width=4) (actual time=0.013..110.885 rows=1000001 loops=1)
 Planning Time: 0.067 ms
 Execution Time: 1598.531 ms
  2. Using PL/pgSQL. This performs only one Seq Scan, but array operations in PL/pgSQL are very slow, so it actually executes slower than the original:
CREATE TEMP TABLE result (value int, count int);
DO LANGUAGE PLPGSQL $$
  DECLARE
    counts int8[];
    row record;
  BEGIN
    counts = array_fill(0, ARRAY[500000]);
    FOR row IN (SELECT value FROM values) LOOP
      counts[row.value] = counts[row.value] + 1;
      counts[abs(row.value)] = counts[abs(row.value)] + 1;
    END LOOP;

    FOR i IN 0..500000 LOOP
      CONTINUE WHEN counts[i] = 0;
      INSERT INTO result (value, count) VALUES (i, counts[i]);
    END LOOP;
  END
$$;
SELECT value, count FROM result;
postgres=> \timing
Timing is on.
postgres=> DO LANGUAGE PLPGSQL $$ ...
DO
Time: 2768.611 ms (00:02.769)
  3. Tweaking query plan configuration. I tried changing enable_seqscan, enable_nestloop, work_mem, and the cost constants, and could not find a configuration that performed better than the original.
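
One more idea, which I have only sketched and not benchmarked: as far as I understand the planner, GROUP BY GROUPING SETS (available since PostgreSQL 9.5) computes several groupings from a single input scan. The subquery below emits one row per key per grouping set, with the key column of the other set NULLed out, and the outer query merges the two outputs; the aliases (abs_value, cnt, sub) are mine, and the COALESCE relies on value never being NULL, which holds for this data set:

SELECT
  COALESCE(value, abs_value) AS value,
  SUM(cnt) AS count
FROM (
  SELECT
    value,                    -- NULL in rows produced for the abs(value) set
    abs(value) AS abs_value,  -- NULL in rows produced for the (value) set
    COUNT(value) AS cnt
  FROM values
  GROUP BY GROUPING SETS ((value), (abs(value)))
) sub
GROUP BY 1;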

Do multiple instances of the Extra Attack feature stack?

The WotC article Unearthed Arcana: Modifying Classes creates the "Favored Soul" sorcerous origin, which grants the sorcerer the Extra Attack feature at level 6.

Say I took a Favored Soul sorcerer up to level 6, and then multiclassed into Ranger to get its Extra Attack at level 5. Would the two Extra Attack features stack so that I could get 3 or 4 attacks in a turn?

Dataset of Hard Instances of SUBSET-SUM

I know that for factoring we have the RSA Numbers, where factoring one of them quickly (usually) indicates a breakthrough in the field. I want to know whether there is something similar for SUBSET-SUM: hard instances that, if solved, would be a "big deal". I found this, but those don't seem to be unsolved.

One way would be to take the RSA numbers, convert them to 3-SAT, and then convert that to SUBSET-SUM, but the weights generated are very large. Maybe there's a way to convert FACTOR (the special case of two prime factors, to be specific) to SUBSET-SUM?

How can I leverage the fact that I'm solving thousands of very similar SMT instances?

I have a core SMT problem consisting of 100,000 bit-vector array clauses and one 10,000-dimensional bit-vector array. My program then takes as input k << 100,000 new clauses and adds them to the core problem. My goal is to solve the resulting problem for any input of k clauses.

Is there any static optimization or learning I could do on the core problem in order to find a better way to solve each of these augmented instances? For instance, is there some property of the graph of bit-vector variables constrained in each clause that I could use as a heuristic for solving the specific instances?


Why do non-creatable/non-destroyable Roblox instances have :Destroy(), :Clone(), etc.?

Roblox has many types of instances, but services and other NonCreatable instances (ReplicatedStorage, Workspace, etc.) still have methods for creating and destroying. Why do they have :Destroy() and :Clone() methods if they cannot be destroyed or created? What's the point of inheriting these from the Instance class?

Is it possible to sync up LMKs in two Thales PayShield 9000 instances?

Basically, when we execute a generate-key command such as A0, we receive a key under LMK for future use. What if we have multiple HSMs in a high-availability configuration? How would we make sure that all keys under LMK mean the same thing to all HSM instances?

The documentation I have doesn’t cover this and I didn’t find anything online about that particular model.

Can you add the same modifier to damage rolls multiple times if the bonuses come from different instances?

A friend of mine wants to do a specific build, and as far as I am aware you cannot apply a modifier multiple times in this way. Can someone confirm whether this is a RAW-legal tactic?

Scenario: level 6 tiefling Celestial warlock, assuming 16 Charisma

Cast shillelagh to make the staff 1d8+Cha [source: Pact of the Tome]

Then cast searing smite for 1d6+Cha [source: tiefling, plus the Celestial warlock's Radiant Soul feature to add the Charisma mod]

Then a green-flame blade attack for another 1d8+Cha [the Celestial warlock's Radiant Soul feature to add the Charisma mod]

A single hit would be 2d8+1d6+9

Do all these instances of charisma modifier stack like this?

Are there instances in which a character can choose to trade in a spell slot for some negative consequences?

Last night, I DMed a game with only two players, neither of whom had any prior experience with D&D. All in all, it went great, but towards the end they got into some serious trouble, and the druid seemed somewhat surprised by the fact that she had used all of her spell slots. I didn't want to punish a beginner too hard, and the only alternatives appeared to be a near-inevitable TPK or some cheesy deus ex machina, so I told her she could try to cast Healing Word despite having used all of her magic power for the day. I let her make a CON save to decide how she could handle the enormous stress of stretching her abilities to such an extent. She rolled quite high, but not extraordinarily high, so I decided that she could indeed successfully cast the spell, but that it might backfire later in some way. I haven't decided the specifics yet, and in order to keep it interesting but balanced, I am looking for something similar to this in an official source book.

I’m aware that I’m well into homebrew territory with that ruling, and that this is not the right site to ask for inspiration. This is why I am specifically asking the following:

Is there any class or racial feature or any item that allows a character with no remaining spell slots to cast a spell of level 1 or higher at the cost of some negative consequence (e.g. taking a level of exhaustion)?

I am not asking for general ways to simply cast spells without expending a spell slot. There has to be some immediate trade-off. Taking 18 wizard levels in order to gain access to Spell Mastery can of course be seen as quite a trade-off, but I hope it is obvious that this is not what I am looking for.