## What is the correct interpretation of the Gambling Results table in Xanathar’s Guide to Everything?

In Xanathar’s Guide to Everything, one of the downtime options provided in “Downtime, Revised” allows a character to gamble during their downtime to earn extra money.

### Gambling

Games of chance are a way to make a fortune—and perhaps a better way to lose one.

[…]

Gambling Results

| Result      | Value |
| ----------- | ----- |
| 0 Successes | Lose all the money you bet, and accrue a debt equal to that amount. |
| 1 Success   | Lose half the money you bet. |
| 2 Successes | Gain the amount you bet plus half again more. |
| 3 Successes | Gain double the amount you bet. |

—Xanathar’s Guide to Everything, pg. 130

So if I place a bet of 100 gp and make my checks against this table, how much would I have for each category?
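To make the ambiguity concrete, here is the arithmetic for a 100 gp bet under the two readings of "gain" that people usually argue about (the dictionaries and names below are mine, not from the book):

```python
BET = 100  # gp

# Reading A: "gain" is profit on top of getting your stake back.
net_if_gain_is_profit = {
    0: -2 * BET,        # lose the bet and accrue an equal debt
    1: -BET // 2,       # lose half the bet
    2: BET + BET // 2,  # "the amount you bet plus half again more"
    3: 2 * BET,         # "double the amount you bet"
}

# Reading B: "gain" is the total paid out, stake included,
# so the net change on a win is the payout minus the stake.
net_if_gain_is_payout = {
    k: (v if v < 0 else v - BET) for k, v in net_if_gain_is_profit.items()
}

for successes in range(4):
    print(successes, net_if_gain_is_profit[successes], net_if_gain_is_payout[successes])
```

Which of these two readings is correct is exactly what I'm asking.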

## PostgreSQL: query slow, planner estimates 0.01-0.1 of actual results

I will preface this by saying I’m not well-versed in SQL at all. I mainly work with ORMs, and this recent headache has brought me to dive into the world of queries, planners, etc.

A very common query is behaving weirdly on my website. I have tried various techniques to solve it, but nothing really helps, except narrowing down the release_date range from 30 days to 7 days. However, from my understanding, the tables involved aren’t very big, and PostgreSQL should satisfy this query in an acceptable time.

Some statistics:

• core_releasegroup row count: 3,240,568
• core_artist row count: 287,699
• core_subscription row count: 1,803,960

Relationships:

Each ReleaseGroup has a many-to-many relationship with Artist, and each Artist has a many-to-many relationship with UserProfile through Subscription. I’m using Django, which auto-creates indexes on foreign keys, etc.

Here’s the query:

```sql
SELECT "core_releasegroup"."id", "core_releasegroup"."title", "core_releasegroup"."type",
       "core_releasegroup"."release_date", "core_releasegroup"."applemusic_id",
       "core_releasegroup"."applemusic_image", "core_releasegroup"."geo_apple_music_link",
       "core_releasegroup"."amazon_aff_link", "core_releasegroup"."is_explicit",
       "core_releasegroup"."spotify_id", "core_releasegroup"."spotify_link"
FROM "core_releasegroup"
INNER JOIN "core_artist_release_groups"
        ON ("core_releasegroup"."id" = "core_artist_release_groups"."releasegroup_id")
WHERE ("core_artist_release_groups"."artist_id" IN
           (SELECT U0."artist_id" FROM "core_subscription" U0 WHERE U0."profile_id" = 1)
       AND "core_releasegroup"."type" IN ('Album', 'Single', 'EP', 'Live', 'Compilation', 'Remix', 'Other')
       AND "core_releasegroup"."release_date" BETWEEN '2020-08-20'::date AND '2020-10-20'::date);
```

Here’s the table schema:

```sql
CREATE TABLE public.core_releasegroup (
    id integer NOT NULL,
    created_date timestamp with time zone NOT NULL,
    modified_date timestamp with time zone NOT NULL,
    title character varying(560) NOT NULL,
    type character varying(30) NOT NULL,
    release_date date,
    applemusic_id character varying(512),
    applemusic_image character varying(512),
    applemusic_link character varying(512),
    spotify_id character varying(512),
    spotify_image character varying(512),
    spotify_link character varying(512),
    is_explicit boolean NOT NULL,
    spotify_last_refresh timestamp with time zone,
    spotify_next_refresh timestamp with time zone,
    geo_apple_music_link character varying(512),
    amazon_aff_link character varying(620)
);
```

I have indexes on type, release_date, and applemusic_id.

Here’s the PostgreSQL execution plan (notice the estimates):

```
Nested Loop  (cost=2437.52..10850.51 rows=4 width=495) (actual time=411.911..8619.311 rows=362 loops=1)
  Buffers: shared hit=252537 read=29104
  ->  Nested Loop  (cost=2437.09..10578.84 rows=569 width=499) (actual time=372.265..8446.324 rows=36314 loops=1)
        Buffers: shared hit=143252 read=29085
        ->  Bitmap Heap Scan on core_releasegroup  (cost=2436.66..4636.70 rows=567 width=495) (actual time=372.241..7707.466 rows=32679 loops=1)
              Recheck Cond: ((release_date >= '2020-08-20'::date) AND (release_date <= '2020-10-20'::date) AND ((type)::text = ANY ('{Album,Single,EP,Live,Compilation,Remix,Other}'::text[])))
              Heap Blocks: exact=29127
              Buffers: shared hit=10222 read=27872
              ->  BitmapAnd  (cost=2436.66..2436.66 rows=567 width=0) (actual time=366.750..366.750 rows=0 loops=1)
                    Buffers: shared hit=15 read=8952
                    ->  Bitmap Index Scan on core_releasegroup_release_date_03a267f7  (cost=0.00..342.46 rows=16203 width=0) (actual time=8.834..8.834 rows=32679 loops=1)
                          Index Cond: ((release_date >= '2020-08-20'::date) AND (release_date <= '2020-10-20'::date))
                          Buffers: shared read=92
                    ->  Bitmap Index Scan on core_releasegroup_type_58b6243d_like  (cost=0.00..2093.67 rows=113420 width=0) (actual time=355.071..355.071 rows=3240568 loops=1)
                          Index Cond: ((type)::text = ANY ('{Album,Single,EP,Live,Compilation,Remix,Other}'::text[]))
                          Buffers: shared hit=15 read=8860
        ->  Index Scan using core_artist_release_groups_releasegroup_id_cea5da71 on core_artist_release_groups  (cost=0.43..10.46 rows=2 width=8) (actual time=0.018..0.020 rows=1 loops=32679)
              Index Cond: (releasegroup_id = core_releasegroup.id)
              Buffers: shared hit=133030 read=1213
  ->  Index Only Scan using core_subscription_profile_id_artist_id_a4d3d29b_uniq on core_subscription u0  (cost=0.43..0.48 rows=1 width=4) (actual time=0.004..0.004 rows=0 loops=36314)
        Index Cond: ((profile_id = 1) AND (artist_id = core_artist_release_groups.artist_id))
        Heap Fetches: 362
        Buffers: shared hit=109285 read=19
Planning Time: 10.951 ms
Execution Time: 8619.564 ms
```

Please note that the above is a stripped-down version of the actual query I need. Because of its unbearable slowness, I’ve reduced this query to a bare minimum and reverted to doing some filtering and ordering of the returned objects in Python (which I know is usually slower). As you can see, it’s still very slow.

After a while, probably because the memory/cache is now populated, this query becomes much faster:

```
Nested Loop  (cost=2437.52..10850.51 rows=4 width=495) (actual time=306.337..612.232 rows=362 loops=1)
  Buffers: shared hit=241776 read=39865 written=4
  ->  Nested Loop  (cost=2437.09..10578.84 rows=569 width=499) (actual time=305.216..546.749 rows=36314 loops=1)
        Buffers: shared hit=132503 read=39834 written=4
        ->  Bitmap Heap Scan on core_releasegroup  (cost=2436.66..4636.70 rows=567 width=495) (actual time=305.195..437.375 rows=32679 loops=1)
              Recheck Cond: ((release_date >= '2020-08-20'::date) AND (release_date <= '2020-10-20'::date) AND ((type)::text = ANY ('{Album,Single,EP,Live,Compilation,Remix,Other}'::text[])))
              Heap Blocks: exact=29127
              Buffers: shared hit=16 read=38078 written=4
              ->  BitmapAnd  (cost=2436.66..2436.66 rows=567 width=0) (actual time=298.382..298.382 rows=0 loops=1)
                    Buffers: shared hit=16 read=8951
                    ->  Bitmap Index Scan on core_releasegroup_release_date_03a267f7  (cost=0.00..342.46 rows=16203 width=0) (actual time=5.619..5.619 rows=32679 loops=1)
                          Index Cond: ((release_date >= '2020-08-20'::date) AND (release_date <= '2020-10-20'::date))
                          Buffers: shared read=92
                    ->  Bitmap Index Scan on core_releasegroup_type_58b6243d_like  (cost=0.00..2093.67 rows=113420 width=0) (actual time=289.917..289.917 rows=3240568 loops=1)
                          Index Cond: ((type)::text = ANY ('{Album,Single,EP,Live,Compilation,Remix,Other}'::text[]))
                          Buffers: shared hit=16 read=8859
        ->  Index Scan using core_artist_release_groups_releasegroup_id_cea5da71 on core_artist_release_groups  (cost=0.43..10.46 rows=2 width=8) (actual time=0.003..0.003 rows=1 loops=32679)
              Index Cond: (releasegroup_id = core_releasegroup.id)
              Buffers: shared hit=132487 read=1756
  ->  Index Only Scan using core_subscription_profile_id_artist_id_a4d3d29b_uniq on core_subscription u0  (cost=0.43..0.48 rows=1 width=4) (actual time=0.002..0.002 rows=0 loops=36314)
        Index Cond: ((profile_id = 1) AND (artist_id = core_artist_release_groups.artist_id))
        Heap Fetches: 362
        Buffers: shared hit=109273 read=31
Planning Time: 1.088 ms
Execution Time: 612.360 ms
```

This is still slow in SQL terms (I guess?), but much more acceptable. The problem is that even though this is a very common view in my web app (an often-executed query), it is still not kept around in RAM/cache, so I see these huge response-time spikes too often.

I’ve tried every combination of constructing these queries that I can think of. My last attempt is to see whether the planner estimates are to blame here, and whether they’re fixable. If not, I’ll start considering denormalization.
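For what it's worth, the BitmapAnd row estimate of 567 in the plan can be reproduced by hand: the planner multiplies the selectivities of the release_date and type conditions as if they were independent. The numbers below are copied from the plan output; the arithmetic itself is my own reconstruction:

```python
# Row counts taken from the plan above.
total_rows = 3_240_568     # core_releasegroup row count
est_date_rows = 16_203     # planner's estimate for the release_date range
est_type_rows = 113_420    # planner's estimate for the type IN (...) list

# Independence assumption: combined selectivity = product of the two.
combined = total_rows * (est_date_rows / total_rows) * (est_type_rows / total_rows)
print(round(combined))     # 567 - matches the BitmapAnd row estimate

# In reality the type list matches every row (the type index scan actually
# returned all 3,240,568 rows), so the true count is just the date-range
# count: 32,679 - roughly 58x the estimate.
```

This suggests the stale/generic per-column statistics for `type` are a big part of the mis-estimate, independently of caching.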

Or is there something else I’m missing?

## Add post featured image to default WP search results page function snippet

I did search a lot for such a snippet, but did not find any. Could anyone assist?

## Which function results from primitive recursion of the functions g and h?

Which function results from primitive recursion of the functions $$g$$ and $$h$$?

1. $$f_1=PR(g,h)$$ with $$g=succ\circ zero_0, h=zero_2$$
2. $$f_2=PR(g,h)$$ with $$g=zero_0, h=f_1\circ P_1^{(2)}$$
3. $$f_3=PR(g,h)$$ with $$g=P_1^{(2)}, h=P_2^{(4)}$$
4. $$f_4=PR(g,h)$$ with $$g=f_3\left(f_1(x),succ(x),f_2(x)\right)$$

(1.) $$g:N^0\to N$$, $$h:N^2\to N$$
$$f(0)=1$$
$$f(0+1)=h(0,f(0))=h(0,1)=0$$
$$f(1+1)=h(1,f(1))=h(1,0)=0$$
$$\forall n\in N_{>0}:f(n+1)=h(n,f(n))=0$$, $$f_1$$ is defined as $$f_1:N^1\to N$$ with $$f_1(x)=\begin{cases}1, & x=0\\ 0, & x>0\end{cases}$$

(2.) $$g:N^0\to N$$, $$h:N^2\to N$$
$$f(0)=0$$
$$f(0+1)=h(0,f(0))=h(0,0)=1$$
$$f(1+1)=h(1,f(1))=h(1,1)=0$$
$$\forall n\in N_{>0}: f(n+1)=h(n,f(n))=0$$, $$f_2$$ is defined the same as $$f_1$$, $$f_1(x)=f_2(x)$$

(3.) $$g:N^2\to N$$, $$h:N^4\to N$$
$$f(x,y,0)=x$$
$$f(x,y,0+1)=h(x,y,0,f(x,y,0))=h(x,y,0,x)=y$$
$$f(x,y,1+1)=h(x,y,1,f(x,y,1))=h(x,y,1,y)=y$$
$$\forall z \in N_{>0}: f(x,y,z+1)=h(x,y,z,f(x,y,z))=y$$, $$f_3$$ is defined as $$f_3:N^3\to N$$ with $$f_3(x,y,z)=\begin{cases}x, & z=0\\ y, & z>0\end{cases}$$

Is this correct up to here? It looks way too easy, which is why I’m not sure.
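As a sanity check, the PR schema can also be evaluated mechanically. This is my own throwaway encoding (function names are mine, not part of the exercise), which makes it easy to compare the value tables, e.g. of $$f_1$$ and $$f_2$$:

```python
def PR(g, h):
    """Primitive recursion: f(xs, 0) = g(xs); f(xs, n+1) = h(xs, n, f(xs, n))."""
    def f(*args):
        *xs, n = args
        acc = g(*xs)
        for i in range(n):
            acc = h(*xs, i, acc)
        return acc
    return f

f1 = PR(lambda: 1, lambda n, prev: 0)              # g = succ∘zero_0, h = zero_2
f2 = PR(lambda: 0, lambda n, prev: f1(n))          # g = zero_0, h = f1∘P_1^(2)
f3 = PR(lambda x, y: x, lambda x, y, z, prev: y)   # g = P_1^(2), h = P_2^(4)

print([f1(n) for n in range(4)])   # [1, 0, 0, 0]
print([f2(n) for n in range(4)])   # [0, 1, 0, 0] -- compare against f1's table
print(f3(7, 9, 0), f3(7, 9, 3))    # 7 9
```

Note that the printed tables let you check the claim that $$f_1(x)=f_2(x)$$ directly against the base cases $$f_1(0)=1$$ and $$f_2(0)=0$$ derived above.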

## I’ve indexed my pages in Search Console, but Google doesn’t show them in search results [duplicate]

I have submitted all of my web pages to Google Search Console, but Google does not show them in search results.

My website is: https://voyage-actualite.com/

Can you help me, please? Look at the latest articles; they won’t show up in search results.

Thank you

## Using an example to comprehend why “safely” erasing a drive yields better results than filling it up with meaningless data

A hypothetical 1GB USB stick is full of sensitive documents/images/etc. and it is not encrypted.

The owner wishes to discard it and is aware of having to safely erase it first.

There are several tools and utilities to do this. Some can be configured to do it “faster yet less safely”, others do it “slower but more safely”.

Instead of erasing it using any of the different ways known to do this, the owner chooses to simply drag all the current items to the recycle bin and then paste a single 1GB (~2-hour) black-screen movie file onto the USB stick.

Again, no fancy erase utilities are used. The USB stick is then discarded.

If it falls into the wrong hands, can any of the sensitive files (that filled the stick before the movie file was pasted) be retrieved?

(1) If no, why do complex hard-drive erase utilities exist? Some of them feature “safe” erase procedures that take hours, when simply filling a soon-to-be-discarded HD with meaningless files can do the job.

(2) If yes, how can 2GB (movie file + sensitive files) co-exist on a 1GB stick? It seems to me the only logical explanations are that (a) the movie file was in fact less than 1GB, (b) the USB stick was secretly larger than the stated 1GB, or (c) the movie file was only partially copied and the owner did not notice.

## Google Search Console not showing proper other pages results (they are indexed)

I have a website, and I also have a WordPress blog at the /blog route of that site. They are kind of independent for now, since I haven’t done any internal linking between them. I created a new post on my blog and submitted it for indexing; it got positive results and it’s also showing up on Google, but I can’t see its results in Google Search Console. Does anyone know why?

Side note: I haven’t submitted a sitemap for now. (If it’s because of the sitemap, is it possible to view results without one?)

## Adder-Subtractor Circuit With Negative Results

So, I understand how binary arithmetic works, and I understand how an adder-subtractor works for signed numbers. There is only one thing I am not sure about:

All the cases work OK in the circuit I have, except that if the result of a subtraction is negative, I need to take the two’s complement of the output byte to get the actual result. What can I do about it? Do I need an extra array of adders to compute the two’s complement only in that specific case, or is there a smarter solution I can apply?
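For reference, here is a small Python model of the usual XOR-based 8-bit adder-subtractor (my sketch, not necessarily your exact circuit): for subtraction, each B bit is XORed with the sub line and sub is fed in as the initial carry, so the adder computes A + ~B + 1 = A - B.

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1  # 0xFF

def add_sub(a: int, b: int, sub: int) -> int:
    """Model of an adder-subtractor: sub=0 computes a+b, sub=1 computes a-b."""
    b_in = b ^ (MASK if sub else 0)   # XOR gates invert B when subtracting
    return (a + b_in + sub) & MASK    # sub doubles as the initial carry-in

out = add_sub(5, 12, 1)   # 5 - 12
print(hex(out))           # 0xf9: the two's-complement bit pattern for -7
print(out - (1 << WIDTH) if out & 0x80 else out)   # reinterpret as signed: -7
```

At least in this model, the output byte is already the correct two's-complement result; only the interpretation of the bits changes. If a sign-magnitude display is required instead, the standard trick is a conditional negation stage (a row of XORs gated by the sign bit, plus that bit as a carry-in), not a full second adder array.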

## Can’t understand difference in fulltext results – contains, contains with wildcard, freetext

I have a table with a full-text index on the column named Filecontent. The table has a row whose content contains "W 917". For context, the content column in this row contains much more than just what I’m searching for.

I don’t understand why I’m getting different results depending on whether I use CONTAINS, CONTAINS with a wildcard, or FREETEXT. Why does CONTAINS without a wildcard return results, while CONTAINS with a wildcard doesn’t?

```sql
-- Searching for "W 917"

-- No match - CONTAINS with wildcard
SELECT * FROM InvoicePDFContent t1 WHERE CONTAINS(t1.Filecontent, '"W 917*"')

-- Match - CONTAINS
SELECT * FROM InvoicePDFContent t1 WHERE CONTAINS(t1.Filecontent, '"W 917"')

-- Match - FREETEXT
SELECT * FROM InvoicePDFContent t1 WHERE FREETEXT(t1.Filecontent, '"W 917"')

-- Searching for "W"

-- Match - CONTAINS with wildcard
SELECT * FROM InvoicePDFContent t1 WHERE CONTAINS(t1.Filecontent, '"W*"')

-- No match - CONTAINS
SELECT * FROM InvoicePDFContent t1 WHERE CONTAINS(t1.Filecontent, '"W"')

-- No match - FREETEXT
SELECT * FROM InvoicePDFContent t1 WHERE FREETEXT(t1.Filecontent, '"W"')
```

## How could I make the results of a yes/no vote inaccessible unless it’s unanimous in the affirmative, without a trusted third party?

A family of N people (where N >= 3) are members of a cult. A suggestion is floated anonymously among them to leave the cult. If, in fact, every single person secretly harbors the desire to leave, it would be best if the family knew about that so that they could be open with each other and plan their exit. However, if this isn’t the case, then the family would not want to know the actual results, in order to prevent infighting and witch hunting.

Therefore, is there some scheme by which, if everyone in the family votes yes, the family knows, but all other results (all no, any combination of yes and no) are indistinguishable from each other for all family members?

Some notes:

• N does have to be at least 3 – N=1 is trivial, and N=2 is impossible, since a yes voter can know the other person’s vote depending on the result.
• The anonymous suggester is not important – it could well be someone outside the family, such as someone distributing propaganda.
• It is important that all no is indistinguishable from mixed yes and no – we do not want the family to discover that there is some kind of schism. However, if that result is impossible, I’m OK with a result where any unanimous result is discoverable, but any mixed vote is indistinguishable.