Factorial for binary trees instead of lists

I know how to count permutations within a list (→ n!), but what about binary trees? The subtrees of “siblings” are allowed to have their own order; otherwise it would be trivial.

Here is an example for n = 5:

The root has 5 possibilities.

Each of the 2 children has 4 possibilities.

Each of the 4 grandchildren has 3 possibilities.

Each of the 8 great-grandchildren has 2 possibilities.

And the 16 great-great-grandchildren have no choice left.

How do I calculate the total number for n?
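Written out, I think the pattern above (my own reading of the example, so please correct me if I misread the structure: level k of the tree has 2^k nodes, each with n − k possibilities) amounts to the product

$$\prod_{k=0}^{n-1} (n-k)^{2^k}, \qquad \text{e.g. for } n = 5:\quad 5^{1} \cdot 4^{2} \cdot 3^{4} \cdot 2^{8} \cdot 1^{16} = 1\,658\,880,$$

but I am not sure whether this simplifies to anything as clean as n!.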

Generate certificate v3 instead of v1

I am trying to generate a v3 certificate, but I keep getting v1. I am using the following commands:

openssl req -out server.csr -newkey rsa:2048 -nodes -keyout server.key -config san_server.cnf
openssl ca -config san_server.cnf -create_serial -batch -in server.csr -out server.crt

Configuration file san_server.cnf content:

[ca]
default_ca=CA_default

[CA_default]
dir=./ca
database=$dir/index.txt
new_certs_dir=$dir/newcerts
serial=$dir/serial
private_key=./ca.key
certificate=./ca.crt
default_days=3650
default_md=sha256
policy=policy_anything
copy_extensions=copyall

[policy_anything]
countryName=optional
stateOrProvinceName=optional
localityName=optional
organizationName=optional
organizationalUnitName=optional
commonName=optional
emailAddress=optional

[req]
prompt=no
distinguished_name=req_distinguished_name
req_extensions=v3_req
x509_extensions=v3_ca

[req_distinguished_name]
countryName=EN
stateOrProvinceName=Some-State
localityName=London
organizationName=Internet Widgits Pty Ltd
commonName=192.168.1.8

[v3_req]
subjectAltName=@alt_names

[v3_ca]
subjectAltName=@alt_names

[alt_names]
IP.1=127.0.0.1
IP.2=192.168.1.8
DNS.1=localhost
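For what it is worth, this is how I am checking the version of the resulting certificate (server.crt produced by the commands above):

openssl x509 -in server.crt -noout -text | grep Version
# currently prints "Version: 1 (0x0)"; I expect "Version: 3 (0x2)"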

Why does this code return $\frac{count}{2}$ instead of $count$?

I am reading “Algorithms 4th Edition” by Sedgewick & Wayne.

The following method computes the number of self loops of an undirected graph $G$.
Why does this code return $\frac{count}{2}$ instead of $count$?
I think a self loop appears in the adjacency list only once.

// number of self-loops
public static int numberOfSelfLoops(Graph G) {
    int count = 0;
    for (int v = 0; v < G.V(); v++)
        for (int w : G.adj(v))
            if (v == w) count++;
    return count/2;   // self loop appears in adjacency list twice
}

Thank you for your comment, Zach Langley.
I have now read the following code, and I understand why the code above is correct.

/**
 * Adds the undirected edge v-w to this graph.
 *
 * @param  v one vertex in the edge
 * @param  w the other vertex in the edge
 * @throws IllegalArgumentException unless both {@code 0 <= v < V} and {@code 0 <= w < V}
 */
public void addEdge(int v, int w) {
    validateVertex(v);
    validateVertex(w);
    E++;
    adj[v].add(w);
    adj[w].add(v);
}
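To spell it out for anyone else reading along (my own check, assuming the algs4 Graph(int V) constructor and that numberOfSelfLoops from above is in scope):

// For a self loop v-v, addEdge runs adj[v].add(w) and adj[w].add(v) on the same list,
// so v appears twice in adj[v] and the loop in numberOfSelfLoops sees it twice.
Graph G = new Graph(4);                     // 4 vertices, no edges yet
G.addEdge(3, 3);                            // self loop: adj[3] now contains 3 twice
System.out.println(numberOfSelfLoops(G));   // prints 1, i.e. count/2 = 2/2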

When does a vampire spawn decide to grapple instead of dealing damage?

The vampire spawn’s claw attack reads:

Instead of dealing damage, the vampire can grapple the target (escape DC 13)

When exactly is this decision made? In particular, I am interested in the interaction with the Drunken Master’s Redirect Attack:

When a creature misses you with a melee attack roll, you can spend 1 ki point as a reaction to cause that attack to hit one creature of your choice, other than the attacker, that you can see within 5 feet of you.

Can the vampire spawn choose to simply grapple to avoid hurting an ally after the attack has been redirected?

O(V+E) algorithm for computing chromatic number X(g) of a graph instead of brute-force?

I came up with this O(V+E) algorithm for calculating the chromatic number X(g) of a graph g represented by an adjacency list:

  1. Initialize an array of integers “colors” with V elements, all set to 1.
  2. Using two for loops, go through each vertex i and its adjacency list; for each adjacent node g[i][j], if that node has not been visited yet, increment colors[g[i][j]] by 1. After processing all of i’s neighbours, mark i as visited.
  3. After doing this, the maximum integer in the array “colors” is the chromatic number of the graph g (if the algorithm works).

Here is my C++ code:

#include <bits/stdc++.h>
using namespace std;

struct graph {
    vector<vector<int>> adjL;   // adjacency lists, vertices numbered from 1
    vector<int> colours;
    vector<bool> vis;
};

int chrNUM(graph& G) {
    int num = 1;
    for(int i = 1; i < G.adjL.size(); i ++) {
        for(int j = 0; j < G.adjL[i].size(); j ++) {
            if(!G.vis[G.adjL[i][j]]) {
                G.colours[G.adjL[i][j]] ++;
                num = max(num, G.colours[G.adjL[i][j]]);
            }
        }
        G.vis[i] = true;
    }
    return num;
}

void initGET(graph& G, int N, int M) {
    cin >> N >> M;
    G.adjL.assign(N + 1, vector<int>(0));
    G.colours.assign(N + 1, 1);
    G.vis.assign(N + 1, false);
    for(int i = 0; i < M; i ++) {
        int u, v;
        cin >> u >> v;
        G.adjL[u].push_back(v);
        G.adjL[v].push_back(u);
    }
}

int main() {
    graph g;
    int n = 0;  // number of vertices (read inside initGET)
    int m = 0;  // number of edges (read inside initGET)
    initGET(g, n, m);
    cout << chrNUM(g);
}
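For reference, the program reads the graph from standard input as “N M” followed by M edge lines with 1-indexed vertices. A triangle, for example, would be entered as below and, as far as I can tell from tracing the code, prints 3, which matches X(g) here:

3 3
1 2
2 3
1 3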

I am wondering if there is a flaw. Maybe it works only for certain graphs? Maybe it gives X(g) for smaller graphs but a value higher than X(g) for larger ones? It worked correctly for all the graphs I have tried (up to 20 vertices). I know graph colouring is NP-complete, but I would like some counterexamples to my algorithm if possible, or an explanation of why the method cannot work. I also have a recursive (DFS) solution that is a bit different but mostly similar to this one. Any ideas?

Thanks in advance!

How to stop shrinking mobile web and start scaling instead?

I have a standard desktop site plus a mobile site, using:

<meta name=viewport content="width=device-width, initial-scale=1">
@media (min-width: 1023px) { }
@media (max-width: 1022px) { }

But this website contains an object that is 480px wide. So once the shrinking mobile layout reaches 480px and below, I would like to lock the shrinking and start scaling the screen instead, so that the 480px site is 100% of the screen width without overflowing, even…

How to stop shrinking mobile web and start scaling instead?
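A sketch of the direction I am considering (not sure it is the right approach): give the viewport meta tag an id (the id="viewport" below is my own, hypothetical) and, when the screen is narrower than 480px, switch it from width=device-width to a fixed width=480 so the browser scales the whole 480px layout down to fit.

<meta id="viewport" name="viewport" content="width=device-width, initial-scale=1">
<script>
  // Below 480px of screen width, lock the layout viewport at 480px;
  // the browser then scales the 480px-wide design to fit the device.
  (function () {
    var meta = document.getElementById('viewport');
    if (screen.width < 480) {
      meta.setAttribute('content', 'width=480');
    }
  })();
</script>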

Postgres 12.1 uses Index Only Scan Backward instead of index when LIMIT is present

I have a medium-sized table (~4M rows), “functionCall”, which consists of 2 columns, input and function (both IDs referencing rows in other tables):

  Column  |  Type   | Collation | Nullable | Default
----------+---------+-----------+----------+---------
 input    | integer |           | not null |
 function | integer |           | not null |
Indexes:
    "functionCall_pkey" PRIMARY KEY, btree (input, function) CLUSTER
    "functionCallSearch" btree (function, input)
Foreign-key constraints:
    "fkey1" FOREIGN KEY (function) REFERENCES function(id) ON UPDATE CASCADE ON DELETE CASCADE
    "fkey2" FOREIGN KEY (input) REFERENCES input(id)

I want to find all rows that match a certain function, which is why I added the functionCallSearch index. Here is my query:

SELECT c.input
FROM "functionCall" c
INNER JOIN "function" ON (function.id = c.function)
WHERE function.text LIKE 'getmyinode'
ORDER BY c.input DESC
LIMIT 25 OFFSET 0;

This takes forever (currently ~20s) because Postgres refuses to use the functionCallSearch index and decides to do an Index Only Scan Backward on the primary key instead:

Limit  (cost=0.71..2178.97 rows=25 width=4) (actual time=12903.294..19142.568 rows=8 loops=1)
  Output: c.input
  Buffers: shared hit=59914 read=26193 written=54
  ->  Nested Loop  (cost=0.71..135662.48 rows=1557 width=4) (actual time=12903.292..19142.561 rows=8 loops=1)
        Output: c.input
        Inner Unique: true
        Join Filter: (c.function = function.id)
        Rows Removed by Join Filter: 3649900
        Buffers: shared hit=59914 read=26193 written=54
        ->  Index Only Scan Backward using "functionCall_pkey" on public."functionCall" c  (cost=0.43..80906.80 rows=3650225 width=8) (actual time=0.040..17083.489 rows=3649908 loops=1)
              Output: c.input, c.function
              Heap Fetches: 3649909
              Buffers: shared hit=59911 read=26193 written=54
        ->  Materialize  (cost=0.28..2.30 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=3649908)
              Output: function.id
              Buffers: shared hit=3
              ->  Index Scan using function_text on public.function  (cost=0.28..2.30 rows=1 width=4) (actual time=0.023..0.026 rows=1 loops=1)
                    Output: function.id
                    Index Cond: ((function.text)::text = 'getmyinode'::text)
                    Buffers: shared hit=3
Planning Time: 0.392 ms
Execution Time: 19143.967 ms

When I remove the LIMIT this query is blazingly fast:

Sort  (cost=5247.53..5251.42 rows=1557 width=4) (actual time=3.762..3.763 rows=8 loops=1)
  Output: c.input
  Sort Key: c.input DESC
  Sort Method: quicksort  Memory: 25kB
  Buffers: shared hit=6 read=4
  ->  Nested Loop  (cost=0.71..5164.97 rows=1557 width=4) (actual time=0.099..3.739 rows=8 loops=1)
        Output: c.input
        Buffers: shared hit=6 read=4
        ->  Index Scan using function_text on public.function  (cost=0.28..2.30 rows=1 width=4) (actual time=0.054..0.056 rows=1 loops=1)
              Output: function.id
              Index Cond: ((function.text)::text = 'getmyinode'::text)
              Buffers: shared hit=2 read=1
        ->  Index Only Scan using "functionCallSearch" on public."functionCall" c  (cost=0.43..5103.71 rows=5897 width=8) (actual time=0.039..3.670 rows=8 loops=1)
              Output: c.function, c.input
              Index Cond: (c.function = function.id)
              Heap Fetches: 8
              Buffers: shared hit=4 read=3
Planning Time: 0.514 ms
Execution Time: 3.819 ms

Why is this? And how can I fix this?

I’ve checked https://dba.stackexchange.com/a/249676/106982, but n_distinct is not that far off: pg_stats says n_distinct: 623, while SELECT COUNT(*) FROM (SELECT DISTINCT function FROM "functionCall") t returns 1065.
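For now I am experimenting with the following workaround (just a sketch, not necessarily the right fix): materializing the join first so the planner cannot walk functionCall_pkey backwards to satisfy the ORDER BY … LIMIT; PostgreSQL 12 supports AS MATERIALIZED for this.

-- Workaround sketch: evaluate the (small) join result first,
-- then sort and limit it, instead of scanning the primary key backwards.
WITH hits AS MATERIALIZED (
    SELECT c.input
    FROM "functionCall" c
    INNER JOIN "function" ON (function.id = c.function)
    WHERE function.text LIKE 'getmyinode'
)
SELECT input
FROM hits
ORDER BY input DESC
LIMIT 25 OFFSET 0;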