Weight of common jewelry (rings and necklaces)

Does D&D 5e contain official weights for jewelry? Where can I read up on this?

For example, I can’t find in the books what an ordinary silver ring or gold necklace weighs. Until now I’ve been using real-life weights: the average weight of a piece of metal jewelry, somewhere between 0.2-0.5 kg (roughly 0.4-1 lb).

A table or ‘official rule of thumb’ would definitely be helpful. If 5e leaves this up to the DM, I’d like to borrow such tables from previous editions (if they exist).

Overview of common languages per plane

For my current campaign I’m exploring lore options regarding planar travel for NPCs and PCs. Where in the books can I find more information about which languages are common on which planes?*

I’m aware that the Monster Manual states which languages a creature knows and/or understands. But those entries don’t give me any demographics on how common particular languages are. Or am I missing something in the book?


*My campaign takes place in a personal adaptation of the planes of the Forgotten Realms.

Time complexity – Algorithm to find the lowest common ancestor of all deepest leaves

This is the problem statement I came across today.

Given a binary tree, find the lowest common ancestor of all deepest leaves.

I came up with a correct algorithm, but I would like to confirm the running time of this approach.

The algorithm is as follows:

  1. Traverse the tree and find the deepest level of the tree, dmax.
  2. Traverse the tree and find all leaves at depth dmax.
  3. Given that LCA(A,B,C) = LCA(LCA(A,B),C), fold over the leaves found at step 2, accumulating the LCA pairwise.

The subroutine for LCA(A,B) is simple. Starting at A, go all the way up to the root and store each visited node in a hashset. Then, starting at B, go up until you find a node which is contained in the hashset.
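In code, the whole approach looks roughly like this (a minimal TypeScript sketch; I am assuming nodes carry parent pointers, which the walk-up subroutine needs, and all names here are my own):

```typescript
interface TreeNode {
  left: TreeNode | null;
  right: TreeNode | null;
  parent: TreeNode | null;
}

// Steps 1 and 2 in a single traversal: track the maximum depth seen so far
// and collect the leaves sitting at that depth.
function deepestLeaves(root: TreeNode): TreeNode[] {
  let dmax = -1;
  let leaves: TreeNode[] = [];
  const walk = (node: TreeNode, depth: number): void => {
    if (!node.left && !node.right) {
      if (depth > dmax) { dmax = depth; leaves = [node]; }
      else if (depth === dmax) leaves.push(node);
      return;
    }
    if (node.left) walk(node.left, depth + 1);
    if (node.right) walk(node.right, depth + 1);
  };
  walk(root, 0);
  return leaves;
}

// The hashset subroutine: record every ancestor of A, then climb from B
// until hitting a recorded node. Linear in the height of the tree.
function lca(a: TreeNode, b: TreeNode): TreeNode {
  const ancestors = new Set<TreeNode>();
  for (let n: TreeNode | null = a; n; n = n.parent) ancestors.add(n);
  let m: TreeNode | null = b;
  while (m && !ancestors.has(m)) m = m.parent;
  return m!; // the root is a common ancestor, so the climb always stops
}

// Step 3: fold LCA over the deepest leaves, using LCA(A,B,C) = LCA(LCA(A,B),C).
function lcaOfDeepestLeaves(root: TreeNode): TreeNode {
  return deepestLeaves(root).reduce((acc, leaf) => lca(acc, leaf));
}
```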

I know the first two steps of the algorithm are both O(n), where n corresponds to the number of nodes in the tree. However, I am uncertain about the last step.

The LCA(A,B) subroutine has linear complexity. But how many nodes, in the worst case, can be found at step 2?

Intuitively, I would argue that it has to be far fewer than n nodes.

What are the common practices for weighting tag relations?

I am working on a webapp (fullstack JS) where users create documents and attach tags to them. Users also select a list of tags they are interested in and attach those to their profile.

I am not a math guy, but I have done some NLP as a hobbyist and learned about latent semantic indexing: as I understand it, you build a table storing each pair of words you parse, and then add weight to a pair whenever its two words are found next to each other.

I was thinking of doing the same thing with tags: when two tags appear on the same document or profile, I increase the weight of that pair. That would give me a ranking of the “closest” tags to a given one.
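Here is a minimal sketch of what I have in mind (TypeScript; all names are hypothetical):

```typescript
type PairKey = string; // canonical "tagA|tagB" key with the two tags sorted

const weights = new Map<PairKey, number>();

// Assumes tag names never contain the "|" separator.
function pairKey(a: string, b: string): PairKey {
  return a < b ? `${a}|${b}` : `${b}|${a}`;
}

// Called once per document or profile: bump the weight of every tag pair.
function recordCooccurrences(tags: string[]): void {
  for (let i = 0; i < tags.length; i++) {
    for (let j = i + 1; j < tags.length; j++) {
      const key = pairKey(tags[i], tags[j]);
      weights.set(key, (weights.get(key) ?? 0) + 1);
    }
  }
}

// Rank the tags "closest" to a given tag by accumulated pair weight.
function closestTags(tag: string, limit = 10): [string, number][] {
  const scores: [string, number][] = [];
  for (const [key, w] of weights) {
    const [a, b] = key.split("|");
    if (a === tag) scores.push([b, w]);
    else if (b === tag) scores.push([a, w]);
  }
  return scores.sort((x, y) => y[1] - x[1]).slice(0, limit);
}
```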

Then I remembered that I had come across web graphs, where websites were represented in a 2D space (x and y coordinates) and placed according to their links using a force-directed layout algorithm.

While I do know how I would implement the first idea, I am not sure about the second one. How do I spread out the tag coordinates as tags are created? Do they all start at x: 0, y: 0?
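For concreteness, here is a toy sketch of what I understand a force-directed step to look like (a Fruchterman-Reingold-style spring embedder, starting from random positions rather than all at 0,0; I am not at all sure this is the standard way to do it):

```typescript
interface LayoutNode { id: string; x: number; y: number; }
interface LayoutEdge { a: number; b: number; weight: number; } // indices into nodes

// Start from random positions so the nodes do not all overlap at the origin.
function randomLayout(ids: string[], size = 100): LayoutNode[] {
  return ids.map(id => ({ id, x: Math.random() * size, y: Math.random() * size }));
}

// One relaxation step: all pairs repel, linked pairs attract in proportion
// to their co-occurrence weight.
function step(nodes: LayoutNode[], edges: LayoutEdge[], k = 10, dt = 0.1): void {
  const fx = new Array(nodes.length).fill(0);
  const fy = new Array(nodes.length).fill(0);
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      const dx = nodes[i].x - nodes[j].x;
      const dy = nodes[i].y - nodes[j].y;
      const d = Math.hypot(dx, dy) || 0.01;
      const f = (k * k) / d; // repulsion
      fx[i] += (dx / d) * f; fy[i] += (dy / d) * f;
      fx[j] -= (dx / d) * f; fy[j] -= (dy / d) * f;
    }
  }
  for (const e of edges) {
    const dx = nodes[e.a].x - nodes[e.b].x;
    const dy = nodes[e.a].y - nodes[e.b].y;
    const d = Math.hypot(dx, dy) || 0.01;
    const f = ((d * d) / k) * e.weight; // attraction along an edge
    fx[e.a] -= (dx / d) * f; fy[e.a] -= (dy / d) * f;
    fx[e.b] += (dx / d) * f; fy[e.b] += (dy / d) * f;
  }
  for (let i = 0; i < nodes.length; i++) {
    nodes[i].x += fx[i] * dt;
    nodes[i].y += fy[i] * dt;
  }
}
```

As far as I can tell, running step in a loop until positions settle is what actually spreads the nodes out, rather than the initial placement.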

Since I assume this is a common data-organization problem, I wonder what common or best practices people in the field would recommend.

Are there documents, articles, libraries (npm?), or Wikipedia pages you could point me to that would help me understand what can or should ideally be done? Is my first option a good default?

Also, please let me know in the comments if I should add or remove a tag on this question or edit its title; I’m not even sure how to categorize it.

Is there an updated (non-Nmap) top 100 or top 1000 common ports list?

I know Nmap ships an nmap-services file that lists the top 1000 ports/services found on the Internet. But this list seems outdated: the Nmap top 1000 doesn’t include several services in common use nowadays (like 27017/MongoDB, 6379/Redis, 11211/memcached, etc.). Is there any source other than Nmap that provides an updated list of the top 1000 common ports/services used on the Internet?
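For reference, each entry in nmap-services is a service name, a port/protocol pair, and an “open-frequency” derived from Nmap’s own scan data, along the lines of (frequency values shown only to illustrate the format):

```
http	80/tcp	0.484143	# World Wide Web HTTP
ssh	22/tcp	0.182286	# Secure Shell Login
```

It is this frequency column that determines which ports make the top 1000.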

Confluence to show equivalent terms have one common reduct

In Lemma 30.3.9, Pierce states a confluence property for $F_\omega$:

$S \to_* T \land S \to_* U \implies \exists V.\ T \to_* V \land U \to_* V$

He then states the following proposition:

$S \leftrightarrow_* T \implies \exists U.\ S \to_* U \land T \to_* U$

However, he doesn’t use the confluence property above to prove it, and I remember the same being true of other books on term rewriting systems that I have read. Yet to me it looks very simple to prove using the confluence lemma.

From $S \leftrightarrow_* T$ one has $S \to_* T$ and $S \to_* T \to_* S$, thus by confluence $\exists U.\ S \to_* U \land T \to_* U$.

Why is this approach not correct?

Git merge: the lowest common ancestor problem

How does Git find the ‘merge base’ of two branches in a commit graph? From what I understand, this reduces to finding the lowest common ancestor in a directed acyclic graph. After reading up on several algorithms for doing this, I am wondering how Git solves the LCA problem.
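To make the reduction concrete, here is a naive sketch of what I mean (TypeScript; this is certainly not Git’s actual internal implementation):

```typescript
interface Commit {
  id: string;
  parents: Commit[]; // merge commits have more than one parent
}

// Every commit reachable from `start` by following parent links, inclusive.
function ancestors(start: Commit): Set<Commit> {
  const seen = new Set<Commit>();
  const stack = [start];
  while (stack.length > 0) {
    const c = stack.pop()!;
    if (seen.has(c)) continue;
    seen.add(c);
    stack.push(...c.parents);
  }
  return seen;
}

// Common ancestors of two heads; a merge base is then a common ancestor
// that is not itself an ancestor of another common ancestor.
function commonAncestors(a: Commit, b: Commit): Set<Commit> {
  const ofA = ancestors(a);
  return new Set([...ancestors(b)].filter(c => ofA.has(c)));
}
```

I know the command for this is `git merge-base <a> <b>` (with `--all` to list every best common ancestor); my question is about the algorithm behind it.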

(Apologies if this is more of a Stack Overflow question. I am, however, more interested in the internal algorithm Git uses, so it may be appropriate for this forum.)