Are there alternative XP/Level Tables in 5e?

In an old tweet from Mearls, he states that the Dungeon Master's Guide would contain a "quick progression XP table" (as well as a slow progression, by the way). However, the Experience Points section (p. 260) of the DMG has nothing of the sort, and I do not remember seeing anything like that elsewhere in the DMG. Instead, we have alternative ways of rewarding experience or levels, such as milestones, but these aren't clearly "slower" or "faster"; they are entirely different approaches. Similarly, the other book with lots of DM tools, Xanathar's Guide to Everything, also does not contain any information on alternative slower/faster progressions (at least I could not find any).

Still, a fast/slow, official/playtested table would be interesting from my point of view. There were campaigns where I wanted the PCs to level a little more slowly, and others where I wanted them to level faster; in those cases I would just guess numbers that seemed sensible to me and adjust them on the run, which is usually not the best approach.

So, is there any published Experience/Level table, or, equivalently, an Experience/CR table (i.e., in a quick progression style, monsters of the same CR would reward more experience, and in a slower mode, less experience)? If not, was something like this printed in any Unearthed Arcana, at least?

Benefits of storing columns in JSON instead of traditional tables?

Are there any benefits in using JSON over traditional table structures?

Imagine having a table structure like this:

    create table table1 (
        t_id int,
        first_name varchar(20),
        last_name varchar(20),
        age int
    )

What if you stored the same columns inside a JSON document like this:

    {
        "first_name": "name",
        "last_name": "name",
        "age": 2
    }

and have a table like this:

    create table table2 (
        t_id int,
        attribute jsonb
    )

Correct me if I'm wrong, but since both variants cause the row to be completely rewritten whenever it is updated or deleted, the two should be identical in that regard.
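To make the comparison concrete, this is the kind of access I have in mind (PostgreSQL syntax; the queries themselves are just illustrative):

    -- plain columns: types are known, and ordinary statistics/indexes apply
    SELECT first_name, age FROM table1 WHERE age > 30;

    -- jsonb: every access goes through extraction operators, and values
    -- come back as text, so they need casting
    SELECT attribute->>'first_name', (attribute->>'age')::int
    FROM table2
    WHERE (attribute->>'age')::int > 30;

    -- changing one "column" inside jsonb rewrites the whole document
    UPDATE table2
    SET attribute = jsonb_set(attribute, '{age}', '31'::jsonb)
    WHERE t_id = 1;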

Object with many-to-many relationships with multiple tables

Here’s an example use case that matches what I’m trying to understand better. Say I have 3 objects I need to deal with that are similar in many ways, such as invoice, PO, and receipt. They can all have a number, an amount, etc. They can also have many similar relationships, such as line items, attached images, etc. I can think of a number of ways of modeling this.

Option 1
Make everything separate. Have tables for invoice, invoice_line, invoice_image, then po, po_line, po_image, etc. This means strong referential integrity with foreign keys and the smallest possible number of joins, but a ton of duplication across tables.

Option 2
Have a parent document table with the common fields and a type field, then have the invoice, po, and receipt tables each hold a foreign key to document_id. I can then have a single document_image table. For the lines there are again some differences but many similarities, so I could have a document_line table with a foreign key to document_id, plus invoice_line, po_line, and receipt_line tables with foreign keys to document_id. Here we have less duplication and keep referential integrity with foreign keys, but we start needing many more joins to get all the info: if I had an invoice line item and wanted all the info, I’d need to join invoice_line to document_line, invoice, and document.

Option 3
Use separate invoice/po/receipt tables, but for the image relationship (or any other) add multiple nullable foreign keys, so image would have nullable invoice_id, po_id, and receipt_id foreign keys. With this we can still enforce referential integrity, but several usually-NULL fields pollute the table, and we can no longer make a genuinely required reference required, because all three keys need to be nullable. We do cut down on duplication.

Option 4
Use separate invoice/po/receipt tables, but for the image relationship (or any other) have a type field and an fk_id field. This way I don’t need multiple many-to-many tables, which cuts down on duplication, especially if there are lots of these relationships, since otherwise each one would mean three tables. I like this option the least because you can’t have a foreign key; I’m pretty much not even considering it a valid option.

I’m leaning towards option 1 or 2 (option 2 is sketched below); options 3 and 4 seem like bad design, and I’d likely only consider them if somebody explained that they bring major performance benefits. For option 1, even though there’s duplication, you can get around it with code generation, so maybe it’s not such a big deal. But I would be interested to know whether there’s a major advantage to breaking things down as in option 2.
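To make option 2 concrete, here is a rough sketch of the layout I have in mind (all names are illustrative, and I’ve only included a couple of type-specific columns):

    create table document (
        document_id int primary key,
        doc_type    varchar(10) not null,   -- 'invoice', 'po', or 'receipt'
        doc_number  varchar(30) not null,
        amount      numeric(12, 2)
    );

    create table invoice (
        document_id int primary key references document (document_id),
        due_date    date                     -- invoice-specific field
    );

    create table document_image (
        image_id    int primary key,
        document_id int not null references document (document_id),
        uri         varchar(255) not null
    );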

How to improve WordPress MySQL performance on large tables?

I’ve installed WordPress 5.4.1 on Ubuntu 20.04 LTS on AWS EC2 (free tier as I’m starting).

My instance has 30 GB of disk space and 1 GB of RAM.

My website has about 9,000 pages and I’ve imported 7,800 so far with the “HTML Import 2” plugin.

The wp_posts table has 7,800 rows and is 66 MB in size, and since this table has grown, WordPress has become super slow. Any change I make to the database is super slow as well.

While trying to make changes, I keep getting this error:

Warning: mysqli_real_connect(): (HY000/2002): No such file or directory in /var/www/wordpress/wp-includes/wp-db.php on line 1626 No such file or directory

Error reconnecting to the database. This means that we lost contact with the database server at localhost. This could mean your host’s database server is down.

What could I do to achieve better speed and make the site usable?
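My current guess (unverified) is that mysqld is being killed when the instance runs out of memory: the 2002 "No such file or directory" error means PHP cannot find the MySQL socket, i.e. the server is not running at that moment. For reference, this is the kind of tuning I’ve been looking at for a 1 GB host (illustrative values, not something I’ve validated):

    # /etc/mysql/mysql.conf.d/mysqld.cnf -- illustrative values for 1 GB RAM
    [mysqld]
    innodb_buffer_pool_size = 128M   # keep the default; don't raise it on 1 GB
    max_connections         = 40     # each connection costs memory
    performance_schema      = OFF    # can save a few hundred MB on small hosts

and adding a swap file so the OOM killer leaves mysqld alone:

    sudo fallocate -l 1G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile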

MariaDB Replication – Replicate only specific tables and views

Note: A backend developer here with little to no experience in setting up database servers and replication.

What needs to be done

Set up DB replication of an existing database with the following constraints:

  1. Replicate only a specific list of tables/views (which will have different names in the replicated database).
  2. Change the name of the tables/views in the replicated database (during the replication process).
  3. Set up a user on the replicated DB, with further restrictions, through which only a set of tables/views can be viewed/updated/deleted.

Progress so far

I have already read the documentation here; however, I did not find anything concrete to help me move forward with all the use cases I wish to support!

Use Case

Show only essential data to the external vendor.

PS: If there are approaches other than replication, I would be happy to consider and implement those as well.
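For reference, the closest I could find in the docs (this is my reading, which may be off): replication filters on the replica can limit replication to specific tables and can rewrite the database name, but I don’t see any way to rename individual tables during replication. A sketch with illustrative names:

    # replica's my.cnf
    [mysqld]
    server_id            = 2
    replicate_rewrite_db = vendor_src->vendor_pub   # rewrites the database name only
    replicate_do_table   = vendor_pub.orders        # as I understand it, the rewrite
    replicate_do_table   = vendor_pub.customers     # is applied before these filters

For constraint 3, a restricted account on the replica seems independent of replication itself:

    CREATE USER 'vendor'@'%' IDENTIFIED BY 'change_me';
    GRANT SELECT ON vendor_pub.orders TO 'vendor'@'%';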

Is there a way to group often-used tables and queries in SSMS?

I am not a DBA by training, but due to role shifts I am having to modify and maintain tables and stored procedures fairly often. I would like to visually group tables and stored procedures together in a sort of shortcuts folder, so that when I need to open several related views I don’t have to scroll all over the database. Is something like this possible?

Sorry if this is a dumb question. I am just trying to make this part of my job a little less tedious.

Local variables in sums and tables – best practices?

I stumbled on "Local variables when defining function in Mathematica" on math.SE and decided to ask it here. Apologies if it is a duplicate – the only really relevant question with a detailed answer I could find here is "How to avoid nested With[]?", but I find it somewhat too technical, and not really the same in essence.

Briefly, things like f[n_]:=Sum[Binomial[n,k],{k,0,n}] are very dangerous, since you never know when you will use a symbolic k: say, f[k-1] evaluates to 0. This was actually a big surprise to me: for some reason I thought that summation variables and the dummy variables in constructs like Table are automatically localized!

As discussed in the answers there, it is not entirely clear what to use here: Module is completely OK but would share variables across stack frames, and Block does not solve the problem. There were also suggestions to use Unique or Formal symbols.

What is the optimal solution? Is there an option to automatically localize dummy variables somehow?
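To illustrate, here is a minimal reproduction of the problem and the Module-based localization I currently fall back on (my own sketch, not taken from the linked answers):

    (* dangerous: k is global, so a symbolic argument containing k
       collides with the iterator *)
    f[n_] := Sum[Binomial[n, k], {k, 0, n}]
    f[k - 1]   (* evaluates to 0 instead of 2^(k - 1) *)

    (* Module gives the iterator a fresh name at each call *)
    g[n_] := Module[{j}, Sum[Binomial[n, j], {j, 0, n}]]
    g[k - 1]   (* 2^(k - 1) *)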

Are elements of the Hash Table’s backing array Linked Lists from the initial point when using Separate Chaining?

As usual, I did quite a bit of research in different books and academic articles, but I can’t really get a clear picture.

For hash collision resolution in Hash Tables, one very popular strategy is called Separate Chaining.

I’m aware that in the Separate Chaining strategy, elements that collide because they hash to the same index are stored in (or end up forming) Linked Lists.

One instructor even said that:

Elements of the backing array in separate chaining, are linked lists.

My question is the following: is each element of the backing array a Linked List from the moment the Hash Table is created (when implementing the separate chaining strategy), or does a slot get converted to a Linked List only after the first collision there? Having a Linked List in every element of the backing array means that each of those Linked Lists holds elements which, in turn, are Entries/Buckets of key-value pairs. That would consume a lot of memory and resources, I reckon.
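For illustration, this is roughly what I mean by the "lazy" variant, where a slot stays empty until the first key lands in it (a hand-written Java sketch, not the source of any real library):

    import java.util.LinkedList;

    public class ChainedHashTable<K, V> {
        private static class Entry<K, V> {
            final K key;
            V value;
            Entry(K key, V value) { this.key = key; this.value = value; }
        }

        // slots start as null; no list objects are allocated up front
        private final LinkedList<Entry<K, V>>[] buckets;

        @SuppressWarnings("unchecked")
        public ChainedHashTable(int capacity) {
            buckets = new LinkedList[capacity];
        }

        public void put(K key, V value) {
            int i = Math.floorMod(key.hashCode(), buckets.length);
            if (buckets[i] == null) {
                buckets[i] = new LinkedList<>();   // list created on first use,
            }                                      // not when the table is built
            for (Entry<K, V> e : buckets[i]) {
                if (e.key.equals(key)) { e.value = value; return; }
            }
            buckets[i].add(new Entry<>(key, value));
        }

        public V get(K key) {
            int i = Math.floorMod(key.hashCode(), buckets.length);
            if (buckets[i] == null) return null;
            for (Entry<K, V> e : buckets[i]) {
                if (e.key.equals(key)) return e.value;
            }
            return null;
        }
    }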

Thank you.

Merging 4 tables that also include Union All

I have this code WHICH IS JOIN OF 3 TABLES,    SELECT   [Provider] AS Publisher ,[Price_Rate] ,[Source_ID] ,Media.Date ,Media.Impressions ,Media.Clicks ,'0' AS [Opt-buyers] ,'0' AS [Completed buyers] ,Media.accounting FROM [dbo].[budget]  AS BUDGET  LEFT JOIN  (    SELECT CASE WHEN [Ad_Set_Name] LIKE 'tw_%'    THEN'twitter'    WHEN [Ad_Set_Name] LIKE 'IN_%'   THEN 'insta'    ELSE '?'       END AS Publisher   ,CAST([Day] AS Date) AS Date   ,SUM(CAST([Impressions] AS INT)) AS Impressions   ,SUM(CAST([Link_Clicks] AS INT)) AS Clicks   ,SUM(CAST([Amount_Spent__USD_] AS money)) AS Spend   FROM [dbo].[twitter]   Group by Day,[Ad_Set_Name]     UNION ALL      SELECT CASE WHEN [Site__PCM_] = 'acuits.com'     THEN 'acqt'      WHEN [Site__PCM_]= 'PulsePoint'      THEN 'plpt'     WHEN [Site__PCM_] = 'SRAX'     THEN 'srax'     ELSE [Site__PCM_]     END AS Publisher     ,CAST(Date AS Date) AS Date     ,SUM(CAST(impressions AS INT)) AS Impressions      ,SUM(CAST(clicks AS INT)) AS Clicks     ,SUM(CAST(media_cost AS money)) AS Spend    FROM [dbo] [pcm]    Group by [Site__PCM_]   ,Date   ) AS new_sources_budget  ON BUDGET.Source_ID = Media.Publisher   WHERE source_id IS NOT NULL    and I'm trying to join another table **called Email** to what's this code is currently providing, but    I'm  having tough time passing thus far. the goal is to add this code     SELECT    SUM(CAST(_Send2 AS INT)) AS [Email Sent]  ,SUM(CAST(_Open2 AS INT)) AS [Email Open]  ,SUM(CAST(Click2 AS INT)) AS [Email Click] FROM [dbo].[behaviour] Group by _Send2,_Open2,Click2   any help will be appreciated.