Starting with D&D: Starter Set vs Dungeon Master’s Guide

For some time I have been GMing Call of Cthulhu 7th ed., and now, to try something different, I would like to go with D&D 5.0, since it will soon be available in my native tongue (the Dungeon Master’s Guide is coming in a few weeks; the Monster Manual and the Starter Set are already translated and easily purchasable).
But I wonder: should I buy the Dungeon Master’s Guide, or should I start with the Starter Set?
I don’t mind waiting for the book to become available, so that isn’t an issue.
I just wonder whether it’s better to begin with the Starter Set or with the Guide.
I suppose the Starter Set is easier to digest, but what is your experience?

Regular expression and Right Regular grammar for decimals starting with 1 ending with 9?

I was trying to do the following:

Consider the set of all strings over the alphabet {0, 1, 2, 9, .} that are decimal numbers beginning with 1 and ending with 9 and having exactly one decimal point (.). For example, 12.9 would be a valid decimal number, while 0.129 would not be, since it does not begin with 1.

but it didn’t seem to work:

1(0|1|2|9)*.(0|1|2|9)*9 

or

S ::= 1X
X ::= E | 0X | 1X | 2X | 9X | Y
Y ::= .Z
Z ::= E | 0Z | 1Z | 2Z | 9Z | A
A ::= 9

Why? What’s the correct answer?
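
For what it’s worth, this is how I have been testing candidate expressions, translated into Python’s regex syntax (my own translation, so treat it as an assumption; note that the decimal point has to be escaped as \. there, because an unescaped . matches any character):

import re

# Candidate pattern from above, rewritten in Python syntax with the dot escaped.
pattern = re.compile(r"1[0129]*\.[0129]*9")

# A few probes: valid and invalid strings from the exercise.
for s in ["12.9", "0.129", "1.9", "129", "1.2.9"]:
    print(s, bool(pattern.fullmatch(s)))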

Starting with SQL Server 2019, does compatibility level no longer influence cardinality estimation?

In SQL Server 2017 & prior versions, if you wanted to get cardinality estimations that matched a prior version of SQL Server, you could set a database’s compatibility level to an earlier version.

For example, in SQL Server 2017, if you wanted execution plans whose estimates matched SQL Server 2012, you could set the database’s compatibility level to 110 (SQL Server 2012).

This is reinforced by the documentation, which states:

Changes to the Cardinality Estimator released on SQL Server and Azure SQL Database are enabled only in the default compatibility level of a new Database Engine version, but not on previous compatibility levels.

For example, when SQL Server 2016 (13.x) was released, changes to the cardinality estimation process were available only for databases using SQL Server 2016 (13.x) default compatibility level (130). Previous compatibility levels retained the cardinality estimation behavior that was available before SQL Server 2016 (13.x).

Later, when SQL Server 2017 (14.x) was released, newer changes to the cardinality estimation process were available only for databases using SQL Server 2017 (14.x) default compatibility level (140). Database Compatibility Level 130 retained the SQL Server 2016 (13.x) cardinality estimation behavior.

However, in SQL Server 2019, that doesn’t seem to be the case. If I take the Stack Overflow 2010 database, and run this query:

CREATE INDEX IX_LastAccessDate_Id ON dbo.Users(LastAccessDate, Id);
GO
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 140;
GO
SELECT LastAccessDate, Id, DisplayName, Age
  FROM dbo.Users
  WHERE LastAccessDate > '2018-09-02 04:00'
  ORDER BY LastAccessDate;

I get an execution plan with 1,552 rows estimated coming out of the index seek operator:

SQL 2017, compat 2017

But if I take the same database and run the same query on SQL Server 2019, it estimates a different number of rows coming out of the index seek. It says “SQL 2019” in the comment at right, but note that it’s still compatibility level 140:

SQL 2019, compat 2017

And if I set the compatibility level to 150 (SQL Server 2019), I get that same estimate of 1,566 rows:

SQL 2019, compat 2019

So in summary, starting with SQL Server 2019, does compatibility level no longer influence cardinality estimation the way it did in SQL Server 2014-2017? Or is this a bug?

How to deal with lack of advantages for a character starting at low point total?

I’m currently playing a character, and just made another, in a Dungeon Fantasy/DFRPG group that starts at 75 points (+50 disads, +5 quirks). It’s not all that difficult to make a viable character even at that point total: you won’t have skills at “expert” levels, or only one at the expense of having others above default, but the group’s adventures are balanced for this power level.

What has just occurred to me, however (and it appears to be an equally valid concern for vanilla GURPS), is that a character built at the common 250-point starting value, or even at 125 points, will start with more Advantages than a 75-pointer, because the lower point total gets spent almost entirely on attributes and skills (the mage template, for instance, pretty much limits advantages to Magery 2 over the required Magery 1, or an attribute increase).

One of the underlying tenets of “leveling” in DFRPG is that, once you’ve gained, say, 50 points and “leveled up”, your 75-point character will be roughly indistinguishable from a new character who started with 125 points. That seems not to be the case, though, because the 125-point character will have started with one or more Advantages that the 75-point character couldn’t afford, and most such advantages (like Magery, Trained by a Master, or Luck) can’t be gained through experience spending.

What’s the best way to ensure that a 75-point character isn’t at a, um, disadvantage in Advantages compared to a 125-point start, after playing a while and earning 50 points of experience? Is there a mechanism in the DF/DFRPG rules that I’m missing that allows adding Advantages on leveling up, or under other circumstances in play?

Weighted Activity Selection Problem with shiftable start times

I have some activities with weights, and I would like to select non-overlapping activities that maximize the total weight. This is a known problem and a solution exists.
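
(For reference, by the known solution I mean the standard weighted interval scheduling dynamic program, roughly the sketch below; the function name is my own. On the example data given further down it returns 270, i.e. it picks a1 and a3.)

from bisect import bisect_right

def max_weight_schedule(activities):
    # activities: list of (start, end, profit); an activity may start exactly when another ends
    acts = sorted(activities, key=lambda a: a[1])        # sort by end time
    ends = [a[1] for a in acts]
    best = [0] * (len(acts) + 1)                         # best[i]: optimum over the first i activities
    for i, (s, e, p) in enumerate(acts, start=1):
        j = bisect_right(ends, s, 0, i - 1)              # number of earlier activities ending no later than s
        best[i] = max(best[i - 1], best[j] + p)          # skip activity i, or take it on top of best[j]
    return best[-1]

print(max_weight_schedule([(10, 12, 120), (10, 13, 100), (14, 18, 150), (14, 20, 100)]))   # 270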

In my case, I am allowed to shift the start times of activities to some extent, while the duration remains the same. This gives me some flexibility and might increase my utilization.

An example scenario is something like the following:

(start, end, profit)
a1: 10 12 120
a2: 10 13 100
a3: 14 18 150
a4: 14 20 100

Without the shifting flexibility, I would choose (a1, a3) and that’s it. On the other hand, I might shift the intervals by up to 5 units to the left or right (in the real case, the allowed shift can even be 1000x greater than the original task duration). In that case I might come up with this schedule, in which all tasks can be selected:

a1: 8 10 120 (shifted -2 to the left)
a2: 10 13 100
a3: 14 18 150
a4: 18 23 100 (shifted +4 to the right)

Is there any feasible solution to this problem?

Starting with an initially empty AVL-tree, draw the resulting AVL-tree after insertion

Starting with an initially empty AVL-tree, draw the resulting AVL-tree after inserting the following elements one after another: 50, 70, 30, 10, 20, 15.

I’m not sure if I am doing it correctly, since I am new to AVL trees. Do I rotate whenever there’s an imbalance after an insertion, or do I insert everything first and only then rotate the AVL tree?

Here’s my attempt:

  1. First I insert 50, 70, 30, 10, 20.
    50
   /  \
  30   70
 /
10
  \
   20

Then I realise there’s an imbalance so I do a rotation and get

    50
   /  \
  20   70
 /  \
10   30

There’s one more element to insert, which is 15. So I insert it into the tree:

    50
   /  \
  20   70
 /  \
10   30
  \
   15

and there’s an imbalance again. So I rotate it

    20
   /  \
  10    50
    \   / \
    15 30  70

My question here is: do I insert everything and only then rotate, or do I rotate whenever there’s an imbalance while inserting? I did the latter. May I know if my approach is correct? I’m new to AVL trees. Thank you.
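
To double-check the final shape, I also tried a small script that rebalances after every single insertion (a minimal sketch of a textbook-style AVL insert; all names are my own):

class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def height(n):
    return n.height if n else 0

def update(n):
    n.height = 1 + max(height(n.left), height(n.right))

def balance(n):
    return height(n.left) - height(n.right)

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    update(y); update(x)
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update(x); update(y)
    return y

def insert(node, key):
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    update(node)
    b = balance(node)
    if b > 1 and key < node.left.key:        # left-left case: single right rotation
        return rotate_right(node)
    if b > 1:                                # left-right case: rotate left child left, then rotate right
        node.left = rotate_left(node.left)
        return rotate_right(node)
    if b < -1 and key > node.right.key:      # right-right case: single left rotation
        return rotate_left(node)
    if b < -1:                               # right-left case: rotate right child right, then rotate left
        node.right = rotate_right(node.right)
        return rotate_left(node)
    return node

def preorder(n):
    return [] if n is None else [n.key] + preorder(n.left) + preorder(n.right)

root = None
for k in [50, 70, 30, 10, 20, 15]:
    root = insert(root, k)

print(preorder(root))    # [20, 10, 15, 50, 30, 70] -- same shape as my last drawing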

Context Free Grammar – Starting with x Number of Chars and Ending with n != x Number of Chars

I am trying to create a context-free grammar in Extended Backus–Naur form whose words start with a non-empty sequence of A’s followed by a non-empty sequence of B’s, with the special condition that the number of B’s has to be unequal to the number of A’s.

Thus, the grammar should generate words like:

  • AAAABBB
  • AAABB
  • ABBB

So basically I could do something like this:

$G = (N, T, P, Sequence)$

$N = \{Sequence\}$

$T = \{A, B\}$

$P = \{Sequence = AA(Sequence | \epsilon)B\}$

But then the words would always have $2n$ A’s and $n$ B’s:

  • AAB
  • AAAABB
  • AAAAAABBB

So how is it possible to make the number of A’s independent of the number of B’s, without the two being equal?
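
For reference, this is the quick membership check I use on candidate words while experimenting with grammars (a minimal sketch; the function name is my own):

import re

def in_language(word):
    # A non-empty run of A's followed by a non-empty run of B's,
    # where the two run lengths differ.
    m = re.fullmatch(r"(A+)(B+)", word)
    return m is not None and len(m.group(1)) != len(m.group(2))

for w in ["AAAABBB", "AAABB", "ABBB", "AAB", "AABB", "AB"]:
    print(w, in_language(w))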