Problem Creating Rainbow Table

I’m taking a security course and I’m working on a project where I have to use a password-cracking tool. I’m using the RainbowCrack tool to create rainbow tables, but I get this error message when I try to generate the tables.

“can’t create file md5_loweralpha#1-7_0_1000x1000_0.rt”

This is the command I used:

rtgen md5 loweralpha 1 7 0 1000 1000 0

Here is a screenshot: [screenshot showing the error message when creating Rainbow Tables]

I don’t know what the problem is; I really need your help. Thank you.

SELECT INTO query: the insert fails, but the table is created

I am using SQL Server 2016

I tried the following query.

SELECT CONVERT(BIGINT, 'A') col1 INTO #tmp

This query obviously fails, because 'A' cannot be converted to BIGINT. However, the temporary table (#tmp) is created even though the query fails.

Why? I think this is by design, but I want to know.

P.S. PDW (Parallel Data Warehouse) does not create the temporary table.

Does rolling ‘you cast …’ on the Wild Magic Surge table require the usual components?

Several effects from the Wild Magic Surge table tell you, the sorcerer, to cast a spell as the result, for example:

You cast Fireball as a 3rd-level spell centered on yourself.

Fireball has verbal, somatic, and material components.

Does this mean I need to perform the verbal and somatic components and have the material components or an arcane focus to cast fireball?

What if I don’t want to? Is the casting automatic? Can I put away my component pouch or arcane focus (and cast only spells without material components) to avoid casting fireball?

If possible, please elaborate in your answer on the consequences in each case: if it does require components, and if it does not.

Can the ‘you cast …’ result from rolling on the Wild Magic Surge table trigger another Wild Magic Surge?

Wild Magic Surge triggers when

Immediately after you cast a sorcerer spell of 1st level or higher, the DM can have you roll a d20. If you roll a 1, roll on the Wild Magic Surge table to create a random magical effect.

I rolled on the table and got “You cast Fireball as a 3rd-level spell centered on yourself.”

Because fireball is a sorcerer spell, does this mean that this casting can trigger the Wild Magic Surge feature again?

I know that the DM can choose not to roll on the table, but theoretically this could build up an infinite chain of fireballs!
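The “infinite chain” intuition can be made concrete with a quick back-of-the-envelope calculation. A hedged sketch, assuming the DM always calls for the d20 (so p = 1/20 per cast) and assuming some probability q that a surge lands on the fireball entry (q depends on the table in your sourcebook; the 2/100 below is a placeholder, not a rules citation):

```python
# Assumptions (not from the rules text): every sorcerer spell cast has
# probability p of triggering a surge, and each surge produces the
# "cast Fireball" result with probability q.  q = 2/100 is a placeholder.
p, q = 1 / 20, 2 / 100

chain = p * q  # chance that one fireball cast spawns another fireball

def prob_chain_of(n):
    """Probability of getting at least n extra fireballs in a row."""
    return chain ** n

# Expected number of extra fireballs: geometric series chain + chain^2 + ...
expected_extra = chain / (1 - chain)

print(prob_chain_of(1))   # one extra fireball is roughly a 1-in-1000 event
print(expected_extra)
```

So the chain is possible but dies out almost immediately: under these assumptions, even a single follow-up fireball is about a one-in-a-thousand event, and each further link is a thousand times rarer still.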

List all column names for multi-joined table

I have a slightly more complicated join:

SELECT person.given_name, person.family_name, person.age, person.state,
       person.street, person.residential_number,
       AS citizen_of_country,
       AS hospitalization_country,
       AS infection_country
FROM patients
JOIN person ON patients.person_id =
JOIN country c1 ON person."citizenOf_country_id" =
JOIN country c2 ON patients.hospitalized_in_country_id =
JOIN country c3 ON patients.infected_in_country_id =;

… and I’d like to somehow get the names of all the columns in this result set. Based on some answers I found, I tried

SELECT DISTINCT column_name FROM information_schema.columns WHERE table_schema = 'public' AND [code]; 

where [code] is the query above. But it didn’t work, giving the error “subquery must return only one column”. I’m new to databases, so I’m not sure how to handle this correctly.

Table-Driven Lexer and the Classification Table

I’m trying to implement a compiler for a custom language as part of an assignment.

I am still trying to figure out how to build the lexer. From what I understand, for a table-driven lexer, we have 3 tables:

  1. Classification Table
  2. Transition Table
  3. Token Type Table

My problem mainly comes from the fact that the only example I’ve seen of a table-driven lexer is the “famous” (because I see it in every university’s online notes) Cooper & Torczon DFA for reading digits (page 25).

From what I gather, the purpose of each of these is as follows:

1: To classify the atomic parts of the language, such as digits (0,1,2,3….) and letters (a,b,c,…)

2: To define what should happen next according to what’s just been classified (If digit, go to state X, if letter, go to state Y)

3: Apparently this is used to check whether or not the string is accepted. Honestly I don’t even know what the point of this is.

The grammar I’m trying to build a compiler for is much more complicated than the examples I’ve seen online. It contains more “atomic” symbols, such as operators (*, +, -, /, >, etc.) and reserved keywords (if, for, while, etc.).

By atomic, I mean symbols that stand on their own (i.e., if is a symbol in its own right, not i followed by f). This poses a problem for me, since I won’t be able to tell whether I’m reading if or a string of the form aifb.

Here’s what I’m currently trying to do:

  1. First, I’m building a CAT (classifier table) for all the atomic symbols of the language. I don’t know if this is the right thing to do, especially when I have 52 letters (English alphabet), 10 digits and reserved words.
  2. I will then merge all the CATs together. So I will have one big CAT that covers letters, digits, and reserved words.
  3. Then, I will build a (big) transition table, so that when I read a character and determine its classification (problem: What about reserved words that take more than 1 character?) I will know where to transition to next.
  4. These tables are used by a simple DFA class which, once the lexeme is read, will spit out a token.
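For the reserved-word problem, the usual approach is not to encode keywords in the DFA at all: lex if as an ordinary identifier, then reclassify it with a keyword lookup after the lexeme is complete. A minimal sketch of the three tables plus that trick, in Python (state names, character classes, and token types here are my own invention, not taken from Cooper & Torczon):

```python
# 1. Classification table: maps each character to a character class,
#    so the transition table stays small (classes, not 62+ characters).
CLASSIFY = {}
for c in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ":
    CLASSIFY[c] = "letter"
for c in "0123456789":
    CLASSIFY[c] = "digit"
for c in "+-*/>":
    CLASSIFY[c] = "op"

# 2. Transition table: (state, char class) -> next state.
TRANSITION = {
    ("start", "letter"):    "in_ident",
    ("start", "digit"):     "in_num",
    ("start", "op"):        "saw_op",
    ("in_ident", "letter"): "in_ident",
    ("in_ident", "digit"):  "in_ident",
    ("in_num", "digit"):    "in_num",
}

# 3. Token-type table: accepting state -> token type.  This is "the point"
#    of table 3: it says which states are accepting and what they produce.
TOKEN_TYPE = {"in_ident": "IDENT", "in_num": "NUMBER", "saw_op": "OP"}

KEYWORDS = {"if", "for", "while"}   # NOT in the DFA; checked afterwards

def lex(text):
    tokens, i = [], 0
    while i < len(text):
        if text[i].isspace():
            i += 1
            continue
        state, start = "start", i
        # Run the DFA as long as a transition exists (maximal munch).
        while i < len(text) and (state, CLASSIFY.get(text[i])) in TRANSITION:
            state = TRANSITION[(state, CLASSIFY.get(text[i]))]
            i += 1
        if state not in TOKEN_TYPE:
            raise ValueError(f"lex error at position {start}: {text[start]!r}")
        lexeme, kind = text[start:i], TOKEN_TYPE[state]
        if kind == "IDENT" and lexeme in KEYWORDS:
            kind = "KEYWORD"        # reclassify reserved words here
        tokens.append((kind, lexeme))
    return tokens

print(lex("if aifb > 42"))
```

This solves the if-vs-aifb ambiguity because the DFA always consumes the longest identifier before the keyword check runs: aifb never stops after ai, and if is only promoted to a keyword once the whole lexeme is known.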

The assignment specifies that I have to use a table-driven lexer.

DELETE a single row from a table with CASCADE DELETE picks a slow plan… but not always

Schema Summary

A dozen tables related by foreign key to a central table (call it TableA). All tables have a PK that is INT IDENTITY, and Clustered. All the tables’ FKs are indexed.

This looks like a star configuration. TableA has fairly static personal info such as name and DOB. The surrounding tables each hold lists of items about the person in TableA that change or grow over time: for example, a person might have several emails, several addresses, several phone numbers, etc.

In the unusual event that I want to delete from TableA (test data that gets inserted during performance checks, for example), the FKs all have CASCADE DELETE to handle removing all subordinate data lists if they exist in any of the surrounding tables. I have three environments to play with: DEV, QA, and UAT (well, four if you count PROD, but “play” is not something I would want to do to PROD). DEV has about 27 million people in TableA with various counts upward of 30M in the surrounding tables. QA and UAT are only a few hundred thousand rows.

The Problem

The simple “delete from TableA where Id = @Id” takes < 1ms on DEV (the big one) and the execution plan looks fine, lots of tiny thin lines and all index seeks… but here’s the rub: infrequently on DEV, and ALWAYS on QA and UAT, the simple delete takes about 1 second and the plan shows almost all the indexes are being scanned, with big fat lines showing the entire row counts.


The delete statement is issued by Entity Framework Core running inside an API, so I have limited ability to change it (making it into a stored procedure, index hinting, using a different predicate, and other such ideas are off the table).

Despite all three environments being identical (the same script created all of them), nothing I have done so far has improved QA and UAT, while DEV is usually fast.

When DEV becomes slow, it remains slow until “something” happens. I haven’t figured out what the “something” is, but when it occurs, the performance reverts to fast again and remains that way for days.

If I catch DEV at a slow time, and use SSMS to manually run a delete statement, the plan is fast (<1ms); but the deletes coming from the API use a slow plan (1s). Entity Framework is (as best I can tell) using sp_executesql to run a parameterized “delete from tableA where Id = @Id”. The manual query is “DELETE FROM TableA WHERE Id = 123456789”

The row being deleted is always a recently-added row, meaning that the Id is right at the “top” and probably not within the range of the index statistics (although I speak from a position of profound ignorance on that topic and probably have my wires crossed…)

What I have tried so far

Reading up on FK cascade delete issues, it seems all the FKs need to be indexed, so I did that.

Rebuilt (not just reorganized) every index.

Selectively deleted the bad plans from the plan cache using DBCC FREEPROCCACHE (plan_handle).

Ran the excellent tools from Brent Ozar, which got me checking that the FKs all have is_not_trusted = 0.

Looked at these (and other) previous Stack Exchange questions: 1, 2, 3, 4

Of those, I suspect that the last one, with a description of how the cardinality estimator gets confused, might be pointing to the source of the problem, but I need help figuring out what to do next…

The plan screenshot below (from SSMS) shows the slow plan: some of the FK indexes are being scanned (but not all) and there is an excessive memory grant. The fast plan would show all index seeks. The whole plan is at the ShowMyPlan link.

I hope someone can point out what I have missed, or what I can investigate further.


[Screenshot: the slow execution plan (“Bad Plan”)]

How to convert a decision table to a consistent decision tree?

I am working from a presentation from school, but I am not able to understand how they go from one step to the next. This is the decision table that I have.

[Image: the decision table]

From it, the first thing we have to do is to choose the root node among these nodes:

[Image: the candidate root nodes]

But I do not understand why, for example, from Q1 the Y branch has {a1, a2} while the N branch has {a1, a3}.

This is then the final solution for the tree:

[Image: the final decision tree]

I am not sure if this is the correct name in English; maybe that’s the reason I wasn’t able to find any literature online.

I would appreciate an explanation or at least a link to a guide on how to convert decision tables to optimal decision trees that are consistent with the decision table.