Do the Tunnel Fighter UA fighting style and the Polearm Master feat combine to grant indefinite opportunity attacks?

Here’s the scenario:

The character in question has both the Tunnel Fighter fighting style from Unearthed Arcana: Light, Dark, Underdark!, which says:

You excel at defending narrow passages, doorways, and other tight spaces. As a bonus action, you can enter a defensive stance that lasts until the start of your next turn. While in your defensive stance, you can make opportunity attacks without using your reaction, and you can use your reaction to make a melee attack against a creature that moves more than 5ft while within your reach.

As well as the Polearm Master feat (PHB, p. 168), the second bullet point of which says:

While you are wielding a glaive, halberd, pike, or quarterstaff, other creatures provoke an opportunity attack from you when they enter your reach.

Assuming you hit every target that enters your reach (10 feet, unless you’re using a quarterstaff), could you essentially take out a stampede of kobolds who have no ranged weapons? (Or some other instance of lots of easy-to-hit targets rushing at the player.)

Am I correct in assuming this means all approaching targets are attacked once as they enter my reach? Am I missing something that says otherwise in this scenario?

This combination seems really good for protecting the rest of the group from a swarm of enemies and for abusing choke points. It’s almost too good against weaker enemies.

Can I combine 2 UnrealEngine forks?

I am using Daz Gen 8 characters and clothes in my project.

After I read this I decided to use a fork of UE4 which uses Dual Quaternion Skinning.

Basically, if you use Daz characters in UE4, they deform better with DQS, which UE4 doesn’t support by default.
You can get the DQS-modded engine from here.

I compiled it and I managed to get all my stuff from my original project to a DQS one.

No problem.

But I also need a soft body and a clothing solution in the project.

4.25 has a clothing solution and it does work fine, but there is no soft-body solution that I know about…

So there are a few options:

  1. Vico Dynamics from the Marketplace
  2. using a Flex branch of UE4
  3. if you know of any more, please let me know

Tested Vico, and it works fine for simple meshes up to 20k polygons. Above that, the editor becomes unresponsive… But it does work with my modded DQS engine.

The latest UE4 branch with Flex is 4.22. I was able to get it compiled and running.

I would like to know if I can merge the 2 repositories (4.22 DQS and Flex) into one and build it from source… What software or workflow do you recommend for merging these big branches of UE4? I expect quite a few conflicts, because Flex deforms the mesh in its own way, and so does the DQS version of Unreal…
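For what it’s worth, the workflow I was going to try is the plain git one — treat one fork as the base, add the other fork as a remote, and merge, resolving the conflicts by hand. The remote name and URL below are placeholders, and I’m guessing the conflicts would land in the skinning/physics sources:

```shell
# starting from a clone of the DQS fork, with its 4.22 branch checked out
git remote add flex <url-of-the-flex-fork>   # placeholder URL
git fetch flex
git merge flex/4.22    # conflict markers appear wherever both forks touched a file
# resolve the conflicts by hand, commit, then run Setup.bat and
# GenerateProjectFiles.bat and build from source as usual
```

Is this the right approach for branches this large, or is there better tooling for it?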

Can you combine Polearm Master, Tunnel Fighter, and Warcaster (with Repelling Blast)?

We have a player who is playing a Fighter with a level of Warlock, wielding a polearm. They say they should be able to get the initial opportunity attack from Polearm Master when a foe enters their reach; then, if the foe continues to move in, that triggers the Tunnel Fighter opportunity attack, which the character uses for an eldritch blast with the Repelling Blast invocation, knocking the foe back 10 feet… and then Polearm Master could trigger again if the foe continues to advance.

Is this legal? I know the Polearm Master feat specifies (according to Mearls) that the opportunity attack from it has to be with the same weapon being used when the opportunity attack procs, but there seems to be no such limitation on Tunnel Fighter.

How do you combine multiple update statements for the same row using a MySQL trigger?


Each time a column is modified, I need to update the associated column (which has the same name) in a second table. This is my first attempt at using a trigger.


Here’s a simplified example of what I’m trying to do, which does its job fine, but inefficiently:

    DROP TRIGGER IF EXISTS update_second_table;
    DELIMITER //
    CREATE TRIGGER update_second_table
      BEFORE UPDATE ON first_table
      FOR EACH ROW
    BEGIN
      /* putting IF statements on one line so it's easier to see what's happening */
      IF NOT(OLD.firstname <=> NEW.firstname)   THEN UPDATE second_table SET firstname  = CURRENT_TIMESTAMP WHERE id =; END IF;
      IF NOT(OLD.middlename <=> NEW.middlename) THEN UPDATE second_table SET middlename = CURRENT_TIMESTAMP WHERE id =; END IF;
      IF NOT(OLD.lastname <=> NEW.lastname)     THEN UPDATE second_table SET lastname   = CURRENT_TIMESTAMP WHERE id =; END IF;
      IF NOT(OLD.nickname <=> NEW.nickname)     THEN UPDATE second_table SET nickname   = CURRENT_TIMESTAMP WHERE id =; END IF;
      IF NOT(OLD.dob <=> NEW.dob)               THEN UPDATE second_table SET dob        = CURRENT_TIMESTAMP WHERE id =; END IF;
      IF NOT( <=>           THEN UPDATE second_table SET email      = CURRENT_TIMESTAMP WHERE id =; END IF;
      IF NOT(OLD.address <=> NEW.address)       THEN UPDATE second_table SET address    = CURRENT_TIMESTAMP WHERE id =; END IF;
      IF NOT( <=>             THEN UPDATE second_table SET city       = CURRENT_TIMESTAMP WHERE id =; END IF;
      IF NOT(OLD.state <=> NEW.state)           THEN UPDATE second_table SET state      = CURRENT_TIMESTAMP WHERE id =; END IF;
      IF NOT( <=>               THEN UPDATE second_table SET zip        = CURRENT_TIMESTAMP WHERE id =; END IF;
      IF NOT( <=>           THEN UPDATE second_table SET phone      = CURRENT_TIMESTAMP WHERE id =; END IF;
    END;
    //
    DELIMITER ;

The problem:

As you can see, depending on how many columns are updated in `first_table`, there can be as many as 11 update statements on the same row in `second_table`.

The question:

Is there any way to combine the update statements into one?
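The shape I was wondering about (untested sketch; it assumes `second_table` shares the same `id` values as `first_table`) is a single UPDATE whose SET list does the per-column comparison with `IF()`:

```sql
UPDATE second_table
SET firstname  = IF(OLD.firstname  <=> NEW.firstname,  firstname,  CURRENT_TIMESTAMP),
    middlename = IF(OLD.middlename <=> NEW.middlename, middlename, CURRENT_TIMESTAMP),
    /* ...one IF() per tracked column... */
    phone      = IF( <=>,  phone,      CURRENT_TIMESTAMP)
WHERE id =;
```

Would something like that work, or is there a better pattern?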

How can I combine 2 PDAs into 1?

I need to form a PDA for this language: $\{\, a^n b^m \mid n = m \vee n = 2m \,\}$

I know the idea of building each one separately but how do I combine them into 1 PDA?

LHS: for every ‘a’ I push ‘A’ onto the stack, and for every ‘b’ I pop one ‘A’.

RHS: for every ‘a’ I push ‘A’ onto the stack, and for every ‘b’ I pop ‘A’ twice.

How can I combine them? Can I somehow use nondeterminism?
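To check my understanding of the union construction, here is a small Python simulation (not a formal PDA — the stack is just a counter, and trying both branches stands in for an ε-move from a new start state that guesses which condition holds):

```python
def run_branch(word, pops_per_b):
    # Deterministic branch: every 'a' pushes one A; every 'b' pops
    # pops_per_b A's. Accept iff the input has shape a*b* and the
    # stack is empty when the input is exhausted.
    stack = 0
    seen_b = False
    for ch in word:
        if ch == 'a':
            if seen_b:
                return False      # an 'a' after a 'b' is not in a^n b^m
            stack += 1
        elif ch == 'b':
            seen_b = True
            stack -= pops_per_b
            if stack < 0:
                return False      # tried to pop an empty stack
        else:
            return False          # letter outside {a, b}
    return stack == 0

def accepts(word):
    # The nondeterministic choice: the LHS branch (n = m) pops one A
    # per 'b'; the RHS branch (n = 2m) pops two A's per 'b'.
    return run_branch(word, 1) or run_branch(word, 2)
```

Acceptance here mirrors acceptance by empty stack at end of input; in the real PDA the two branches would simply be the two machines glued to a new start state by ε-transitions.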

Can multiple casters combine Flesh to Stone to petrify a target faster?

The flesh to stone spell states:

A creature restrained by this spell must make another Constitution saving throw at the end of each of its turns. If it successfully saves against this spell three times, the spell ends. If it fails its saves three times, it is turned to stone and subjected to the petrified condition for the duration. The successes and failures don’t need to be consecutive; keep track of both until the target collects three of a kind.

Say a wizard casts flesh to stone on an enemy, an orc to make it simple, and the orc fails the first saving throw.

Then the wizard’s friend, in the same round, casts a second flesh to stone spell on the poor orc, and it fails that saving throw too. Then the wizard’s other friend casts a third flesh to stone on the orc in that same round, and it fails that saving throw as well.

Does the orc fully petrify in that round? In other words, does each spell’s progress toward petrifying the orc stack with the others’? It did fail 3 saving throws, but against 3 separate spells instead of one spell.

Can you combine an Artificer’s Enhanced Weapon infusion with a +3 magic weapon?

The Artificer’s Enhanced Weapon infusion (ERftLW) grants a +1 (+2 at 10th level) bonus to attack and damage rolls made with the infused weapon. Is it possible to combine this infusion with a magic weapon that already has a +3 magical bonus, thereby granting a +4 (+5 at 10th level) bonus to attack and damage rolls?

I believe it is okay if you first apply the Enhanced Weapon infusion and then the +3 from the magic weapon. However, I don’t know whether "(any)" weapon covers the infused weapon (which has already become magical).

Best way to combine many disparate schemas for database table creation?

I have a bunch of data that consists of public records from the state government dating back to the early 90s. Along the way, the data organization and attributes have changed significantly. I put together an Excel sheet containing the headers in each year’s file to make sense of it and it came out like this:

[Screenshot: spreadsheet of each year’s file headers, color-coded by logical similarity]

As you can see by looking at my checksum column on the left, there are 8 different schemas from 1995 through 2019. Also, you can see that the data between each can vary quite a bit. I’ve color-coded columns that are logically similar. Sometimes the data is mostly the same but the names of the columns have changed. Sometimes, there is different data altogether that appears or disappears.

I think it is pretty clear that the best goal here is to have 1 table combining all of this information rather than 8 disparate tables, since I want to be able to query across all of it efficiently. Each year’s table contains ~150,000 rows, so the combined table would have around 4 million records, and each table has approximately 55-60 fields.

I’ve been struggling for a few days with how to tackle it. Half of the files were fixed-width text files, not even CSVs, so it took me a long time to properly convert those. The rest are thankfully already CSVs or XLSX. From here, I would like to end up with a table that:

  • includes a superset of all available logically distinct columns – meaning that the ID number and ID Nbr columns would map to a single column in the final table, not 2 separate columns
  • has no loss of data

Additionally, there are other caveats such as:

  • random Filler columns (like in dark red) that serve no purpose
  • No consistency with naming, presence/absence of data, etc.
  • data is heavily denormalized but does not need to be normalized
  • there’s a lot of data, 2 GB worth just as CSV/XLS/XLSX files

I basically just want to stack the tables top to bottom into one big table, more or less.

I’ve considered a few approaches:

  • Create a separate table for each year, import the data, and then try to merge all of the tables together
  • Create one table that contains a superset of the columns and add data to it appropriately
  • Try pre-processing the data as much as possible until I have one large file with 4 million rows that I can convert into a database
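As a concrete version of the “stack the tables top to bottom” option, here is a minimal standard-library Python sketch; the rename map and filler set are made-up stand-ins for the real header mappings in my spreadsheet:

```python
import csv
import io

# Hypothetical header map: each historical spelling -> one canonical name.
RENAMES = {"ID Nbr": "id_number", "ID number": "id_number"}
FILLER = {"Filler"}                  # junk columns to drop outright

def canonical(header):
    # Fall back to a normalized form of the header when it has no mapping.
    return RENAMES.get(header, header.strip().lower().replace(" ", "_"))

def stack(files):
    """Stack CSV files with differing schemas into rows keyed by the
    superset of all canonical column names; missing values become ""."""
    rows, columns = [], []
    for f in files:
        for row in csv.DictReader(f):
            clean = {canonical(k): v for k, v in row.items()
                     if k not in FILLER}
            for c in clean:
                if c not in columns:
                    columns.append(c)     # grow the superset as seen
            rows.append(clean)
    # Backfill columns a given year's file didn't have, so no data is lost
    # and every row has the full superset of fields.
    return columns, [{c: r.get(c, "") for c in columns} for r in rows]

# Toy stand-ins for two years' files with different schemas
y1995 = io.StringIO("ID Nbr,Name\n1,Alice\n")
y2019 = io.StringIO("ID number,Name,Email\n2,Bob,\n")
cols, rows = stack([y1995, y2019])
```

The same idea would scale to the real files by streaming each year’s CSV through this and writing the unified rows out to one big load file.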

I’ve tried importing just the first table into both SQL Server and Access but have encountered issues there with their inability to parse the data (e.g. duplicate columns, flagging columns with textual data as integers). In any case, it’s not practical to manually deal with schema issues for each file. My next inclination was to kind of patchwork this together in Excel, which seems the most intuitive, but Excel can’t handle a spreadsheet that large so that’s a no-go as well.

The ultimate goal is to have one large (probably multi-GB) SQL file that I can copy to the database server and run, maybe using LOAD IN FILE or something of that sort – but with the data all ready to go since it would be unwieldy to modify afterwards.

Which approach would be best, and what tools should I be using for this? Basically the problem is trying to "standardize" this data under a uniform schema without losing any data, while staying as non-redundant as possible. On the one hand, it doesn’t seem practical to go through all 25 tables manually and try to import each one or change each one’s schema. I’m also not sure about settling on the schema first and then modifying the data, since I can’t work with it all at once. Any advice from people who have done stuff like this before? Much appreciated!