## Why wouldn’t a Gunsmith Artificer use the Thunder Monger ability every time?

As part of the Artificer’s Gunsmith subclass, you gain the Thunder Monger ability. Why would a player ever choose not to use this ability in place of a normal attack with the Thunder Cannon? It always increases the damage the Thunder Cannon deals, has no “X per day” caveats, and (seemingly) no negative repercussions. So with an unlimited number of uses and no downsides, why not just always use it?

I ask this not as a matter of opinion but as a request for clarification. I want to be sure there isn’t a rule I am missing or an effect I am misunderstanding. I am also aware that this is a UA class and therefore still in playtesting, but I want to try the class out.

## Longest common substring in linear time

We know that the longest common substring of two strings can be found in O(N^2) time. Can a solution be found in linear time?
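For reference, the quadratic bound usually comes from the classic dynamic-programming table over suffix lengths; a minimal Python sketch (the function name is my own):

```python
def longest_common_substring(a: str, b: str) -> str:
    # prev[j] / cur[j] = length of the longest common suffix
    # of a[:i] and b[:j]; the best cell seen marks the answer
    best_len, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return a[best_end - best_len:best_end]
```

As for the question itself: a generalized suffix tree (built with Ukkonen’s algorithm, for example) brings this down to O(N + M) for a constant-size alphabet, so linear time is achievable in that model.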

## Reducing SAT to a P problem in polynomial time

Would reducing SAT in polynomial time to a problem in P mean that P = NP?
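For context, the implication can be spelled out; the only extra facts needed are that SAT is NP-complete (Cook–Levin) and that $$\mathrm{P} \subseteq \mathrm{NP}$$:

```latex
% A polynomial-time reduction to a polynomial-time decidable problem
% composes into a polynomial-time decider:
\mathrm{SAT} \le_p A \ \text{and}\ A \in \mathrm{P}
  \;\Longrightarrow\; \mathrm{SAT} \in \mathrm{P}.
% Since every L in NP reduces to SAT (Cook--Levin), chaining reductions gives
L \le_p \mathrm{SAT} \le_p A
  \;\Longrightarrow\; L \in \mathrm{P}
  \;\Longrightarrow\; \mathrm{NP} \subseteq \mathrm{P}
  \;\Longrightarrow\; \mathrm{P} = \mathrm{NP}.
```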

## What is a JIT (just-in-time compiler)? [on hold]

As we know, there are languages such as C++, C, and Java, and the just-in-time compiler is used in the Java language. Suppose we have some Java applications and we want to improve their performance; the JIT does this by compiling the code at run time.

I need some more information on JIT.

## A question on verifying the mixing time of finite groups such as the Rubik’s Cube Group

I’m interested in some questions about the computational complexity of bounding the mixing time of random walks on Cayley graphs of finite groups, like that of the Rubik’s Cube Group $$G$$. Determining that $$20$$ is the diameter (God’s number) of the Rubik’s Cube Group under the half-turn metric with the Singmaster generating set $$s=\langle U, U’, U^2, D, D’, D^2,\cdots\rangle$$ was a wonderful result. I’m curious about follow-up questions, such as determining how many half-turn twists it would take to get the cube fully “mixed”, i.e. $$\epsilon$$-close to the uniform stationary distribution $$\pi$$, say in the total variation distance sense.

For example, noting that there are $$18$$ moves in the half-turn metric, and calling $$n$$ the mixing time, can we say something like:

For all but a very small number of elements of $$g\in G$$, are there very close to $$\frac{18^n}{\vert G \vert}$$ ways of writing $$g$$ as words of length $$\le n$$?

My intuition is that, after the cube is fully mixed with $$n$$ moves, there should not be a large special subset $$A\subset G$$ of elements that need a lot more or a lot fewer than $$\frac{18^n}{\vert G \vert}$$ ways of writing them, starting from the solved position. On the other hand, if the cube has only been scrambled with $$m\lt n$$ twists, then there should be a large subset $$A$$ whose elements are, in some sense, writable with no more than $$\frac{18^m}{2\vert G \vert}$$ different words of length $$\le m$$.
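This flattening intuition can be checked exhaustively on a toy group. The sketch below (all names mine) uses $$S_4$$ with adjacent transpositions plus a “stay” move to make the walk aperiodic, and tracks how the per-element word counts spread around the uniform value $$\frac{4^t}{\vert G\vert}$$:

```python
from itertools import permutations

def compose(p, q):
    # apply permutation q first, then p (tuples mapping i -> p[i])
    return tuple(p[q[i]] for i in range(len(p)))

N = 4
identity = tuple(range(N))
# adjacent transpositions generate S_4; including the identity ("stay")
# move makes the walk lazy/aperiodic, avoiding the parity obstruction
gens = [identity]
for i in range(N - 1):
    t = list(identity)
    t[i], t[i + 1] = t[i + 1], t[i]
    gens.append(tuple(t))

group = list(permutations(range(N)))      # |G| = 24
counts = {identity: 1}                    # words of length exactly t
spread = {}
for t in range(1, 21):
    nxt = {}
    for g, c in counts.items():
        for s in gens:
            h = compose(s, g)
            nxt[h] = nxt.get(h, 0) + c
    counts = nxt
    total = sum(counts.values())          # = len(gens) ** t
    # ratio of each element's word count to the uniform share total/|G|
    ratios = [counts.get(g, 0) * len(group) / total for g in group]
    spread[t] = (min(ratios), max(ratios))
    print(t, round(spread[t][0], 3), round(spread[t][1], 3))
```

As $$t$$ grows, both ratios tend to $$1$$, i.e. every element has close to $$\frac{4^t}{24}$$ representing words. Without the stay move, words of a fixed length reach only one parity class, which is exactly the kind of large “special subset” the question is about.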

I think we can combine approximate counting techniques to parlay such gaps into an Arthur–Merlin protocol to verify that the mixing time is $$\ge n$$:

• Arthur chooses a random element $$g$$ of $$G$$, a random hash $$h$$ mapping words over the generators onto a set of size $$\frac{18^n}{\vert G \vert}$$, and a random image $$y$$ of $$h$$
• Merlin tells Arthur a word $$W$$ of length up to $$n$$ that, when applied to the starting position of the cube, equals $$g$$
• The word $$W$$ must also satisfy $$h(W)=y$$ – indicating that there are likely a lot of words of length $$\le n$$ that equal $$g$$
• Arthur repeats with Merlin to amplify as needed
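A toy version of the hashing step above (a Goldwasser–Sipser-style set-size lower bound) shows the gap Arthur exploits. The sets below stand in for “words of length $$\le n$$ that equal $$g$$”, encoded as integers; all names and parameters here are my own:

```python
import random

def random_hash(prime, m):
    # pairwise-independent family h(x) = ((a*x + b) mod prime) mod m
    a = random.randrange(1, prime)
    b = random.randrange(prime)
    return lambda x: ((a * x + b) % prime) % m

def arthur_accepts(S, m, trials=200, prime=(1 << 31) - 1):
    # Arthur picks a random hash h and target y; Merlin wins the round
    # if he can exhibit some element of S hashing to y
    wins = 0
    for _ in range(trials):
        h = random_hash(prime, m)
        y = random.randrange(m)
        if any(h(x) == y for x in S):
            wins += 1
    return wins / trials

random.seed(0)                                  # deterministic demo
big = set(random.sample(range(10**6), 4000))    # "many words equal g"
small = set(random.sample(range(10**6), 40))    # "few words equal g"
m = 1000                                        # hash range ~ |big| / 4
p_big = arthur_accepts(big, m)
p_small = arthur_accepts(small, m)
print(p_big, p_small)
```

If Merlin can keep hitting Arthur’s random target $$y$$, the witness set must be comparable to the hash range or larger; if it is much smaller than the range, he fails almost every round, and repetition amplifies the gap.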

Because (for groups, I believe) the mixing time is at least the diameter, this may also provide an Arthur–Merlin approach to bounding the diameter of a large group.

## Inconsistent elapsed time for MySQL/MariaDB query

To check the elapsed time of a database query for Munin monitoring, I have created a script; the time-measurement part is:

```shell
start=$(sed 's/^0*//' <<< "$(date +%N)")
/usr/bin/mysql -u 3da_d9 -p****** --host="127.0.0.1" --port=4002 -e "SELECT f.*, AVG(l.value) AS vote_value, COUNT(l.value) AS vote_count, 0 AS active_feature, c.name FROM 3dallusions_joomla_d9.jom3_downloads_files AS f INNER JOIN 3dallusions_joomla_d9.jom3_downloads_containers AS c ON f.containerid = c.id LEFT JOIN 3dallusions_joomla_d9.jom3_downloads_log AS l ON l.type = 3 AND l.fileid = f.id AND l.value != 0" > /dev/null
end=$(sed 's/^0*//' <<< "$(date +%N)")
# date +%N only gives nanoseconds within the current second, so
# compensate if a second boundary was crossed during the query
if [ "$end" -lt "$start" ]; then
    end=$((end + 1000000000))
fi
elapsed=$((end - start))
elapsed=$((elapsed / 1000000))   # nanoseconds -> milliseconds
echo $elapsed
```

I’ve checked the logic of the time measurement by replacing the SQL call with a simple sleep. That gives a consistent result every time.

The database and client are MariaDB 10.1 from the Debian Stretch repository.

The thing that is puzzling me is that the first time I run the script, the answer is around 10 milliseconds. Subsequent runs give about 50 milliseconds. After not running the script for a while, a result of around 10 is obtained again.

Why would a query be five times quicker the first time than when repeated? One might expect caching to cause the opposite effect. What is happening?

## How to get the local system date and time in a SharePoint 2013 workflow

My workflow is running on server time (it’s a VM server), but I want it to run on my local system date and time. For example, the workflow runs on a server in a US time zone while my local system is on Indian time; the workflow should use my system’s time, not the server’s. How can I do this? Is there any way to do it?