Box2d: High screen resolution / frequency causes high friction?

I’m using Cocos Creator with (built-in) box2d for physics.

Recently our game behaves strangely on our new device, a Galaxy S20 Ultra 5G, which has a 1440 x 3200 screen with a 120 Hz refresh rate.

After we stop pushing them, all our physics bodies stop almost immediately, as if they had very high friction. No other device reacts that way.

Has anyone experienced this issue and can give me some advice?
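To illustrate my suspicion (this is plain Python, not Cocos or box2d code, and the damping value is made up): if the engine performs one fixed-dt physics step per rendered frame, a 120 Hz screen runs twice as many steps per wall-clock second as a 60 Hz one, so per-step friction/damping drains velocity twice as fast in real time.

```python
# Not Cocos/box2d code -- a plain-Python sketch of my suspicion. Assumes the
# engine runs one fixed-dt physics step per rendered frame, so a 120 Hz
# display performs twice as many steps per wall-clock second as a 60 Hz one.

def speed_after_one_second(frames_per_second, damping_per_step=0.02):
    v = 10.0  # initial speed (arbitrary units); damping value is made up
    for _ in range(frames_per_second):  # one physics step per frame
        v *= 1.0 - damping_per_step
    return v

v60 = speed_after_one_second(60)    # ~2.98
v120 = speed_after_one_second(120)  # ~0.89
assert v120 < v60  # the 120 Hz device appears to have much higher friction
```

If this is what's happening, the fix would be decoupling the physics step from the render rate, but I don't know whether that's what Cocos Creator's built-in box2d does.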

OpenGL GLSL ES 3.10 – Referencing a uniform variable, causes the vertex shader to not draw anything

I have a project with a default shader that just draws models and textures. Recently I decided to add a second shader that applies a fancy effect and is used only on some of the objects drawn.

After compiling the project for Linux or Windows, it all works as expected. When compiling the project for Android, the new shader doesn't work on specific devices, while on the other devices I tried, everything works.

My shaders

Below is my default vertex shader, written specifically for Android devices. It works on all devices and draws everything without any editing or effect. As far as I can tell, the fragment shaders work, so I'll omit them.

    #version 310 es

    in vec4 position;
    in vec3 colour;
    in vec2 texCoord;

    uniform mat4 matrix;
    uniform mat4 matrixProjection;

    out vec2 outTexCoord;
    out vec4 outColour;

    void main() {
        gl_Position = matrixProjection * matrix * position;
        outTexCoord = texCoord;
        outColour = vec4(colour.rgb, 1.0);
    }

I hope this looks fairly straight-forward. matrixProjection is the projection matrix, and matrix is the model-view matrix. They both work as expected and I’m able to render a whole scene without issue.

Now here is a simplified version of my new shader:

    #version 310 es

    in vec4 position;
    in vec3 colour;
    in vec2 texCoord;

    uniform mat4 matrix;
    uniform mat4 matrixProjection;
    uniform float animationCurrent;

    out vec2 outTexCoord;
    out vec4 outColour;

    void main() {
        gl_Position = matrixProjection * matrix * position;

        if (animationCurrent > 0.0) {
            gl_Position.y += 5.0;
        }

        outColour = vec4(colour.rgb, 1.0);
        outTexCoord = texCoord;
    }

The only differences in the new shader are the new uniform animationCurrent and an extra if statement that modifies the gl_Position.y of some vertices. Any object using this shader is not drawn on screen at all on some devices.

What I’ve tried

From the new shader, if I remove the entire if statement, everything works and objects display as-is. If I replace the if statement with if (true) it still works, but all vertices of objects drawn with it display slightly higher. If I replace it with if (false) it also works as expected.

So for some reason, just referencing animationCurrent causes the object to not be drawn.

I also tried replacing the condition with if (matrix[0][0] > 0.0) and the object still draws, so there seems to be something specifically wrong with animationCurrent. I also tried adding another matrix uniform variable, setting its value the same way I set matrix, but that wouldn't draw the object either.

This should mean that the value of animationCurrent is not relevant, and the fact that it’s a uniform float doesn’t matter either.


The problem occurs on an Android phone with this hardware:

    Device: Moto E (4) Plus - 7.1.1
    Vendor graphic card: ARM
    Renderer: Mali-T720
    Version OpenGL: OpenGL ES 3.1 v1.r12p1-01alp0.62f282720426ab7712f1c6b996a6dc82
    Version GLSL: OpenGL ES GLSL ES 3.10

And on this Android tablet with similar hardware:

    Device: Kindle Fire 8
    Vendor graphic card: ARM
    Renderer: Mali-T720
    Version GL: OpenGL ES 3.1 v1.r26p0-01rel0.526d936ea9da20486773a9aaceecd920
    Version GLSL: OpenGL ES GLSL ES 3.10

This is an Android tablet where everything works as expected:

    Device: Lenovo TB-X505F - 10
    Vendor graphic card: Qualcomm
    Renderer: Adreno (TM) 504
    Version GL: OpenGL ES 3.2 V@415.0 (GIT@f345350, I0760943699, 1580221225) (Date:01/28/20)
    Version GLSL: OpenGL ES GLSL ES 3.20

And here is a slightly older device that works. I've modified the shader a bit to support an older GLSL version, but the idea is the same:

    Device: Kindle Fire 7
    Vendor graphic card: ARM
    Renderer: Mali-450 MP
    Version GL: OpenGL ES 2.0
    Version GLSL: OpenGL ES GLSL ES 1.00


My primary goal is to understand what is causing this. Have I missed something obvious? Is this an edge-case bug related to the hardware?

I’m still learning how to support different devices with different GLSL versions, so it’s very likely I’ve missed something.

If you have any information, let me know. I’m willing to try a few things on different devices to find out more about this issue.

Solving simultaneous ODEs with NDSolve causes unexpected singularity error (ndsz)

I’m trying to model the evaporation of a binary droplet in a square well by solving a pair of coupled PDEs for its height profile and composition. I can manage it for a single liquid pretty well, but when I introduce the PDE for composition, NDSolve quickly finds a singularity in the composition variable (importantly, the singularity moves slightly if I change the number of points). I can’t seem to find any way to stabilise the system; can anyone help me? Is there something fundamental about NDSolve that I’m missing?

(I asked a similar question a while ago but I’ve restructured my code to make it easier to share here)

The equations are:

$$\frac{\partial h}{\partial t} = -C \frac{\partial}{\partial x}\Big[\frac{1}{3} \frac{\partial^3 h}{\partial x^3}h^3 + \frac{B1}{2} \frac{\partial X}{\partial x}\Big] - (1-AX)$$
$$\frac{\partial X}{\partial t} = -\frac{C}{3} h^2 \frac{\partial^3 h}{\partial x^3}\frac{\partial X}{\partial x} - \frac{h B1}{2} \Big(\frac{\partial X}{\partial x}\Big)^2 + \frac{A}{h}X(1-X)$$

with height $h$ and composition $X$.

And the boundary conditions are

$h(t)=1$ and $X(t)=X(t=0)$ at $x=1$, $\frac{\partial h}{\partial x}=\frac{\partial X}{\partial x}=0$ at $x=0$, and $\frac{\partial^3 h}{\partial x^3}=0$ at $x=0$ and $x=1$.

I discretise all this using finite differences.
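As a plain-Python illustration of the idea (this is not my Mathematica code; NumPy's correlate plays the role of ListCorrelate here), a central-difference first-derivative stencil applied by correlation looks like:

```python
import numpy as np

# Central-difference first derivative applied as a correlation stencil,
# analogous to the ListCorrelate calls in the Mathematica code below.
x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
f = x ** 2

# df[i] ~ (f[i+2] - f[i]) / (2 dx), i.e. f'(x[i+1]) on interior points
df = np.correlate(f, np.array([-1.0, 0.0, 1.0]), mode="valid") / (2 * dx)

# for f = x^2 the central difference is exact: f' = 2x
assert np.allclose(df, 2 * x[1:-1])
```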

This is what I get for a single liquid droplet (when I suppress equation 2), and I expect something qualitatively similar for a binary droplet. [Figure: droplet height curves with the composition equation suppressed]

Here is a minimal reproducible example:

    N1 = 100; dx = 1/N1; (*Discretise*)
    C1 = 1; (*parameter C*)
    a = 1.5; b = 1 - a;
    hInitial = Table[a + b i^2 dx^2, {i, 0, N1}]; (*Initial height profile*)
    E1 = 1; (*evaporation on/off switch*)
    Cc = C1/(24*dx^3);
    A1 = 0.01; B1 = 1; (*Control constants in the equations above*)
    CcX = (C1 B1)/(2 dx^2); CcX2 = C1/(6 dx^2);
    X0 = 0.4; XInitial = Table[X0, {i, 0, N1}]; (*Initial composition ratio*)
    Sc = 1; (*Composition on/off switch*)
    TGap = 0.085; (*for spacing curves in the plot*)

    dv[v_List] :=
      With[{h = Take[v, Length[v]/2 - 1], hN = v[[Length[v]/2]],
         X1 = v[[Length[v]/2 + 1 ;; Length[v] - 1]], XN = v[[-1]]},
       With[{
         (*derivatives are discretised as finite differences;
           ListCorrelate achieves this*)
         dh1 = ListCorrelate[{0, 0, 1, 1, 0}, #] &,
         dh2 = ListCorrelate[{0, 1, 1, 0, 0}, #] &,
         dh3 = ListCorrelate[{0, -1, 3, -3, 1}, #] &,
         dh4 = ListCorrelate[{-1, 3, -3, 1, 0}, #] &,
         hi = ListCorrelate[{0, 0, 1, 0, 0}, #] &,
         dh5 = ListCorrelate[{-2, 1, 0, -1, 2}, #] &,
         dx1 = ListCorrelate[{0, -1, 1}, #] &,
         dx2 = ListCorrelate[{-1, 1, 0}, #] &,
         Xi = ListCorrelate[{0, 1, 0}, #] &
         },
        Flatten@{
            -Cc*(dh1[#1]^3*dh3[#1] - dh2[#1]^3*dh4[#1]) -
              CcX*(dh1[#1]^2*dx1[#2] - dh2[#1]^2*dx2[#2]) -
              E1*(1 - A1 Xi[#2]), (*ODE for heights*)
            0, (*height is pinned at edge, (1, 1)*)
            Sc*(-CcX2*hi[#1]^2*dh5[#1]*dx2[#2] -
              CcX/C1*hi[#1]*dx2[#2]^2 +
              A1*(hi[#1])^-1*Xi[#2]*(1 - Xi[#2])), (*ODE for composition*)
            0 (*composition ratio is fixed at edge, XN = 0.4*)
            } &[
         Flatten@Join[{h[[3]]}, {h[[2]]}, {h}, {1}, {3 - 3 h[[-1]] + h[[-2]]}],
         Flatten@Join[{X1[[2]]}, {X1}, {XN}]]
        (*this contains the boundary conditions*)
        ]];

    v0 = Join[hInitial, XInitial]; (*Initial conditions*)
    system2d =
      NDSolve[{v'[t] == dv[v[t]], v[0] == v0,
         WhenEvent[
          Min@Table[
             Take[values[[tt]], Length[values[[tt]]]/2],
             {tt, 1, Length[values]}] < 0,
          "StopIntegration"] (*WhenEvent to stop integration at touchdown*)},
        v, {t, 0, 2}][[1, 1, 2]];

    Needs@"DifferentialEquations`InterpolatingFunctionAnatomy`";
    values = InterpolatingFunctionValuesOnGrid@system2d;
    times = Flatten@InterpolatingFunctionGrid@system2d;
    T1 = 0; timesteps = {};
    For[i = 1, i < Length[times], i++,
     If[times[[i]] > T1, {T1 = T1 + TGap,
       timesteps = Append[timesteps, {i, times[[i]]}]}]];
    finaltime = {{-1}}; firstandfinaltime = {{1}, {-1}};
    heights =
      Table[Take[values[[tt]], Length[values[[tt]]]/2],
       {tt, 1, Length[values]}];
    comps =
      Table[Take[values[[tt]], -(Length[values[[tt]]]/2)],
       {tt, 1, Length[values]}];

    Show[Table[
      ListPlot[Table[{i*dx, heights[[ttt[[1]], i + 1]]}, {i, 0, N1}],
       PlotRange -> {{0, 1}, {0, 1.05*a}}, Joined -> True,
       AxesLabel -> {x, h}], {ttt, finaltime}]]
    Show[Table[
      ListPlot[Table[{i*dx, comps[[ttt[[1]], i + 1]]}, {i, 0, N1}],
       PlotRange -> {{0, 1}, {0, 1}}, Joined -> True,
       AxesLabel -> {x, X}], {ttt, finaltime}]]

Thank you for any help you can offer me!

Spell which causes grapple

Let me start by pointing out that I’m new to Pathfinder. I’ve played some D&D and now our DM is migrating our campaign to PF2e.

I’m planning to build a new character, some necromancer of sorts, and I would very much like for it to have a specific spell.

I’m trying to find a spell (up to lvl 6) which grapples the target. I’m aware of the lvl 3 spell Strangling Hair, and it is quite similar to what I want, but I’d like something with an area of effect instead of targeting just one creature. My goal is to have something to lock several weak foes in place, so I wouldn’t mind if it is ineffective against strong enemies.

If AoE is not possible, then I would like at least something stronger than Strangling Hair in terms of damage. I would also like to depict the spell as skeleton arms sprouting from the ground and holding the enemies in place, so it is a plus if the spell is somehow connected to that.

As @HeyICanChan pointed out, Strangling Hair is a Pathfinder spell, and not a Pathfinder 2e one. This wouldn’t be a deal-breaker, since our GM is pretty flexible, but I’d rather have something actually from Pathfinder 2e.

Is it correct or incorrect to say that an input, say $C$, causes the average run-time of an algorithm?

I was going through the text Introduction to Algorithms by Cormen et al., where I came across an excerpt that I felt required a bit of clarification.

As far as I have learned, while the best-case and worst-case time complexities of an algorithm arise from certain concrete inputs (say an input $A$ causes the worst-case run time of an algorithm, or an input $B$ causes the best-case run time, asymptotically), there is no such concrete input that causes the average-case runtime, since the average-case run time of an algorithm is, by its definition, the runtime averaged over all possible inputs. It is something, I suppose, that exists only mathematically.

On the other hand, inputs which are neither best-case nor worst-case lie somewhere between the two extremes, and the algorithm's performance on them is measured by none other than the average-case time complexity, since the average-case complexity lies between the worst-case and best-case complexities just as such an input lies between the two extremes.
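To make my confusion concrete, here is a toy sketch (my own example, not from the text) using linear search, where searching for the element at 1-indexed position $p$ in a list of $n$ elements costs $p$ comparisons:

```python
# Toy example (mine, not from the text): linear search in a list of n
# elements. Searching for the element at 1-indexed position p costs p
# comparisons.
n = 4
costs = [p for p in range(1, n + 1)]  # cost of each possible input
average = sum(costs) / len(costs)     # (n + 1) / 2 = 2.5

# The best case (1 comparison) and worst case (4) are each caused by a
# concrete input, but no single input here costs exactly the average:
assert min(costs) == 1 and max(costs) == n
assert average not in costs
```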

Is it correct or incorrect to say that an input, say $C$, causes the average run-time of an algorithm?

The excerpt from the text which made me ask such a question is as follows:

In the context of the analysis of quicksort:

In the average case, PARTITION produces a mix of “good” and “bad” splits. In a recursion tree for an average-case execution of PARTITION, the good and bad splits are distributed randomly throughout the tree. Suppose, for the sake of intuition, that the good and bad splits alternate levels in the tree, and that the good splits are best-case splits and the bad splits are worst-case splits. Figure (a) shows the splits at two consecutive levels in the recursion tree. At the root of the tree, the cost is $n$ for partitioning, and the subarrays produced have sizes $n-1$ and $0$: the worst case. At the next level, the subarray of size $n-1$ undergoes best-case partitioning into subarrays of size $(n-1)/2 - 1$ and $(n-1)/2$. Let’s assume that the boundary-condition cost is $1$ for the subarray of size $0$.

The combination of the bad split followed by the good split produces three subarrays of sizes $0$, $(n-1)/2 - 1$ and $(n-1)/2$ at a combined partitioning cost of $\Theta(n)+\Theta(n-1)=\Theta(n)$. Certainly, this situation is no worse than that in Figure (b), namely a single level of partitioning that produces two subarrays of size $(n-1)/2$, at a cost of $\Theta(n)$. Yet this latter situation is balanced!
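The excerpt's argument can be checked numerically with a small sketch (my own, not from the text; costs here are plain element counts, with a boundary cost of 1 for subarrays of size at most 1):

```python
# A numeric check (my own sketch) of the excerpt's argument: alternating
# worst-case/best-case splits cost only a constant factor more than
# always-balanced splits, so both stay O(n log n).

def cost_alternating(n):
    if n <= 1:
        return 1  # boundary-condition cost
    m = n - 1
    # bad split at cost n into sizes (n-1, 0); the size-0 part costs 1;
    # then a good split of the size-(n-1) part at cost m into two halves
    return n + 1 + m + cost_alternating((m - 1) // 2) + cost_alternating(m // 2)

def cost_balanced(n):
    if n <= 1:
        return 1
    return n + cost_balanced((n - 1) // 2) + cost_balanced(n // 2)

for n in (10, 100, 1000):
    assert cost_alternating(n) < 3 * cost_balanced(n)  # bounded by a constant
```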

If the Bestow Curse spell causes me to do extra damage, does the target’s death trigger the Necromancy wizard’s Grim Harvest feature?

In D&D 5e using a wizard of the School of Necromancy:

If I cast bestow curse on a monster, then kill it with a crossbow, would it trigger the Grim Harvest feature due to the extra 1d8 damage? Does it matter if the monster had 1 HP?

What about if I cast bestow curse, then hit it with magic missile (and kill it with just 1 missile) – would Grim Harvest trigger off the missile or the curse?

Does filling up the plan cache cause a decrease in the space allocated for the data cache?

SQL Server uses its allocated server memory for different kinds of purposes. Two of them are the plan cache and the data cache, which are used to store execution plans and actual data, respectively.

My question: do these two caches have separate allocated sections in the buffer pool, or, on the contrary, do they share a single section of the buffer pool?

In other words, if the plan cache is filling up, does the space available for the data cache shrink as well?

Problems for which a small change in the statement causes a big change in time complexity

I know that there are several problems for which a small change in the problem statement results in a big change in their (time) complexity, or even in their computability.

An example: The Hamiltonian path problem defined as

Given a graph, determine whether a path that visits each vertex exactly once exists or not.

is NP-Complete while the Eulerian path problem defined as

Given a graph, determine whether a trail that visits every edge exactly once exists or not.

is solvable in linear time with respect to the number of edges and nodes of the graph.
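For reference, the linear-time check for the Eulerian case can be sketched as follows (a sketch of my own in Python, using the standard condition: the edges lie in one connected component and at most two vertices have odd degree):

```python
from collections import defaultdict, deque

def has_eulerian_path(edges):
    # Build adjacency lists: O(V + E).
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # All edge-bearing vertices must lie in one connected component.
    start = next(iter(adj), None)
    if start is None:
        return True  # no edges: trivially has an (empty) trail
    seen = {start}
    queue = deque([start])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    if len(seen) != len(adj):
        return False
    # Euler's condition: exactly 0 or 2 vertices of odd degree.
    odd = sum(1 for nbrs in adj.values() if len(nbrs) % 2 == 1)
    return odd in (0, 2)

# A triangle has an Eulerian trail; the "claw" K_{1,3} does not.
assert has_eulerian_path([(1, 2), (2, 3), (3, 1)])
assert not has_eulerian_path([(0, 1), (0, 2), (0, 3)])
```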

Another example is 2-SAT (polynomial complexity) vs. k-SAT, which is NP-complete for $k \ge 3$, although one could argue that 2-SAT is just a special case of k-SAT.

What do you call this kind of problem, if it even has a name? Can someone provide a list of other examples or some references?

Converting latin1 to utf8mb4 causes question marks

  • The original format of the data is unknown
  • The new table is in utf8mb4_general_ci

If I do CONVERT(BINARY CONVERT(column USING latin1) USING UTF8) as mentioned here, it fixes all the text, but it converts something like © in the original column to ? in the new column.

If it helps to determine what original encoding it was in, the original text renders as e.g. KotaÄići and converts to Kotačići.
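What I suspect (illustrated here in plain Python, outside MySQL) is that the column mixes two encodings: most rows hold UTF-8 bytes mislabelled as latin1, which the BINARY round trip repairs, while a character like © was stored as a genuine single latin1 byte whose value is not valid UTF-8 on its own.

```python
# My illustration (plain Python, outside MySQL) of what I suspect is
# happening: the column mixes two encodings.

# Case 1: UTF-8 bytes mislabelled as latin1 -- the round trip repairs them.
mojibake = "Kotačići".encode("utf-8").decode("latin-1")
assert mojibake.encode("latin-1").decode("utf-8") == "Kotačići"

# Case 2: a genuine latin1 byte (0xA9 = ©) is not valid UTF-8 on its own,
# so reinterpreting it as UTF-8 fails -- which would surface as '?'.
try:
    b"\xa9".decode("utf-8")
    assert False, "0xA9 alone should not decode as UTF-8"
except UnicodeDecodeError:
    pass
```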

Is there a way to both preserve special characters and restore correct utf8 text format?

VirtualBox LAMP Server to Proxmox causes webroot to disappear

Hey guys! First time here.
Hopefully I'm in the right spot, move this thread if not.

I'm having some problems trying to transfer my completely functional website, which is hosted on my VirtualBox VM LAMP server, onto a Proxmox installation.
Let me explain.
I've had my first website up for about two months now. I'm running it on a Ubuntu Server 18.04 LAMP stack in a VirtualBox VM on Windows 10. I have my own GoDaddy signed certs so my site has https. I wanted to transfer this to My Proxmox…
