## System of differential equations on a non-uniform grid

I need to solve the following set of ODEs numerically,

$$\frac{dy}{dx}=Ay+Bz, \ \frac{dz}{dx}=Cy+Dz$$

where the independent variable $$x$$ is a non-uniformly spaced array of points, and the coefficients $$A$$, $$B$$, $$C$$ and $$D$$ are arrays defined on the same points.

What is the best way to achieve this, given that NDSolve normally specifies the independent variable in $$\{x, x_{min}, x_{max}\}$$ format?

## OpenGL GLSL ES 3.10 – Referencing a uniform variable causes the vertex shader to not draw anything

I have a project with a default shader that just draws models and textures. Recently I decided to add a second shader that does a fancy effect and is used only on some of the objects drawn.

When I compile the project for Linux or Windows, everything works as expected. When I compile it for Android, the new shader fails on specific devices, while on other devices I tried it works fine.

Below is my default vertex shader, written specifically for Android devices; it works on all devices and draws everything without any editing or effect. As far as I can tell, the fragment shaders work, so I’ll omit them.

    #version 310 es

    in vec4 position;
    in vec3 colour;
    in vec2 texCoord;

    uniform mat4 matrix;
    uniform mat4 matrixProjection;

    out vec2 outTexCoord;
    out vec4 outColour;

    void main() {
        gl_Position = matrixProjection * matrix * position;
        outTexCoord = texCoord;
        outColour = vec4(colour.rgb, 1);
    }

I hope this looks fairly straightforward. matrixProjection is the projection matrix, and matrix is the model-view matrix. They both work as expected and I’m able to render a whole scene without issue.

Now here is a simplified version of my new shader:

    #version 310 es

    in vec4 position;
    in vec3 colour;
    in vec2 texCoord;

    uniform mat4 matrix;
    uniform mat4 matrixProjection;
    uniform float animationCurrent;

    out vec2 outTexCoord;
    out vec4 outColour;

    void main() {
        gl_Position = matrixProjection * matrix * position;

        if (animationCurrent > 0.0) {
            gl_Position.y += 5.0;
        }

        outColour = vec4(colour.rgb, 1.0);
        outTexCoord = texCoord;
    }

The only differences in the new shader are the new uniform animationCurrent and an extra if statement that modifies gl_Position.y of some vertices. On some devices, any object using this shader is not drawn at all.

## What I’ve tried

If I remove the entire if statement from the new shader, everything works and objects are displayed as-is. If I replace the if statement with if (true), it still works, but all vertices of objects drawn with it are displayed slightly higher. If I replace it with if (false), it also works as expected.

So for some reason, just referencing animationCurrent causes the object to not be drawn.

I also tried replacing the condition with if (matrix[0][0] > 0.0), and it still draws the object, so it looks like something is specifically wrong with animationCurrent. However, I also tried adding another mat4 uniform and setting its value the same way as I do matrix, and that wouldn’t draw the object either.

This should mean that the value of animationCurrent is not relevant, and the fact that it’s a uniform float doesn’t matter either.

## Hardware

The problem occurs on an Android phone with this hardware:

    Device: Moto E (4) Plus - 7.1.1
    Vendor graphic card: ARM
    Renderer: Mali-T720
    Version OpenGL: OpenGL ES 3.1 v1.r12p1-01alp0.62f282720426ab7712f1c6b996a6dc82
    Version GLSL: OpenGL ES GLSL ES 3.10

And this Android tablet with similar hardware:

    Device: Kindle Fire 8
    Vendor graphic card: ARM
    Renderer: Mali-T720
    Version GL: OpenGL ES 3.1 v1.r26p0-01rel0.526d936ea9da20486773a9aaceecd920
    Version GLSL: OpenGL ES GLSL ES 3.10

This is an Android tablet where everything works as expected:

    Device: Lenovo TB-X505F - 10
    Vendor graphic card: Qualcomm
    Renderer: Adreno (TM) 504
    Version GL: OpenGL ES 3.2 V@415.0 (GIT@f345350, I0760943699, 1580221225) (Date:01/28/20)
    Version GLSL: OpenGL ES GLSL ES 3.20

And here is a slightly older device that works. I’ve modified the shader a bit to support an older GLSL version, but the idea is the same:

    Device: Kindle Fire 7
    Vendor graphic card: ARM
    Renderer: Mali-450 MP
    Version GL: OpenGL ES 2.0
    Version GLSL: OpenGL ES GLSL ES 1.00

## Question

My primary goal is to understand what is causing this. Have I missed something very obvious? Is this an edge-case bug related to the hardware?

I’m still learning how to support different devices with different versions of GLSL, so it’s very likely I’ve missed something.

If you have any information, let me know. I’m willing to try a few things on different devices to learn more about this issue.

## Problems arise when using a VBO compared to a uniform for mat4

I’ve been learning LWJGL. Originally I used instancing with a uniform array to send all the mat4 data for the individual positions to the shader. I’m trying to switch to a VBO for this, since I had a batch-size limit of just 128, but I’ve run into a problem sending the data to the shader. I’ve spent a few days looking over the instancing sections of both https://learnopengl.com/ and https://lwjglgamedev.gitbooks.io/ but haven’t been able to find anything that could be the cause of this.

Nothing was changed aside from the code presented here, the creation of any needed variables, and the conversion from uniform to VBO in the shader itself.

This is the original code using the uniform, which worked just fine. First is the code that actually makes the draw calls; the second part is what passes the mat4s to the shader:

    // Prepare shader
    GL40.glEnableVertexAttribArray(0);
    GL40.glBindBuffer(GL40.GL_ARRAY_BUFFER, vertexID);
    GL40.glVertexAttribPointer(0, 3, GL40.GL_FLOAT, false, 0, 0);

    GL40.glEnableVertexAttribArray(1);
    GL40.glBindBuffer(GL40.GL_ARRAY_BUFFER, textureID);
    GL40.glVertexAttribPointer(1, 2, GL40.GL_FLOAT, false, 0, 0);

    GL40.glEnableVertexAttribArray(2);
    GL40.glBindBuffer(GL40.GL_ARRAY_BUFFER, normalID);
    GL40.glVertexAttribPointer(2, 3, GL40.GL_FLOAT, false, 0, 0);

    GL40.glEnableVertexAttribArray(3);
    GL40.glBindBuffer(GL40.GL_ARRAY_BUFFER, shadeID);
    GL40.glBufferData(GL40.GL_ARRAY_BUFFER, createFloatBuffer(shade.toArray(new Vector4f[shade.size()])), GL40.GL_STATIC_DRAW);
    GL40.glVertexAttribPointer(3, 4, GL40.GL_FLOAT, false, 0, 0);
    GL40.glVertexAttribDivisor(3, 1);

    GL40.glBindBuffer(GL40.GL_ELEMENT_ARRAY_BUFFER, indexID);
    for (int i = 0; i < amount / 128 + 1; i++) {
        int draws = Math.min(128, amount - i * 128);
        for (int j = 0; j < draws; j++) {
            int loc = i * 128 + j;
            shader.setUniform("model[" + j + "]", Matrix4f.transform(position.get(loc), rotation.get(loc), scale.get(loc)));
        }
        GL40.glDrawElementsInstanced(GL40.GL_TRIANGLES, drawCount, GL11.GL_UNSIGNED_INT, 0, draws);
    }

    GL40.glBindBuffer(GL40.GL_ELEMENT_ARRAY_BUFFER, 0);
    GL40.glBindBuffer(GL40.GL_ARRAY_BUFFER, 0);

    GL40.glDisableVertexAttribArray(0);
    GL40.glDisableVertexAttribArray(1);
    GL40.glDisableVertexAttribArray(2);
    GL40.glDisableVertexAttribArray(3);

Setting the mat4 uniform:

    public void setUniform(String name, Matrix4f value) {
        int location = GL20.glGetUniformLocation(programID, name);
        if (location != -1) {
            try (MemoryStack stack = MemoryStack.stackPush()) {
                final FloatBuffer matrixBuffer = stack.mallocFloat(16);
                matrixBuffer.put(value.getAll()).flip();
                GL20.glUniformMatrix4fv(location, true, matrixBuffer);
            }
        }
    }

And this is the code that’s been causing issues:

    // Prepare shader
    GL40.glEnableVertexAttribArray(0);
    GL40.glBindBuffer(GL40.GL_ARRAY_BUFFER, vertexID);
    GL40.glVertexAttribPointer(0, 3, GL40.GL_FLOAT, false, 0, 0);

    GL40.glEnableVertexAttribArray(1);
    GL40.glBindBuffer(GL40.GL_ARRAY_BUFFER, textureID);
    GL40.glVertexAttribPointer(1, 2, GL40.GL_FLOAT, false, 0, 0);

    GL40.glEnableVertexAttribArray(2);
    GL40.glBindBuffer(GL40.GL_ARRAY_BUFFER, normalID);
    GL40.glVertexAttribPointer(2, 3, GL40.GL_FLOAT, false, 0, 0);

    GL40.glEnableVertexAttribArray(3);
    GL40.glBindBuffer(GL40.GL_ARRAY_BUFFER, shadeID);
    GL40.glBufferData(GL40.GL_ARRAY_BUFFER, createFloatBuffer(shade.toArray(new Vector4f[shade.size()])), GL40.GL_STATIC_DRAW);
    GL40.glVertexAttribPointer(3, 4, GL40.GL_FLOAT, false, 0, 0);
    GL40.glVertexAttribDivisor(3, 1);

    GL40.glBindBuffer(GL40.GL_ARRAY_BUFFER, modelID);
    GL40.glBufferData(GL40.GL_ARRAY_BUFFER, createFloatBuffer(position.toArray(new Vector3f[position.size()]), rotation.toArray(new Vector3f[rotation.size()]), scale.toArray(new Vector3f[scale.size()])), GL40.GL_STATIC_DRAW);

    int size = 4;
    for (int i = 0; i < 4; i++) {
        GL40.glEnableVertexAttribArray(4 + i);
        GL40.glVertexAttribPointer(4 + i, 4, GL40.GL_FLOAT, false, 4 * size, i * size);
        GL40.glVertexAttribDivisor(4 + i, 1);
    }

    GL40.glBindBuffer(GL40.GL_ELEMENT_ARRAY_BUFFER, indexID);
    GL40.glDrawElementsInstanced(GL40.GL_TRIANGLES, drawCount, GL11.GL_UNSIGNED_INT, 0, amount);

    GL40.glBindBuffer(GL40.GL_ELEMENT_ARRAY_BUFFER, 0);
    GL40.glBindBuffer(GL40.GL_ARRAY_BUFFER, 0);

    GL40.glDisableVertexAttribArray(0);
    GL40.glDisableVertexAttribArray(1);
    GL40.glDisableVertexAttribArray(2);
    GL40.glDisableVertexAttribArray(3);
    GL40.glDisableVertexAttribArray(4);
    GL40.glDisableVertexAttribArray(5);
    GL40.glDisableVertexAttribArray(6);
    GL40.glDisableVertexAttribArray(7);

Storing the mat4s into the VBO:

    public static FloatBuffer createFloatBuffer(Vector3f[] positions, Vector3f[] rotation, Vector3f[] scales) {
        FloatBuffer buffer = BufferUtils.createFloatBuffer(positions.length * 16);
        for (int i = 0; i < positions.length; i++) {
            buffer.put(Matrix4f.transform(positions[i], rotation[i], scales[i]).getAll());
        }
        buffer.flip();
        return buffer;
    }

I’ve messed around with this a lot over the past couple of days, which leads me to believe that it’s not a problem with the data being sent, but rather with how the data itself is formatted. That doesn’t make much sense though, since both examples use Matrix4f.transform() to get the needed data. I tried transposing the matrices before adding them to the buffer, but to no avail.

And just for reference, here is an example of what Matrix4f.getAll() produces:

    [0.5, 0.0, 0.0, -40.0,
     0.0, 0.5, 0.0,  12.0,
     0.0, 0.0, 0.5,  88.0,
     0.0, 0.0, 0.0,   1.0]

Here are two screenshots of what it’s supposed to look like and the results I’m getting from using the VBO approach:

## Sampling from the uniform distribution

Is there an efficient classical algorithm that generates samples from the uniform distribution? Would such an algorithm exist for any distribution that has an analytic description?
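On the second part of the question, the classic way to turn uniform samples into samples from another analytically described distribution is inverse-transform sampling: if the CDF $$F$$ is invertible, then $$F^{-1}(U)$$ with $$U \sim \mathrm{Uniform}(0,1)$$ has CDF $$F$$. A small sketch (the exponential distribution is just an illustrative choice):

```python
import math
import random

def sample_exponential(lam, rng=random):
    """Inverse-transform sampling: for Exponential(lam),
    F(x) = 1 - exp(-lam * x), so F^{-1}(u) = -ln(1 - u) / lam."""
    u = rng.random()              # the uniform building block
    return -math.log(1.0 - u) / lam

# Exponential(2) has mean 1/2; a large sample mean should be close to it.
rng = random.Random(0)
mean = sum(sample_exponential(2.0, rng) for _ in range(100_000)) / 100_000
```

Note this only covers distributions whose CDF can be inverted analytically (or numerically); "analytic description" alone does not guarantee an efficient sampler.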

## What is Simple Uniform Hashing, and why searching a hashtable has complexity Θ(n) in the worst case

Can anyone explain nicely what simple uniform hashing is, and why searching a hash table has complexity Θ(n) in the worst case if we don’t have uniform hashing (where n is the number of elements in the hash table)?
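To illustrate where the Θ(n) worst case comes from: with separate chaining, a bad (or adversarially chosen) hash function can send all n keys to the same slot, so a search degenerates into a linear scan of one chain of length n. A minimal sketch (the class and names are mine):

```python
class ChainedHashTable:
    """Hash table with separate chaining; the hash function is pluggable."""

    def __init__(self, m, h):
        self.slots = [[] for _ in range(m)]   # m chains
        self.h = h

    def insert(self, key, value):
        self.slots[self.h(key) % len(self.slots)].append((key, value))

    def search(self, key):
        """Return (value, number of chain nodes inspected)."""
        probes = 0
        for k, v in self.slots[self.h(key) % len(self.slots)]:
            probes += 1
            if k == key:
                return v, probes
        return None, probes

# Worst case: a constant hash function puts all n keys in one slot,
# so searching for the last-inserted key scans the entire chain.
n = 1000
table = ChainedHashTable(100, lambda k: 0)
for k in range(n):
    table.insert(k, k)
value, probes = table.search(n - 1)   # probes == n here: Θ(n)
```

Simple uniform hashing is precisely the assumption that rules this out on average: each key is equally likely to land in any of the m slots, independently, giving expected chain length n/m.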

## Why is the Greedy Algorithm for Online Scheduling on uniform machines NOT competitive?

Consider the online scheduling problem with m machines of different speeds.

Each instance $$\sigma$$ consists of a sequence of jobs with different sizes $$j_i \in \mathbb{R}^+$$. At each step in time, the algorithm learns one new job of $$\sigma$$, which must immediately be assigned to one machine. The goal is to minimize the makespan by the end of the online sequence $$\sigma$$.

I want to find an instance on which the Greedy Algorithm, which assigns each new job to the machine that would finish it first, is only $$\Theta(\log m)$$-competitive.

Any ideas? I can’t find any articles regarding my problem.
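To pin down the algorithm in question, the greedy rule ("assign each new job to the machine that would finish it first") can be stated in a few lines; the speeds and job sequence below are placeholders of my own:

```python
def greedy_schedule(speeds, jobs):
    """Online greedy for machines with speeds: each arriving job goes to
    the machine minimising its completion time, load + job / speed.
    Returns the final load (completion time) of each machine."""
    loads = [0.0] * len(speeds)
    for job in jobs:
        i = min(range(len(speeds)), key=lambda i: loads[i] + job / speeds[i])
        loads[i] += job / speeds[i]
    return loads

# Example run: one slow and one fast machine (placeholder numbers).
loads = greedy_schedule([1.0, 2.0], [4.0, 4.0, 2.0])
makespan = max(loads)
```

Lower-bound instances for such rules are typically built adversarially, feeding jobs that lure greedy into loading the fast machines early; having the rule executable makes it easy to test candidate sequences.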

## Make image boundaries uniform

I have the following image (fig 1) with the points extracted from the Geomagic software (please see the point list at the attached link).

https://pastebin.com/K51N8Kfa

I would like to know how I can remove the indented boundaries of the shape. This should be done by removing the points causing the indentation from the list (fig 2). I need the edges to be uniform so that I can later normalize the height of the image.

## How can I generate a random sample of unique vertex pairings from an undirected graph, with uniform probability?

I’m working on a research project where I have to pair up entities together and analyze outcomes. Normally, without constraints on how the entities can be paired, I could easily select one random entity pair, remove it from the pool, then randomly select the next entity pair.

That would be like creating a random sample of vertex pairs from a complete graph.

However, this time around the undirected graph is:

• incomplete,
• possibly disconnected.

I thought about using the above method but realized that my sample would not have a uniform probability of being chosen, as probabilities of pairings are no longer independent of each other due to uneven vertex degrees.

I’m banging my head against the wall over this. It’s best for the research that I generate a sample with uniform probability. Given that my graph has around n = 5000 vertices, is there an algorithm that I could use such that the resulting sample fulfils these conditions?

1. There are no duplicates in the sample (every vertex in the graph is paired at most once).
2. The remaining vertices that are not in the sample do not have an edge with each other (they are unpaired and cannot be paired).
3. Any sample that meets the above two criteria should be chosen with uniform probability compared to any other sample that fulfils the above two points.

There appears to be some work done for bipartite graphs, as seen in this Stack Overflow discussion here. The algorithms described obtain a near-uniform sample, but they don’t seem to apply to this case.
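For what it’s worth, conditions 1 and 2 together describe exactly the maximal matchings of the graph, and for small graphs an exactly uniform sample can be obtained by brute-force enumeration. This clearly does not scale to n = 5000, but it makes the target distribution concrete and gives a ground truth to test faster samplers against; a sketch (all function names are mine):

```python
import random

def maximal_matchings(edges):
    """Enumerate all maximal matchings of an undirected graph.

    A maximal matching is a set of vertex-disjoint edges to which no
    further edge can be added -- equivalently, the unmatched vertices
    form an independent set (conditions 1 and 2 from the question).
    """
    edge_sets = [frozenset(e) for e in edges]
    results = set()

    def extend(matching, used, remaining):
        # Edges still compatible with the current partial matching.
        candidates = [e for e in remaining if not (e & used)]
        if not candidates:
            results.add(frozenset(matching))   # nothing fits: maximal
            return
        for e in candidates:                   # not maximal yet: must add an edge
            extend(matching | {e}, used | e, candidates)

    extend(frozenset(), frozenset(), edge_sets)
    # Normalise each matching to a sorted list of (u, v) tuples.
    return sorted(sorted(tuple(sorted(e)) for e in m) for m in results)

def sample_matching(edges, rng=random):
    """Exactly uniform over all maximal matchings, by full enumeration."""
    return rng.choice(maximal_matchings(edges))
```

Each distinct maximal matching appears exactly once in the enumerated list, so `rng.choice` over it is uniform by construction.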

## Uniform Hashing. Understanding space occupancy and choice of functions

I’m having trouble understanding two things in some notes about uniform hashing. Here’s the copy-pasted part of the notes:

Let us first argue by a counting argument why the uniformity property, which we required of good hash functions, is computationally hard to guarantee. Recall that we are interested in hash functions which map keys in $$U$$ to integers in $$\{0, 1, …, m-1\}$$. The total number of such hash functions is $$m^{|U|}$$, given that each key among the $$|U|$$ ones can be mapped into $$m$$ slots of the hash table. In order to guarantee uniform distribution of the keys and independence among them, **our hash function should be any one of those ones**. But, in this case, its representation would need $$\Omega(\log_2 m^{|U|}) = \Omega(|U| \log_2 m)$$ bits, which is really too much in terms of space occupancy and in terms of computing time (i.e. it would take at least $$\Omega(\frac{|U|\log_2 m}{\log_2 |U|})$$ time just to read the hash encoding).

The part I put in bold is the first thing confusing me.

Why should the function be any one of those? Shouldn’t you avoid a good part of them, like the ones sending every element of the universe $$U$$ to the same number and thus not distributing the elements at all?

The second thing is the last "$$\Omega$$". Why would it take $$\Omega(\frac{|U|\log_2 m}{\log_2 |U|})$$ time just to read the hash encoding?

The numerator is the number of bits needed to index every hash function in the space of such functions, and the denominator is the size in bits of a key. Why does this ratio give a lower bound on the time needed to read the encoding? And what hash encoding?
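To get a feel for the sizes in the counting argument, here is a small numeric illustration (the universe and table sizes are made up for the example): naming one arbitrary function out of the $$m^{|U|}$$ possible ones takes $$\log_2 m^{|U|} = |U|\log_2 m$$ bits, and reading that description in machine words of one key size ($$\log_2|U|$$ bits) takes the stated number of steps.

```python
import math

U = 2 ** 32      # hypothetical universe: all 32-bit keys
m = 1024         # hypothetical number of slots

# There are m**U possible hash functions; naming one arbitrary function
# needs log2(m**U) = |U| * log2(m) bits.
bits = U * math.log2(m)

# A single key takes log2|U| bits, so reading the function's description
# in key-sized words takes bits / log2|U| steps -- the Omega bound.
words = bits / math.log2(U)

gib = bits / 8 / 2 ** 30   # the description alone is gigabytes of data
```

Even for this modest universe, the description of one truly arbitrary hash function is about 5 GiB, which is the point of the notes: you cannot afford to store or even read such a function.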

## Asymmetric Transition Probability Matrix with uniform stationary distribution

I am solving a discrete Markov chain problem. For this I need a Markov chain whose stationary distribution is uniform (or near-uniform) and whose transition probability matrix is asymmetric.

[Markov chains like Metropolis–Hastings have a uniform stationary distribution, but the transition probability matrix is symmetric.]
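One standard construction worth noting: any doubly stochastic matrix (all rows and all columns sum to 1) has the uniform distribution as a stationary distribution, and mixing unequal weights of powers of a cyclic permutation gives an asymmetric one. A small sketch with arbitrary example weights:

```python
# Any doubly stochastic matrix has the uniform distribution as a
# stationary distribution. Mixing unequal weights of I, C, C^2 for a
# cyclic permutation C yields an asymmetric such matrix.
n = 3
a, b, c = 0.5, 0.3, 0.2          # weights for I, C, C^2; b != c => asymmetric
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    P[i][i] += a                  # stay in place
    P[i][(i + 1) % n] += b        # one step around the cycle
    P[i][(i + 2) % n] += c        # two steps around the cycle

# Power iteration from a point mass converges to the stationary distribution.
pi = [1.0, 0.0, 0.0]
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
# pi is now approximately [1/3, 1/3, 1/3]
```

The same idea scales to any n, and the weights can be tuned to trade off asymmetry against mixing speed.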