## How to evaluate a binary expression tree in hlsl without recursion or a stack

I’m currently working on a dual contouring implementation for which I want to create procedural terrain based on layers of noise. Both the terrain generation and the mesh creation via dual contouring run on the GPU in compute shaders.

For configuring the terrain generation I previously changed the specific compute shader’s source code itself to generate different layers of FBM noise and combine them with various CSG operations (e.g. union, intersection, difference) to arrive at the final terrain. But this offers very little flexibility; for example, I cannot change the terrain generation at runtime.

So in order to get more flexibility for configuring the terrain generation, I’ve started implementing a graph tool (similar to ShaderLab) using XNode: red (leaf) nodes are operands, grey (internal) nodes are operators (either binary or unary), and the green (root) node is simply the output node. On the GPU side each operator (internal node) corresponds to a function, e.g. a function that creates the output of a noise node.

The idea is that I can visually create a binary expression tree (with additional unary operators), upload it to the GPU using a `StructuredBuffer<NoiseGraphNode>` where `NoiseGraphNode` is

```hlsl
struct NoiseGraphNode
{
    uint leftNodeIndex;
    uint rightNodeIndex;
    uint nodeType;
    uint dataIndex;     // Index used in conjunction with nodeType
                        // to access a node's data located in a
                        // nodeType-specific StructuredBuffer, e.g.
                        // the noise parameters in case of a noise node.
};
```

and have that graph evaluated by the GPU’s compute shader for generating the noise that represents the final terrain. Normally such a graph would be evaluated recursively, something like:

```
evaluate(node)
{
    if (node has children)
    {
        left_val  = evaluate(node->left);
        right_val = evaluate(node->right);

        // find operation symbol for node and use it as
        // val = left_val operation right_val

        return val;
    }
    else
    {
        return node_value;
    }
}
```

Pseudo-code taken from https://stackoverflow.com/questions/10769174/evaluating-expression-trees.

HLSL doesn’t support recursion though! Another way would be to emulate the recursive implementation using a stack and a while loop (after all, recursion leverages the call stack). But creating a stack structure in HLSL like so

```hlsl
struct NoiseGraphStack
{
    uint buffer[20];
    uint count;

    void Push(uint number)
    {
        buffer[count++] = number;   // Doesn't work because an array reference
                                    // cannot be used as an l-value.
    }

    float Pop()
    {
        return buffer[--count];
    }

    static NoiseGraphStack Create()
    {
        NoiseGraphStack stack;
        stack.count = 0;

        return stack;
    }
};
```

doesn’t work either, because the dynamic indexing forces the while loop to be unrolled, which isn’t possible for a data-dependent loop.

Note: The exact error message referred to in the code above is "array reference cannot be used as an l-value; not natively addressable, forcing loop to unroll."

So is it possible to evaluate a binary expression tree without recursion or a stack? Can I perhaps somehow preprocess the necessary steps for evaluating such a tree on the CPU (where I can use recursion just fine) first and linearize them before I send them to the GPU?
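One way to do exactly this preprocessing: emit the nodes in post-order on the CPU, so that every node appears after both of its children. The GPU can then evaluate the whole tree with a single flat loop, writing each node’s result into a scratch array indexed by node position; no recursion and no push/pop stack are needed. A sketch of both halves (in Python for brevity; the real CPU side would be C#, and the `NO_CHILD` sentinel, node types, and field names here are hypothetical stand-ins for the `NoiseGraphNode` layout above):

```python
NO_CHILD = 0xFFFFFFFF  # sentinel for "no child" (an assumption, not part of the struct above)

class Node:
    def __init__(self, node_type, left=None, right=None, value=0.0):
        self.node_type = node_type      # e.g. "leaf", "add", "mul"
        self.left, self.right = left, right
        self.value = value              # leaf payload, stands in for dataIndex

def linearize(root):
    """CPU side: post-order flattening, so children always precede their parent."""
    order = []
    def visit(n):
        if n is None:
            return NO_CHILD
        li = visit(n.left)
        ri = visit(n.right)
        order.append((n, li, ri))
        return len(order) - 1
    visit(root)
    return order

def evaluate_linearized(order):
    """What the compute shader would do: one flat loop, no stack."""
    values = [0.0] * len(order)         # scratch array, one slot per node
    for i, (n, li, ri) in enumerate(order):
        if li == NO_CHILD and ri == NO_CHILD:
            values[i] = n.value         # leaf: read its payload
        elif n.node_type == "add":
            values[i] = values[li] + values[ri]
        elif n.node_type == "mul":
            values[i] = values[li] * values[ri]
    return values[-1]                   # the root is last in post-order
```

For example, linearizing the tree for `(2 + 3) * 4` and running the flat loop yields `20.0`. On the GPU the scratch array would be a fixed-size local array (the tree depth is bounded by the buffer size), and the loop needs no `[unroll]` because it never indexes by a data-dependent stack pointer that grows and shrinks.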

## Understanding Level.java from Minecraft rd-132211

I decompiled the first Minecraft version (rd-132211) using `jd-gui` and recompiled it. After recompiling, Minecraft ran without problems apart from a screen-size issue, so I’m confident the recompiled source code is fine.

Now, I’m looking at the `Level.java` file in `com.mojang.rubydung.level`, but there is a lot I can’t understand.

Here is the code I understood and reused in my own project:

```java
package com.kg.jopenattack.level;

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class Level {
    private byte[] bytes = new byte[1];

    public void load() throws FileNotFoundException, IOException {
        DataInputStream dis = new DataInputStream(
                new GZIPInputStream(new FileInputStream(new File("level.lvl"))));
        dis.readFully(bytes);
        dis.close();
    }

    public void save() throws FileNotFoundException, IOException {
        DataOutputStream dos = new DataOutputStream(
                new GZIPOutputStream(new FileOutputStream(new File("level.lvl"))));
        dos.write(bytes);
        dos.close();
    }
}
```

To decompile Minecraft rd-132211, enable the historical versions option in the Minecraft launcher settings and create an rd-132211 installation.

## How do I get the data from this binary file?

I have a file which contains the following binary data:

```
$BinaryData = {2,0,0,0,0,0,0,0,20,0,0,0,0,0,0,0,224,123,92,59,148,254,214,1,46,0,0,0,67,0,58,0,92,0,85,0,115,0,101,0,114,0,115,0,92,0,97,0,116,0,102,0,97,0,105,0,92,0,105,0,67,0,108,0,111,0,117,0,100,0,68,0,114,0,105,0,118,0,101,0,92,0,84,0,104,0,105,0,115,0,32,0,105,0,115,0,32,0,97,0,32,0,116,0,101,0,115,0,116,0,46,0,116,0,120,0,116,0,0,0};
```

It is interpreted according to the following picture:

I can look at the file data in the format shown in the picture using:

```
Partition[IntegerString[$BinaryData, 16, 2], 16, 16] // Column
```

Using this I pull out the relevant content as follows:

```
{$FileSize, $DateDeleted, $FilePath} = Rest@TakeList[$BinaryData, {8, 8, 8, All}];
```

Now my question is: how do I convert the content from this binary form to the appropriate form? `$FileSize` is an integer representing bytes, `$DateDeleted` is a number that should be convertible to a date somehow, and `$FilePath` is a Unicode string representing the file path.

I am new to parsing binary so I might be doing things wrong. Feel free to correct me.
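The byte layout appears to be the Windows Recycle Bin `$I` metadata format: an 8-byte header, an 8-byte little-endian file size, an 8-byte Windows FILETIME (100-nanosecond ticks since 1601-01-01 UTC), a 4-byte character count, then the UTF-16LE path. Treat that interpretation as an assumption; the decoding steps are sketched in Python for concreteness (Mathematica has the same building blocks, e.g. `FromDigits` on reversed bytes, `FromUnixTime`, and `FromCharacterCode`):

```python
from datetime import datetime, timedelta, timezone

# The byte list from the question, reassembled. The path portion is written
# as its decoded string, which re-encodes to exactly the listed bytes.
raw = bytes([2, 0, 0, 0, 0, 0, 0, 0,              # header / format version
             20, 0, 0, 0, 0, 0, 0, 0,             # file size
             224, 123, 92, 59, 148, 254, 214, 1,  # Windows FILETIME
             46, 0, 0, 0])                        # path length in UTF-16 units
raw += "C:\\Users\\atfai\\iCloudDrive\\This is a test.txt\x00".encode("utf-16-le")

file_size = int.from_bytes(raw[8:16], "little")   # size in bytes
filetime  = int.from_bytes(raw[16:24], "little")  # 100-ns ticks since 1601-01-01 UTC
n_chars   = int.from_bytes(raw[24:28], "little")  # 46 code units, incl. final NUL
path      = raw[28:28 + 2 * n_chars].decode("utf-16-le").rstrip("\x00")

# Convert FILETIME to a timezone-aware datetime.
date_deleted = (datetime(1601, 1, 1, tzinfo=timezone.utc)
                + timedelta(microseconds=filetime // 10))
```

Here `file_size` comes out as 20 bytes, `path` as `C:\Users\atfai\iCloudDrive\This is a test.txt`, and `date_deleted` as a timestamp in February 2021.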

## Fetching binary content from the database vs. fetching it by link from a storage service

For an app (web + phone) there are two options:

1. Image binaries in the database. The server replies to the app’s HTTP request with the images as base64.
2. Images in a storage service like Amazon S3, Azure Blob Storage, or a self-hosted one, with the image links in the database. The server handles app HTTP requests by sending back only the links to the images, and the app fetches the images from storage by their link.

Which option above is the standard practice? Which one has less trouble down the road?

## Does binlog_group_commit_sync_delay affect existing binary logging in MySQL

We want to tune the setting binlog_group_commit_sync_delay to increase replication performance. Our slave is currently two days behind. Does changing this setting affect the current backlog, or will we see the impact only after two days?

## Segmenting a binary image into three parts

I am trying to segment a binary image into three parts: head, body, and tail. The binary image is a TIFF stack of a moving fruit fly larva. I was able to do this initially by removing points within a certain distance of the centroid:
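A minimal NumPy sketch of that centroid-distance step (a reconstruction under the assumption that each frame is a 2D boolean mask, not the author’s actual code):

```python
import numpy as np

def split_extremities(mask, radius):
    """Remove all pixels within `radius` of the centroid, leaving the
    far-away pixels (head and tail candidates) as separate blobs."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()              # centroid of the larva
    far = np.hypot(ys - cy, xs - cx) > radius  # keep only distant pixels
    out = np.zeros_like(mask, dtype=bool)
    out[ys[far], xs[far]] = True
    return out
```

This reproduces the failure mode described next: if the head curls back within `radius` of the centroid, its pixels are removed along with the body, which is what motivates looking for a shape-based alternative.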

However, this broke down if the larva’s head moved too close to the centroid, because the head points would then be removed:

I have been looking for creative solutions to this problem. I have tried fitting it to a curved ellipsoid to get the vertex points that make up the tail and the head, but I have not been very successful! I was curious whether anyone has insight into this problem or a suggestion for a creative solution. Thank you! 🙂

Here are the raw binary images:

1)

2)

## Why does the formula floor((i-1)/2) find the parent node in a binary heap?

I learned that when you have a binary heap represented as a vector / list / array with indices [0, 1, 2, 3, 4, 5, 6, 7, 8, …], the index of the parent of the element at index i can be found with parent index = floor((i - 1) / 2).

I have tried to explain why this works. Can anyone help me verify my reasoning?

I took reference from "Why does the formula 2n + 1 find the child node in a binary heap?", thanks to @Giulio.
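The parent formula can be checked mechanically from the child formulas: the left child of node i sits at 2i + 1 and the right child at 2i + 2, and floor((j - 1) / 2) inverts both cases. A quick Python check:

```python
# Left child of i:  j = 2*i + 1, so (j - 1) / 2 = i exactly.
# Right child of i: j = 2*i + 2, so (j - 1) / 2 = i + 1/2,
# and taking the floor discards the extra 1/2.
# Hence floor((j - 1) / 2) recovers i for every child j.
for i in range(10_000):
    left, right = 2 * i + 1, 2 * i + 2
    assert (left - 1) // 2 == i    # // is floor division, i.e. floor((j-1)/2)
    assert (right - 1) // 2 == i
```

The asymmetry between the two cases is exactly why the `-1` and the floor are both needed: without the `-1`, the left child j = 2i + 1 would map to floor(j / 2) = i as well, but j = 2i + 2 would map to i + 1.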

## In what cases is solving a Binary Linear Program easy (i.e. **P** complexity)? I’m looking at scheduling problems in particular

In what cases is solving a Binary Linear Program easy (i.e. in **P**)?

The reason I’m asking is to understand if I can reformulate a scheduling problem I’m currently working on in such a way to guarantee finding the global optimum within reasonable time, so any advice in that direction is most welcome.

I was under the impression that, when solving a scheduling problem where a variable value of 1 represents that a particular (timeslot × person) pair is part of the schedule, a result containing non-integers means that multiple valid schedules exist and the result is a linear combination of them. To obtain a valid integer solution, one simply re-runs the algorithm from the current solution with an additional constraint fixing one of the real-valued variables to either 0 or 1.

Am I mistaken in this understanding? Is there a particular subset of (scheduling) problems where this would be a valid strategy? Any papers / textbook chapter suggestions are most welcome also.
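One classical easy case: if the constraint matrix is totally unimodular (every square submatrix has determinant -1, 0, or 1) and the right-hand sides are integral, then every vertex of the LP relaxation is integral, so the binary program is solved exactly by ordinary LP in polynomial time. Interval ("consecutive ones") matrices, which arise when each constraint covers a contiguous block of timeslots, have this property. A brute-force illustration (exponential in the matrix size, so only viable for tiny examples; the matrix shown is hypothetical):

```python
from itertools import combinations

def det(m):
    """Integer determinant by Laplace expansion along the first row
    (fine for the tiny submatrices checked here)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def is_totally_unimodular_bruteforce(a):
    """Check every square submatrix; exponential, illustration only."""
    rows, cols = len(a), len(a[0])
    for k in range(1, min(rows, cols) + 1):
        for ri in combinations(range(rows), k):
            for ci in combinations(range(cols), k):
                sub = [[a[r][c] for c in ci] for r in ri]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

# Each row covers a contiguous block of timeslots, e.g.
# "person p occupies slots 2-4" -- the consecutive-ones property.
interval_matrix = [
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
]
```

By contrast, a wrap-around pattern such as `[[1, 1, 0], [0, 1, 1], [1, 0, 1]]` has determinant 2, so integrality of the relaxation is no longer guaranteed; whether your scheduling constraints can be written in consecutive-ones form is the thing to check.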

## Choosing an AI method to recreate a given binary 2D image

If the title was not very clear: I want a method that takes an input image like this,

```
[[0, 0, 0, 0],
 [1, 1, 1, 0],
 [1, 1, 1, 0],
 [0, 1, 1, 0]]
```

and outputs the 2D coordinates of the `1`s of the image (so that I can recreate the image).

I want the output to be sequential because I need to reconstruct the input image pixel by pixel, and there are some conditions on the construction order (e.g. you cannot place a `1` somewhere when it is surrounded by `1`s).

The image can change, and so can the number of `1`s in it.

1. What is an appropriate AI method to apply in this case?
2. How should I feed the image to the network? (Will flattening it to 1D affect my need for an output order?)
3. Should I get the output coordinates one by one or as an ordered 2xN matrix?
4. If one by one, should I feed the same image for each output or the image without the `1`s already filled?

EDIT: The application is a robot creating the image using some kind of building blocks, placing one block after the other.

I have tried applying "NeuroEvolution of Augmenting Topologies" to this using neat-python, but was unsuccessful. I am currently looking at RNNs, but I am not sure whether that is the best choice either.
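For the ordering constraint in the example (never place a `1` whose four neighbours are already filled), there is also a simple non-learning baseline worth knowing: peel the image in reverse, repeatedly deleting a `1` that still has an empty neighbour, then emit the deletions in reverse as the build order. A sketch (a hypothetical construction, not one of the AI methods asked about):

```python
def build_order(img):
    """Return the coordinates of the 1s in an order such that every block,
    at the moment it is placed, still has at least one empty 4-neighbour
    (cells outside the grid count as empty)."""
    h, w = len(img), len(img[0])
    remaining = {(r, c) for r in range(h) for c in range(w) if img[r][c] == 1}
    removed = []
    while remaining:
        # Find any remaining 1 that currently has an empty side.
        candidate = next(
            ((r, c) for (r, c) in remaining
             if any(n not in remaining
                    for n in [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])),
            None)
        if candidate is None:
            raise ValueError("stuck: every remaining 1 is fully enclosed")
        remaining.remove(candidate)
        removed.append(candidate)
    return removed[::-1]   # deletions reversed = a valid placement order
```

A block deleted while it had an empty neighbour will, in the reversed order, be placed while that same neighbour is still empty (it is either a permanent `0` or a block placed later), so the constraint holds at every step. Such an order could also serve as training targets for a sequence model.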

## Binary string satisfying several constraints

I’m trying to solve this problem, but without success.

Problem: You’re given a binary string where some of the bits are replaced by `?`. You’re also given several intervals [$l_i$, $r_i$] ($l_i < r_i$). Find a binary string S, with each `?` replaced by `0` or `1`, such that for each interval [$l$, $r$], S[$l..r$] includes both `0` and `1`.

Example:

Given the string `00?11` and the interval [2, 3], the string `00111` satisfies the requirements, as S[2..3] = `01` includes both 0 and 1.

However, the string `00?11` and the intervals [2, 3] and [3, 4] cannot be solved, as the `?` would have to be both `0` and `1`.

The bounds are |S|, Q <= 1e5, where Q is the number of intervals.

I tried a greedy approach, but it gives the wrong answer for this case: `0??0` with [1, 3], [2, 3] and [3, 4]. The greedy algorithm tries `0??0` -> `01?0` -> `0100` -> fail, yet a solution exists (`0010`).

I tried removing completely overlapping intervals (such as [1, 3] when [2, 3] is present), but I found other cases where that fails. What would be a correct solution?

Thanks.
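Whatever algorithm turns out to be correct, a brute-force reference solver (exponential in the number of `?`s, so only usable on tiny inputs) is handy for stress-testing candidate greedy or DP solutions against cases like the ones above:

```python
from itertools import product

def brute_force(s, intervals):
    """Try every assignment of the ?s; return a valid string or None.
    Intervals are 1-based and inclusive, as in the problem statement."""
    holes = [i for i, ch in enumerate(s) if ch == '?']
    for bits in product('01', repeat=len(holes)):
        t = list(s)
        for i, b in zip(holes, bits):
            t[i] = b
        t = ''.join(t)
        # Every interval [l, r] must contain both a 0 and a 1.
        if all('0' in t[l - 1:r] and '1' in t[l - 1:r] for l, r in intervals):
            return t
    return None
```

For example, it returns `00111` for `00?11` with [2, 3], `None` for `00?11` with [2, 3] and [3, 4], and finds `0010` for the `0??0` case above, so it reproduces all three examples from the question.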