City Building Program

I want to build a city-building game that is almost entirely back end, with no visuals. I want to focus on complexity, on different modes of transportation, and on how those affect people's lives: lots of statistics on how much people travel, what kind of access and resources they have, and their quality of life. I also want it to be able to generate cities from smaller-scale models, using templates, and fractal templates of the city that expand into a much larger city. Where can I start?
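
To give an idea of what I mean by templates, here is a rough Python sketch of the kind of fractal expansion I'm picturing (the zone labels and the rule that roads stay roads are placeholders I made up, not a design I'm committed to):

# Toy sketch: a district template is a small grid of zone labels, and one
# expansion step replaces every cell with a copy of the template, so the
# street pattern repeats at each larger scale.  Labels are placeholders.
ROAD, RES, COM = "-", "R", "C"

TEMPLATE = [
    [ROAD, COM, ROAD],
    [RES, ROAD, RES],
    [ROAD, RES, ROAD],
]

def expand(grid, template):
    """Replace each cell of grid with a template-sized block: road cells
    stay solid road, every other cell is refined into the template."""
    size = len(template)
    out = []
    for row in grid:
        for t_row in range(size):
            new_row = []
            for cell in row:
                if cell == ROAD:
                    new_row.extend([ROAD] * size)
                else:
                    new_row.extend(template[t_row])
            out.append(new_row)
    return out

if __name__ == "__main__":
    city = TEMPLATE
    for _ in range(2):  # 3x3 -> 9x9 -> 27x27
        city = expand(city, TEMPLATE)
    print(len(city), "x", len(city[0]), "cells,",
          sum(row.count(RES) for row in city), "residential")

Statistics (travel, access, quality of life) would then be computed over a structure like this rather than drawn, which is the back-end focus I have in mind.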

Is there a program to convert OpenGL 2.0 to OpenGL ES 2.0?

I have a 3D game library that uses OpenGL 2.0 on PC, and I need to convert it to OpenGL ES 2.0 so it can be compiled for Android. Because the library is huge, this can't be done by hand, so I was wondering: is there some kind of software that automatically converts desktop OpenGL source code to OpenGL ES, some wrapper, or maybe a layer running on Android that translates desktop OpenGL to ES at runtime? Perhaps there is a tool that automatically converts desktop OpenGL to a cross-platform 3D rendering library?

In what cases is solving a Binary Linear Program easy (i.e., in P)? I'm looking at scheduling problems in particular

In what cases is solving a Binary Linear Program easy (i.e., in P)?

The reason I'm asking is to understand whether I can reformulate a scheduling problem I'm currently working on in such a way as to guarantee finding the global optimum within a reasonable time, so any advice in that direction is most welcome.

I was under the impression that when solving a scheduling problem, where a variable value of 1 means that a particular (timeslot, person) pair is part of the schedule, a result containing non-integers means that there exist multiple valid schedules and the result is a linear combination of such schedules; to obtain a valid integer solution, one simply needs to re-run the algorithm from the current solution with an additional constraint fixing one of the fractional variables to either 0 or 1.

Am I mistaken in this understanding? Is there a particular subset of (scheduling) problems where this would be a valid strategy? Any paper or textbook chapter suggestions are most welcome as well.
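
To make the setting concrete, here is a toy sketch of the relaxation and re-solve step I have in mind (made-up costs, using SciPy's linprog; for this pure assignment structure an integral optimum is known to exist, but my real problem has extra constraints):

import numpy as np
from scipy.optimize import linprog

# Toy (timeslot x person) assignment: 3 people, 3 timeslots; cost[i][j] is
# the "unhappiness" of putting person j in slot i.  The data is made up, and
# several assignments tie at the same cost on purpose, so the relaxation may
# return a fractional combination of them.
cost = np.array([
    [4.0, 2.0, 8.0],
    [4.0, 3.0, 7.0],
    [3.0, 1.0, 6.0],
])
n = cost.shape[0]
c = cost.ravel()                      # x[i*n + j] = 1 if slot i gets person j

# Equality constraints: each slot gets exactly one person,
# and each person fills exactly one slot.
A_eq, b_eq = [], []
for i in range(n):                    # one person per slot
    row = np.zeros(n * n)
    row[i * n:(i + 1) * n] = 1.0
    A_eq.append(row)
    b_eq.append(1.0)
for j in range(n):                    # one slot per person
    row = np.zeros(n * n)
    row[j::n] = 1.0
    A_eq.append(row)
    b_eq.append(1.0)

# LP relaxation: 0 <= x <= 1 instead of x in {0, 1}.
res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0.0, 1.0)] * (n * n), method="highs")
print(np.round(res.x.reshape(n, n), 3))

# The re-solve step from my description: fix one fractional variable to 0 or 1
# (by tightening its bounds) and run linprog again.
frac = [k for k, v in enumerate(res.x) if 1e-6 < v < 1 - 1e-6]
if frac:
    bounds = [(0.0, 1.0)] * (n * n)
    bounds[frac[0]] = (1.0, 1.0)      # try fixing the first fractional variable to 1
    res2 = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                   bounds=bounds, method="highs")
    print(np.round(res2.x.reshape(n, n), 3))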

Memory storage of a program before compiling

Whenever we write code, after compilation the code is converted to machine language and then stored on the hard disk. But before compiling, the code is still in the high-level language. How and where is memory allocated for the code before it is compiled, while it is still in a high-level language?

I assume that before compiling, the code is stored in RAM, but how? I thought we can only store machine language in RAM.

If there is anything wrong with my question, or if it is phrased the wrong way, please comment below. It would be helpful.

My AnyDice program times out when calculating large limit break checks

Someone in chat helped me write an AnyDice program to calculate limit breaks in an RPG I'm developing, but after I made some changes, it times out for dice pools larger than 7.

The system I have in mind is that if any of the dice you roll are below a threshold, you can bank the sum of all failed rolls for later use by converting it into a limit break token (currently at an exchange rate of 1:4). I'm toying with requiring a certain number of successes before you can convert the failed rolls, which may or may not be what is slowing down the program.

function: sum X:s less than L with at least K successes {
  R: 0
  S: 0
  loop I over X {
    if I <= L { R: R + I }
    if I > L { S: S + 1 }
  }
  if S >= K { result: R/4 }
  if S < K { result: 0 }
}

Is there a more efficient way of running this program? Initially, before my tweaks, the same helpful person suggested output 3d{1..6, 0:6} named "Alt dice" as an alternative to the function. That version is probably less likely to time out, but I can't figure out a way of running it that still checks for a minimum number of successes.

Here is the code that causes the timeout:

output [sum 1d12 less than 7 with at least 0 successes] named "1 die limit break"
output [sum 2d12 less than 7 with at least 1 successes] named "2 die limit break"
output [sum 3d12 less than 7 with at least 1 successes] named "3 die limit break"
output [sum 4d12 less than 7 with at least 1 successes] named "4 die limit break"
output [sum 5d12 less than 7 with at least 1 successes] named "5 die limit break"
output [sum 6d12 less than 7 with at least 1 successes] named "6 die limit break"
\ Times out around here \
output [sum 7d12 less than 7 with at least 1 successes] named "7 die limit break"
output [sum 8d12 less than 7 with at least 2 successes] named "8 die limit break"
output [sum 9d12 less than 7 with at least 2 successes] named "9 die limit break"
output [sum 10d12 less than 7 with at least 2 successes] named "10 die limit break"

I found the timeout point by running each line individually.
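
In case it helps, here is a rough Python sketch of the same banking rule, written as a dynamic program over one die at a time rather than in AnyDice (the 1:4 conversion uses floor division here, which may not match AnyDice's rounding):

from collections import Counter
from fractions import Fraction

def limit_break_distribution(n_dice, sides, threshold, min_successes):
    """Distribution of banked limit break tokens: dice at or below the
    threshold are failures whose faces are summed, dice above it are
    successes, and the failed sum converts 1:4 only if there are enough
    successes.  Built one die at a time instead of enumerating every roll."""
    # state: (failed_sum, successes) -> number of ways to reach it
    states = Counter({(0, 0): 1})
    for _ in range(n_dice):
        nxt = Counter()
        for (failed, succ), ways in states.items():
            for face in range(1, sides + 1):
                if face <= threshold:
                    nxt[(failed + face, succ)] += ways   # bank the failed die
                else:
                    nxt[(failed, succ + 1)] += ways      # count a success
        states = nxt
    total = sides ** n_dice
    dist = Counter()
    for (failed, succ), ways in states.items():
        tokens = failed // 4 if succ >= min_successes else 0
        dist[tokens] += ways
    return {t: Fraction(w, total) for t, w in sorted(dist.items())}

# e.g. the 8-die case that times out in AnyDice:
for tokens, prob in limit_break_distribution(8, 12, 7, 2).items():
    print(tokens, float(prob))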

How to prove that the dual linear program of the max-flow linear program is indeed a min-cut linear program?

The Wikipedia page gives the following linear program for max-flow, together with its dual program:

[Image: the max-flow linear program and its dual, with dual variables $d_{uv}$ for edges and $z_u$ for vertices]
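
For reference, this is my own transcription of the pair (the exact presentation in the image may differ slightly). The max-flow LP is:

maximize $\sum_{(s,v) \in E} f_{sv}$
subject to $f_{uv} \le c_{uv}$ for every edge $(u,v) \in E$,
$\sum_{u : (u,v) \in E} f_{uv} = \sum_{w : (v,w) \in E} f_{vw}$ for every vertex $v \ne s, t$,
$f_{uv} \ge 0$.

and its dual, which is supposed to be the min-cut LP, is:

minimize $\sum_{(u,v) \in E} c_{uv}\, d_{uv}$
subject to $d_{uv} - z_u + z_v \ge 0$ for every edge $(u,v) \in E$,
$z_s = 1$, $z_t = 0$,
$d_{uv} \ge 0$ (the $z_u$ are unrestricted).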

While it is quite straightforward to see that the max-flow linear program indeed computes a maximum flow (every feasible solution is a flow, and every flow is a feasible solution), I couldn't find a convincing proof that the dual of the max-flow linear program is indeed the LP of the min-cut problem.

An 'intuitive' proof is given on Wikipedia, namely: $d_{uv}$ is $1$ if the edge $(u,v)$ is counted in the cut and $0$ otherwise, and $z_u$ is $1$ if $u$ is on the same side of the cut as $s$, and $0$ if $u$ is on the same side as $t$.

But that doesn't really convince me: why should all the variables be integers, when there are no integrality constraints?

And in general, is there a convincing proof that the dual of the max-flow LP is indeed the LP formulation of min-cut?

Problem running C program

I am using Windows 10. I installed MinGW for compiling C programs, and I compiled my program with the gcc command in the Command Prompt. The file compiles, and an executable (.exe) file is created in the same folder as my source file. But when I try running this file, I keep getting the message 'Access is denied', and the .exe file vanishes afterwards. I do not know what is wrong. Please help me out.

P.S. On another occasion I did the same thing mentioned above, the .exe file ran, and I was able to see the output on the command line. That time the .exe file did not vanish either.

How can I program a situation like the following in mathematics and generalize the process to other configurations?

Distribute the numbers from 1 to 10 (see the image) so that the sum of each row and each column is the same and is a) the maximum possible, b) the minimum possible (I used 1 to 10 for simplicity).

I know it is a problem that could be handled with matrices or lists, but I can't think of how to start.

[Image: the arrangement of cells into rows and columns to be filled with the numbers 1 to 10]
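
As a starting point, this is a rough brute-force sketch in Python; the ROWS and COLUMNS index lists below are placeholders I made up, since they depend on the actual configuration in the image:

from itertools import permutations

# Placeholder layout: which cell positions form each row and each column.
# Replace these made-up index groups with the ones from the actual figure.
ROWS    = [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
COLUMNS = [[0, 4, 7], [1, 5, 8], [2, 6, 9]]

def equal_sum_arrangements(numbers, rows, columns):
    """Yield (common_sum, arrangement) for every ordering of numbers whose
    row sums and column sums are all equal to each other."""
    lines = rows + columns
    for perm in permutations(numbers):
        sums = {sum(perm[i] for i in line) for line in lines}
        if len(sums) == 1:
            yield sums.pop(), perm

hits = list(equal_sum_arrangements(range(1, 11), ROWS, COLUMNS))
if hits:
    print("maximum common sum:", max(s for s, _ in hits))
    print("minimum common sum:", min(s for s, _ in hits))
else:
    print("no equal-sum arrangement exists for this placeholder layout")

Brute force over all 10! orderings is slow but workable at this size; for larger configurations I would expect to need pruning or a constraint-programming approach.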