Function providing input and integration limits for NIntegrate

I am trying to define a custom function that evaluates the numeric integral of some complicated function. The problem is that I would like to pass the integration limits as an input. A MWE follows:

    f[x_, y_] := Exp[-2 x^2 y^2]
    test[x_?NumericQ, IntL_] := NIntegrate[f[x, y], IntL]
    myIntL1 = {y, -4 x^2, 4 x^2};
    myIntL2 = {y, -4 x^4, 4 x^4};

Then, if I evaluate, for instance, test[3, myIntL1], I get an error about an invalid limit of integration.

Is there a clever way to fix this without defining a separate function for each set of integration limits, such as

    test1[x_?NumericQ, IntL_] := NIntegrate[f[x, y], {y, -4 x^2, 4 x^2}]
    test2[x_?NumericQ, IntL_] := NIntegrate[f[x, y], {y, -4 x^4, 4 x^4}]

etc?

Of course, here everything looks simple, but in my case all the functions are rather long, and some are purely numerical. Since I have multiple choices for the integration limits, it would be more practical to avoid defining test1[x_], test2[x_], ..., testN[x_].
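
For reference, a sketch of one possible workaround (an assumption on my part, tested only on this toy case): NIntegrate has the HoldAll attribute, so the symbol IntL is never replaced by an explicit {y, ymin, ymax} list before NIntegrate parses its arguments. Storing each choice of limits as a function of x and forcing its evaluation sidesteps this:

    (* Limits as functions of x, so the numeric x is substituted before
       NIntegrate sees the range; Evaluate defeats HoldAll. *)
    myIntL1[x_] := {y, -4 x^2, 4 x^2};
    myIntL2[x_] := {y, -4 x^4, 4 x^4};
    test[x_?NumericQ, lim_] := NIntegrate[f[x, y], Evaluate[lim[x]]]
    test[3, myIntL1]  (* integrates over {y, -36, 36} *)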

Thanks in advance, Pablo

display an input string many times

I’d like to create a function that adds 2 to an integer as many times as we want. It would look like this:

    >>> n = 3
    >>> add_two(n)
    Would you like to add a two to n ? Yes
    The new n is 5
    Would you like to add a two to n ? Yes
    The new n is 7
    Would you like to add a two to n ? No

Can anyone help me, please? I don’t know how I can keep printing the question without calling the function again.
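
A sketch of one way this could work, assuming the exact prompt and answers from the example above: a while loop inside the function re-asks the question, so the function is only called once.

    def add_two(n):
        # Keep asking; each "Yes" adds 2 and reprints, anything else stops.
        while input("Would you like to add a two to n ? ") == "Yes":
            n += 2
            print("The new n is", n)
        return n

    n = 3
    n = add_two(n)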

Response time optimization – Getting record count based on Input Parameters

I’m trying to optimize the process of calculating the count of records based on variable input parameters. The whole process spans several queries, functions and stored procedures.

1/ Basically, the front-end sends a request to the DB (it calls a stored procedure) with an input parameter (a DataTable). This DataTable (the input parameter collection) contains 1 to X records. Each record corresponds to one specific rule.

2/ The SP receives the collection of rules (as a custom typed table) and iterates through them one by one. Each rule, apart from other metadata, contains the name of the specific function that should be used to evaluate that rule.

For every rule, the SP prepares a dynamic query in which it calls the mentioned function with 3 input parameters:

a/ a custom-typed memory-optimized table (hash index)
b/ a collection of lookup values (usually INTs) that the SELECT query uses to filter data, i.e. "get me all records that have fkKey in (x1, x2, x3)"
c/ a BIT determining whether this is the first rule in the whole process

Each function has an IF statement that determines, based on the c/ parameter, whether it should return "all" records that fulfill the input criteria (b/), or whether it should apply the criteria on top of joining against the result of the previous rule, which is contained in the custom table (a/).

3/ Once the function is run, its result is INSERTed into a table variable called @tmpResult. @result is then compared to @tmpResult, and records that are not in @tmpResult are DELETEd from @result (a sketch of this step follows below).

  • @result is a table variable (a custom memory-optimized table type) that holds the intermediate result during the whole SP execution. It is fully filled up by the first rule; every subsequent rule only removes records from it.

4/ The cycle repeats for every rule until all of the rules are done. At the end, a count is taken of the records in @result and returned as the result of the SP.
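
For concreteness, here is a minimal sketch of the intersection step from 3/; the key column pkRecord is a made-up name, not our actual schema:

    -- Sketch only: @result holds the running intermediate result and
    -- @tmpResult the current rule's output; both are memory-optimized
    -- table variables. pkRecord is a hypothetical key column.
    DELETE r
    FROM @result AS r
    WHERE NOT EXISTS (SELECT 1
                      FROM @tmpResult AS t
                      WHERE t.pkRecord = r.pkRecord);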

A few things to take into account:

  • There are dozens of different types of rules, and the list of rules only grows over time. That’s why a dynamic query is used.
  • So far, the most effective way to temporarily store records between individual rule executions has proved to be a custom memory-optimized table type. We tried a lot of things, but this one seems to be the fastest.
  • The number of records usually returned for a single rule is roughly between 100 000 and 3 000 000. That’s why a BUCKET_COUNT of 5 000 000 is used for the hash-indexed temporary tables. We also tried a nonclustered index, but it was slower than the hash index.
  • The input collection of rules can vary widely. There can be anything from 1 rule up to dozens of rules used at once.
  • Almost every rule can be defined with as few as 2 lookup values, and at most with dozens or, in a few cases, even a hundred values. For a better understanding of the rules, here are some examples:

Rule1Color, {1, 5, 7, 12}
Rule2Size, {100, 200, 300}
Rule3Material, {22, 23, 24}

Basically, every rule is specified by its Designation, which corresponds to a specific function, and by its collection of lookup values. The possible lookup values differ based on the designation.

What we have done to optimize the process so far:

  • Where a big number of records needs to be temporarily stored, we use memory-optimized table variables (we also tried temp tables, but they performed essentially the same as the memory-optimized variants).
  • We strongly reduced and optimized the source tables the SELECT statements are run against.

Currently, the overall load is balanced roughly 50/50 between the I/O costs of the SELECT statements and the manipulation of records between temporary tables. Frankly, that is not ideal: the only bottleneck should be the I/O operations, but so far we have not been able to come up with a better solution, since the whole process has a lot of variability.

I will be happy for any idea you can throw my way. Of course feel free to ask questions if I failed to explain some part of the process adequately.

Thank you

Input on Variation on KRyan’s TWF Elf Barbarian

I really like KRyan’s solution to this character concept: How to optimize a TWF Barbarian Elf

I’m looking to build something similar, but I don’t have all the restrictions that the OP had. For instance, I am planning on using the Arctic Template from Dragon #306 applied to a Wood Elf, giving me +2 Str, +2 Dex, -2 Int, -2 Cha.

With those bonuses, does TWF even make sense anymore? If so, are there changes that would make better use of the Str/Dex synergy?

Getting back into 3.5e after a long time, and I’d forgotten that the limitless options are such a double-edged sword…

How to see which mouse button was pressed (Unity Input System)

I have switched my game to use the new Unity Input System, and I want to know which mouse button was pressed when an action is called.

Here’s how it’s set up: [screenshot of the Input System action map]

Whenever one of these three mouse buttons is pressed, the MouseButtonClicked action fires. I want to somehow know which one of these buttons was pressed from this single action, rather than making 3 separate actions for 3 separate mouse buttons.

I have tried reading the values from the InputAction.CallbackContext context, but I can’t seem to get a proper value identifying the mouse button that was pressed.

How can I do this?
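
For reference, a sketch of one approach (assuming a handler wired to the MouseButtonClicked action’s callbacks): InputAction.CallbackContext exposes the concrete control that triggered the action as context.control, which can be compared against the buttons on Mouse.current.

    using UnityEngine;
    using UnityEngine.InputSystem;

    public class MouseButtonReporter : MonoBehaviour
    {
        // Assumed to be wired to the MouseButtonClicked action's callbacks.
        public void OnMouseButtonClicked(InputAction.CallbackContext context)
        {
            if (!context.performed) return;

            // context.control is the concrete control that drove the action.
            if (context.control == Mouse.current.leftButton)
                Debug.Log("Left button");
            else if (context.control == Mouse.current.rightButton)
                Debug.Log("Right button");
            else if (context.control == Mouse.current.middleButton)
                Debug.Log("Middle button");
        }
    }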

Approximation of a function part of the input of NMaximize

I need to maximize the following function (the input to NMaximize below)

    NMaximize[{((1/3) + f[p]*(1 - p)*(((1/(Sqrt[2]/2))*p)^2 - 1))/
       (1 + (1 - p)*(((1/(Sqrt[2]/2))*p)^2 - 1)),
      p >= Sqrt[2]/2, p <= 1}, {p}]

where $f(\cdot)$ is defined as follows

    f[p_?NumericQ] := NMinimize[{-((a + a^2 + b - 2 a b + b^2 + c - 2 a c - 2 b c + c^2)/
          ((-1 + a) (a + b + c))),
        0 <= a, a <= b, b <= c, c <= 1, c <= a + b, (a + b + c)/3 <= p},
       {a, b, c}, Method -> "DifferentialEvolution"][[1]]

However, as expected, the computation does not finish and there are several warning messages. For the underlying mathematical problem I am trying to solve, I could replace $f(\cdot)$ by a simple function $g(\cdot)$ that approximates it, but I need $g(p)\le f(p)$ for all $p\in[0,1]$. I tried to use InterpolatingPolynomial with a few values of $f(\cdot)$. However, $f(\cdot)$ is neither concave nor convex on $[0,1]$, and I am struggling to obtain a good approximation of $f(\cdot)$ that satisfies $g(p)\le f(p)$ on $[0,1]$.

Do you know of any method for generating such an approximating function $g(\cdot)$ (or for solving the maximization problem in a different way)?
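
For what it’s worth, one heuristic sketch (the grid spacing and the safety margin delta below are assumptions that would have to be validated against $f$): sample $f$ on a grid, interpolate, and shift the interpolant down so it stays below the sampled values.

    (* Heuristic: g is a shifted interpolation of sampled values of f.
       The shift delta only guarantees g <= f at the sample points;
       between points this must still be checked, e.g. on a finer grid.
       Adjust the grid if f misbehaves at the endpoints. *)
    pts = Table[{p, f[p]}, {p, 0, 1, 1/20}];
    gInterp = Interpolation[pts];
    delta = 1/100;  (* hypothetical safety margin *)
    g[p_?NumericQ] := gInterp[p] - delta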

Difference between shader input element classification between D3D12 and Vulkan

I’m confused about the difference between the shader input element classifications in D3D12 and Vulkan. For example, in Vulkan I could have the following declarations:

    struct vertex {
        glm::vec3 pos;
        glm::vec3 col;
    };

    VkVertexInputBindingDescription input_binding_description {
        .binding = 0,
        .stride = sizeof(vertex),
        .inputRate = VK_VERTEX_INPUT_RATE_VERTEX
    };

    std::array<VkVertexInputAttributeDescription, 2> input_attribute_descriptions {
        VkVertexInputAttributeDescription{
            .location = 0,
            .format = VK_FORMAT_R32G32B32_SFLOAT,
            .offset = offsetof(vertex, pos)
        },
        VkVertexInputAttributeDescription{
            .location = 1,
            .format = VK_FORMAT_R32G32B32_SFLOAT,
            .offset = offsetof(vertex, col)
        }
    };

Here the input rate is specified on the binding (per vertex buffer), not per attribute. On the other hand, in D3D12, we would have

    struct Vertex {
        XMFLOAT3 pos;
        XMFLOAT3 col;
    };

    std::array<D3D12_INPUT_ELEMENT_DESC, 2> input_element_descs = {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
          D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
        { "COLOR", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12,
          D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 }
    };

And as you can see, the input rate is specified per attribute. Why is that? Is there a difference in the meaning of the input classification between D3D12 and Vulkan that I’m missing? I’m not familiar with D3D12, but at first glance it doesn’t make sense to me to have different input classifications for the attributes of a single vertex.
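
To make the comparison concrete, here is a sketch of per-instance data in both APIs (my own illustration; INSTANCE_OFFSET is a made-up semantic name): it suggests that D3D12’s InputSlot plays the role of Vulkan’s binding, so the per-element classification effectively describes which slot the element reads from and at what rate.

    // Vulkan: the rate hangs on the binding; every attribute that
    // references binding 1 advances per instance.
    VkVertexInputBindingDescription instance_binding {
        .binding = 1,
        .stride = sizeof(glm::vec4),
        .inputRate = VK_VERTEX_INPUT_RATE_INSTANCE
    };

    // D3D12: the rate (and step value) sit on each element, with
    // InputSlot = 1 playing the role of the Vulkan binding.
    D3D12_INPUT_ELEMENT_DESC instance_element {
        "INSTANCE_OFFSET", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0,
        D3D12_INPUT_CLASSIFICATION_PER_INSTANCE_DATA, 1
    };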

reading input from multiple mice in QB64 in Windows 10?

I am looking to write a non-network multiplayer game for Windows 10 with QB64 that accepts input from 2 or more USB mice plugged into the system. Like a simple Pong game where additional players plug mice into a USB hub to use as game controllers.

I have been googling this and found some older threads

  • Is it possible to detect two different mice at the same time, and have their movements recorded seperately?
  • How do I read input from multiple keyboards/mice on one computer?

However, these are pretty old threads from before Windows 10, and they also seem to be oriented more towards C++ or .NET.

Can anyone provide an example of how it might be done with QB64 under Windows 10?
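
For context, below is a sketch of what I can already do with QB64’s built-in mouse support; as far as I can tell, these built-ins merge every pointing device into the single system cursor, so telling physical mice apart would need something like the Win32 Raw Input API via DECLARE LIBRARY, which is the part I’m unsure how to do in QB64.

    ' Minimal QB64 polling loop using the built-in mouse support only.
    ' All attached mice drive the same system cursor, so this cannot
    ' tell two mice apart -- it just shows the single merged state.
    DO
        DO WHILE _MOUSEINPUT
            LOCATE 1, 1
            PRINT "x:"; _MOUSEX; " y:"; _MOUSEY; " left:"; _MOUSEBUTTON(1)
        LOOP
        _LIMIT 60
    LOOP UNTIL _KEYHIT = 27 ' Esc quits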

Thanks

Fixed physics time step and input lag

Everywhere I read says I should fix the time step of my physics simulation, and interpolate the result to the actual frame rate of my graphics. This helps with simplicity, networking, reproducibility, numerical stability, etc.

But as I understand it, fixed time step guarantees between 1 and 2 Δt of input lag, because you have to calculate one step ahead in order to interpolate. If I use 90 Hz physics, it gives me an input lag of about 17 ms, on average.
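
For reference, this is the kind of accumulator loop I mean (a sketch; State, step and render are stand-ins for a real engine’s types, and the numbers are only illustrative): rendering blends between the previous and current physics states, so the displayed state always trails the freshest input by 1 to 2 steps.

    #include <chrono>

    // Sketch of a fixed-timestep loop with interpolation. State, step()
    // and render() are placeholders for a real game's equivalents.
    struct State { double x = 0.0; };

    State step(State s, double dt) { s.x += 1.0 * dt; return s; }  // physics update
    void render(const State&, const State&, double) {}             // draw lerp(prev, cur, alpha)

    int main() {
        const double dt = 1.0 / 90.0;  // 90 Hz physics
        double accumulator = 0.0;
        State previous, current;

        auto last = std::chrono::steady_clock::now();
        for (int frame = 0; frame < 1000; ++frame) {  // stand-in for the frame loop
            auto now = std::chrono::steady_clock::now();
            accumulator += std::chrono::duration<double>(now - last).count();
            last = now;

            while (accumulator >= dt) {  // consume whole physics steps
                previous = current;
                current = step(current, dt);
                accumulator -= dt;
            }

            // Draw between the two most recent states. Because we always
            // render "behind" current, visible input lag is 1..2 * dt.
            double alpha = accumulator / dt;
            render(previous, current, alpha);
        }
    }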

Since I often see gaming enthusiasts talking about 1 ms input lag, 1 ms delay, and how that makes a difference, I wonder how fast-action games do it, and how they reduce input lag while using a fixed time step.

Or do they not, and 1 ms delay is just marketing mumbo jumbo?