Can built-in functions deal with stochastic delay differential equations (SDDE)?

I know that functions like NDSolve can deal with delay differential equations, while functions like ItoProcess and RandomFunction handle stochastic differential equations. So I wonder whether any built-in functions can handle the case where the two are combined. For example, I naively tried the code below, which is the first example for ItoProcess with a slight modification (x[t] -> x[t - 1] inside the square root):

proc = ItoProcess[\[DifferentialD]x[t] == -x[t] \[DifferentialD]t + Sqrt[1 + x[t - 1]^2] \[DifferentialD]w[t],
  x[t], {x, 1}, t, w \[Distributed] WienerProcess[]]
RandomFunction[proc, {0., 5., 0.01}]

The first line seems to run fine, but the second one just returns a RandomFunction::unsproc error, specifically: "The specification <Ito process> is not a random process recognized by the system."

Or do I have to implement something myself along the lines of the Euler–Maruyama method?
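(In case it comes to that, below is a minimal hand-rolled Euler–Maruyama sketch for this particular SDDE. The step size dt, the delay tau = 1, and the constant history x(t) = 1 for t <= 0 are my own assumptions for illustration, not anything built in.)

(* Euler–Maruyama sketch for dx = -x dt + Sqrt[1 + x[t - 1]^2] dw,
   with constant history x = 1 on [-tau, 0] *)
dt = 0.01; tmax = 5.; tau = 1.;
nHist = Round[tau/dt];
path = ConstantArray[1., nHist + 1];  (* history samples on [-tau, 0] *)
Do[
 xNow = Last[path];
 xLag = path[[-nHist - 1]];           (* the value at t - tau *)
 AppendTo[path,
  xNow - xNow dt + Sqrt[1 + xLag^2] Sqrt[dt] RandomVariate[NormalDistribution[]]],
 {Round[tmax/dt]}];
ListLinePlot[Transpose[{Range[-tau, tmax, dt], path}]]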

Access GLFW functions from DLL

My game uses DLLs as mods. Players can write DLLs which can render, play audio, print to the console, and so on. This works fine; however, when I try to access GLFW functions from a mod, they don't seem to work properly.

CustomMod.cpp (compiled to DLL)

void runDLLCode() {
    std::cout << glfwGetKey(Game::getWindow(), GLFW_KEY_A);
}

Game.cpp (compiled to EXE)

void run() {
    // Load DLL...

    std::cout << glfwGetKey(Game::getWindow(), GLFW_KEY_A); // Prints 1
    runDLLCode();                                           // Prints 0 (should print 1 since key is still down)
    std::cout << glfwGetKey(Game::getWindow(), GLFW_KEY_A); // Prints 1
}

std::this_thread::get_id() returns the same thread ID in both the DLL and the EXE.

Game::getWindow() also returns the same address in both.

Is it possible to have all three calls print 1 when the application runs? Or is it simply not possible to use GLFW functions across the DLL "wall"?
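A likely culprit (an assumption, since the build setup isn't shown): if each mod DLL statically links its own copy of GLFW, that copy has separate global state that was never initialized by the game's glfwInit, so its glfwGetKey sees no window at all. One common fix is to route every GLFW call through function pointers owned by the EXE; the ModAPI struct and entry-point signature below are hypothetical names, just to sketch the idea.

// Shared header (hypothetical), included by the EXE and every mod DLL.
#include <GLFW/glfw3.h>
#include <iostream>
#include <windows.h>

struct ModAPI {
    GLFWwindow* (*getWindow)();       // supplied by the EXE
    int (*getKey)(GLFWwindow*, int);  // the EXE's glfwGetKey
};

// CustomMod.cpp (DLL): only touch GLFW through the table it was given.
extern "C" __declspec(dllexport) void runDLLCode(const ModAPI* api) {
    // Runs against the EXE's GLFW state, so it sees the same key state.
    std::cout << api->getKey(api->getWindow(), GLFW_KEY_A);
}

// Game.cpp (EXE): build the table and hand it to the mod's entry point.
void callMod(HMODULE dll) {
    static ModAPI api{ &Game::getWindow, &glfwGetKey };
    auto entry = reinterpret_cast<void (*)(const ModAPI*)>(
        GetProcAddress(dll, "runDLLCode"));
    if (entry) entry(&api);
}

Alternatively, linking GLFW as a shared library (glfw3.dll) for both the game and the mods sidesteps the duplicated-state problem entirely, since every module then calls into the same copy.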

Normalized functions

I’m looking for a smart way to define normalized functions. I usually write

f[x_] := f[x] = A Sin[x]/x; 

Then I integrate the function

Integrate[f[x], {x, -\[Infinity], \[Infinity]}, Assumptions -> A > 0]

take the output and divide f by it

g[x_] := g[x] = f[x]/(A \[Pi])

Is there a better way to do this? Thanks
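One way to automate the two steps (a sketch only; normalize is a made-up helper name, and it assumes the integral converges under the stated assumptions):

normalize[f_, assum_: True] :=
 With[{norm = Integrate[f[x], {x, -\[Infinity], \[Infinity]}, Assumptions -> assum]},
  Function[x, f[x]/norm]]

f[x_] := A Sin[x]/x;
g = normalize[f, A > 0];
g[x]  (* Sin[x]/(\[Pi] x) *)

This computes the normalization constant once, at definition time, instead of leaving a hand-copied factor in the code.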

Run Python Functions in frontend [closed]

I know front-end development with React.js. I want to make a basic web app that uses some Python functions (for example, on a button's onClick). These functions would mainly revolve around machine learning (but won't be too complex).

Is there any way to run Python functions from React.js apart from AJAX requests?

If not, which Python framework should I use for web development so that I can directly run Python functions as well? I have come across names like Django, Tkinter, and Flask, but I couldn't figure out which would suit my requirements.
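For context: the browser cannot execute Python natively, so some HTTP round trip is the standard pattern regardless of framework. A minimal Flask sketch (the route and the predict function are illustrative, not a fixed convention):

# app.py - a minimal Flask backend exposing a Python function to React
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    # Stand-in for the actual machine-learning logic.
    return sum(features) / len(features)

@app.route("/api/predict", methods=["POST"])
def predict_endpoint():
    features = request.get_json()["features"]
    return jsonify({"result": predict(features)})

if __name__ == "__main__":
    app.run(port=5000)

On the React side, the onClick handler would fetch("/api/predict", ...) and render the JSON response. The only way to avoid a backend entirely is to run Python in the browser via WebAssembly (e.g. Pyodide), which is usually overkill for this. Note also that Tkinter is a desktop GUI toolkit, not a web framework.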

Which function results from primitive recursion of the functions g and h?



  1. $f_1=PR(g,h)$ with $g=succ\circ zero_0$, $h=zero_2$
  2. $f_2=PR(g,h)$ with $g=zero_0$, $h=f_1\circ P_1^{(2)}$
  3. $f_3=PR(g,h)$ with $g=P_1^{(2)}$, $h=P_2^{(4)}$
  4. $f_4=PR(g,h)$ with $g=f_3\left(f_1(x),succ(x),f_2(x)\right)$

(1.) $g: N^0\to N$, $h: N^2\to N$
$f(0)=1$
$f(0+1)=h(0,f(0))=h(0,1)=0$
$f(1+1)=h(1,f(1))=h(1,0)=0$
$\forall n\in N_{>0}: f(n+1)=h(n,f(n))=0$, so $f_1$ is defined as $f_1: N^1\to N$ with $f_1(x)=\begin{cases}1, & x=0\\ 0, & x>0\end{cases}$

(2.) $g: N^0\to N$, $h: N^2\to N$
$f(0)=0$
$f(0+1)=h(0,f(0))=h(0,0)=1$
$f(1+1)=h(1,f(1))=h(1,1)=0$
$\forall n\in N_{>0}: f(n+1)=h(n,f(n))=0$, so $f_2$ is defined the same as $f_1$, i.e. $f_1(x)=f_2(x)$

(3.) $g: N^2\to N$, $h: N^4\to N$
$f(x,y,0)=x$
$f(x,y,0+1)=h(x,y,0,f(x,y,0))=h(x,y,0,x)=y$
$f(x,y,1+1)=h(x,y,1,f(x,y,1))=h(x,y,1,y)=y$
$\forall z\in N_{>0}: f(x,y,z+1)=h(x,y,z,f(x,y,z))=y$, so $f_3$ is defined as $f_3: N^3\to N$ with $f_3(x,y,z)=\begin{cases}x, & z=0\\ y, & z>0\end{cases}$

Is this correct up to here? It looks way too easy, which is why I'm not sure.
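Derivations like these can be checked mechanically. Here is a small Python sketch (purely a checking aid, not part of the exercise) that evaluates $PR(g,h)$ directly from the defining equations $f(\bar{x},0)=g(\bar{x})$ and $f(\bar{x},n+1)=h(\bar{x},n,f(\bar{x},n))$:

# Evaluate PR(g, h) straight from the definition, recursing on the last argument.
def PR(g, h):
    def f(*args):
        *xs, n = args
        return g(*xs) if n == 0 else h(*xs, n - 1, f(*xs, n - 1))
    return f

succ = lambda x: x + 1
zero = lambda *args: 0                      # covers zero_0 and zero_2
P = lambda i: (lambda *args: args[i - 1])   # projection P_i

f1 = PR(lambda: succ(zero()), lambda n, fn: zero())   # case 1
f2 = PR(zero, lambda n, fn: f1(P(1)(n, fn)))          # case 2
f3 = PR(P(1), P(2))                                   # case 3

print([f1(n) for n in range(4)])   # [1, 0, 0, 0]
print([f2(n) for n in range(4)])   # [0, 1, 0, 0]
print(f3(7, 9, 0), f3(7, 9, 3))    # 7 9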

Why don’t they use all kinds of non-linear functions in Neural Network Activation Functions? [duplicate]

Pardon my ignorance, but after just learning about the Sigmoid and Tanh activation functions (and a few others), I am wondering why people choose functions that always go up and to the right. Why not use all kinds of crazy functions: ones that fluctuate up and down, ones that are directed down instead of up, and so on? What would the problem be with using functions like that in your neurons, and why isn't it done? Why stick to very primitive, very simple functions?


Can partial Turing completeness be quantified as a subset of Turing-computable functions?

Can partial Turing completeness be coherently defined this way:
An abstract machine or programming language can be construed as Turing complete on the subset of the Turing-computable functions that it can compute.

In computability theory, several closely related terms are used to describe the computational power of a computational system (such as an abstract machine or programming language):

Turing completeness: A computational system that can compute every Turing-computable function is called Turing-complete (or Turing-powerful). (https://en.wikipedia.org/wiki/Turing_completeness)

Why are nested anonymous pure functions shielded from evaluation?

I tried the following code (ignoring the warning messages):

{#, # &, Function[{x}, #], Function[{#}, x], Function[{#}, #]} &@7
(* result: {7, #1 &, Function[{x}, 7], Function[{7}, x], Function[{7}, 7]} *)

I wonder why #& was not changed into 7&. I saw a "possible issue" similar to this mentioned in ref/Slot, but I couldn't find further documentation about it. Is it a bug, or is it deliberately designed this way?
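For illustration (my own example, not from the documentation): the inner # is owned by the inner &, so the outer function's slot substitution leaves it alone; if you want the outer value injected into a nested function, you can do the substitution explicitly, e.g. with With:

(* the inner # belongs to the inner &, so it survives *)
{#, # &} &[7]
(* {7, #1 &} *)

(* inject the outer value explicitly instead *)
With[{v = #}, {v, v &}] &[7]
(* {7, 7 &} *)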

First-order mutual-recursive functions Turing-complete or incomplete?

Suppose we have an ML-like programming language with only first-order terms (i.e. no higher-order functions or lambdas; variables cannot be functions). However, the language allows recursion in all forms, including mutual recursion.

Is it true that this language is Turing-incomplete, but becomes complete if we add basic heap semantics (i.e. pointers and manipulation of RAM-like memory)?