Differences between Mathematica versions 11 and 12 regarding ODE solution

Solving the ODE

$$(\lambda + y(x))\, y''(x) - y'(x)^2 - 1 = 0$$

with Version 11 I got the solution

yx = 1/2 (Exp[-Exp[C[1]] (C[2] + x) - 2 C[1]] + Exp[Exp[C[1]] (C[2] + x)] - 2 lambda) 

while in Version 12 for the same ODE I got the solution

yx = -lambda - Tanh[E^C[1] (x + C[2])]^2/Sqrt[-E^(2 C[1]) Sech[E^C[1] (x + C[2])]^2 Tanh[E^C[1] (x + C[2])]^2] 

This last result is never real: the expression under the square root in the denominator is negative. My question is how to ask the solver in Version 12 to obtain the Version 11 answer. Thanks.
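For reference, a minimal reproduction (an assumption on my part; the post does not show the actual call, but presumably the equation was passed to DSolve directly):

```mathematica
(* hypothetical reproduction of the call; lambda is a free parameter *)
DSolve[(lambda + y[x]) y''[x] - y'[x]^2 - 1 == 0, y[x], x]
```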

Why doesn’t Mathematica provide an answer about series convergence when Wolfram|Alpha does?

Among other series I’ve been working on, I was asked to determine whether $$\sum_n \left(1-\cos\frac{\pi}{n}\right)$$ converges, and Mathematica’s output for SumConvergence[1 - Cos[Pi/n], n] simply repeated the input, with no further information. Wolfram|Alpha, though, at least told me which tests were or were not conclusive.

I’m new to Mathematica, and even though I’ve looked both on Google and into Wolfram’s documentation, I haven’t found information that could help me figure out how to get, from Mathematica, the conditions for the convergence of a series involving something else than powers of a variable.

I would appreciate it if you could give me some clues on the typical procedure for making Mathematica correctly evaluate the convergence of a series, and/or return the conditions for convergence. Thank you in advance.
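One workaround worth knowing (a sketch, not a general recipe): replace the summand by its leading asymptotic term, which SumConvergence does handle. Here Series extracts the large-n behaviour of the summand:

```mathematica
(* leading behaviour of the summand as n -> Infinity *)
Series[1 - Cos[Pi/n], {n, Infinity, 2}]
(* the leading term is Pi^2/(2 n^2), so test the comparable p-series *)
SumConvergence[Pi^2/(2 n^2), n]
(* True, so the original series converges by limit comparison *)
```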

Importing A Neural Network From Mathematica For Use In R

I am experimenting with platform interoperability between Mathematica and R.

My aim is to create an untrained Neural Network using Mathematica, export this network in MXNet format as a .json file, and import this network into R for a classification problem.

Creating the Network in Mathematica

Here I have created a basic neural network; the network is untrained. I have exported the network alongside the network parameters.

In Mathematica the code is as follows:

dec = NetDecoder["Class", {"Chronic Kidney Disease", "No Kidney Disease"}]

net = NetInitialize@
  NetChain[{BatchNormalizationLayer[], LinearLayer[20], Ramp,
    DropoutLayer[0.1], LinearLayer[2], SoftmaxLayer[]},
   "Input" -> 24, "Output" -> dec]

There are 24 feature variables for the input, and the output is the NetDecoder. I then export this network.

Export["net.json", net, "MXNet"] 

This produces two files, one with the network and another with the parameters. Using FilePrint we can inspect the network file:

FilePrint["net.json"]

which returns

{
    "nodes":[
        {"op":"null","name":"Input","inputs":[]},
        {"op":"null","name":"1.Scaling","inputs":[]},
        {"op":"null","name":"1.Biases","inputs":[]},
        {"op":"null","name":"1.MovingMean","inputs":[]},
        {"op":"null","name":"1.MovingVariance","inputs":[]},
        {"op":"BatchNorm","name":"1","attrs":{"eps":"0.001","momentum":"0.9","fix_gamma":"false","use_global_stats":"false","axis":"1","cudnn_off":"0"},"inputs":[[0,0,0],[1,0,0],[2,0,0],[3,0,0],[4,0,0]]},
        {"op":"null","name":"2.Weights","inputs":[]},
        {"op":"null","name":"2.Biases","inputs":[]},
        {"op":"FullyConnected","name":"2","attrs":{"num_hidden":"20","no_bias":"False"},"inputs":[[5,0,0],[6,0,0],[7,0,0]]},
        {"op":"relu","name":"3$0","inputs":[[8,0,0]]},
        {"op":"Dropout","name":"4$0","attrs":{"p":"0.1","mode":"always","axes":"()"},"inputs":[[9,0,0]]},
        {"op":"null","name":"5.Weights","inputs":[]},
        {"op":"null","name":"5.Biases","inputs":[]},
        {"op":"FullyConnected","name":"5","attrs":{"num_hidden":"2","no_bias":"False"},"inputs":[[10,0,0],[11,0,0],[12,0,0]]},
        {"op":"softmax","name":"6$0","attrs":{"axis":"1"},"inputs":[[13,0,0]]},
        {"op":"identity","name":"Output","inputs":[[14,0,0]]}
    ],
    "arg_nodes":[0,1,2,3,4,6,7,11,12],
    "heads":[[15,0,0]],
    "attrs":{
        "mxnet_version":["int",10400]
    }
}

Importing the Network into R

Now we have an untrained network as a .json file in MXNet format.

We can import this using:

library(rjson)
mydata <- fromJSON(file = "net.json")

The Problem

I’m not sure how to process the exported net in R. Is it possible to take the untrained network imported from Mathematica and train it on some data in R?
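rjson only gives you the raw graph as a nested list, which is not directly trainable. One route (a sketch, assuming the mxnet R package is installed): load the symbol JSON with MXNet's own loader instead of a generic JSON parser, then attach it to a trainable model. The training call below is hypothetical; X and y stand for your 24-feature matrix and class labels.

```r
library(mxnet)

# load the architecture exported by Mathematica (MXNet symbol JSON)
sym <- mx.symbol.load("net.json")

# hypothetical training call on your own data:
# model <- mx.model.FeedForward.create(sym, X = X, y = y,
#                                      num.round = 10,
#                                      learning.rate = 0.01)
```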

Why does rendering my equation in Mathematica using MaTeX not work?

I have a problem showing this LaTeX input:

$F = (x_1 \lor x_2) \land (x_1 \lor \overline x_3) \land (x_2 \lor x_4) \land (\overline x_3 \lor \overline x_4) \land (\overline x_1 \lor \overline x_4)$

I tried using MaTeX["F = (x_1 \lor x_2) \land (x_1 \lor \overline x_3) \land (x_2 \lor x_4) \land (\overline x_3 \lor \overline x_4) \land (\overline x_1 \lor \overline x_4)"] but it does not work.

Is there any other way to show this text in Mathematica?
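A likely culprit (an assumption; the post does not show the error messages): inside a Wolfram Language string, a backslash starts an escape sequence, so \lor and \overline never reach MaTeX intact. Doubling every backslash should fix it:

```mathematica
(* backslashes must be escaped inside Mathematica strings *)
MaTeX["F = (x_1 \\lor x_2) \\land (x_1 \\lor \\overline x_3) \\land (x_2 \\lor x_4) \\land (\\overline x_3 \\lor \\overline x_4) \\land (\\overline x_1 \\lor \\overline x_4)"]
```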

Use Mathematica to edit TeX file

I have a large (>300 page) LaTeX project split across multiple .tex files, and I have decided I want to change some notation. A minimal working example would be a main file main.tex that reads

\documentclass{book}
\usepackage{amsmath}
\newcommand{\funct}[2]{f \left( #1 , #2 \right)}
\begin{document}
\include{ch1}
\end{document}

with ch1.tex reading

\chapter{Chapter 1}
words words words $\funct{a}{b}$ words word
$\funct { \sum_{k=1}^\infty \frac{1}{k^2}} {\lim_{n\to \infty} n }$

and so forth. Now, I’ve decided that I would like to globally reverse the inputs of \funct. Changing main.tex to \newcommand{\funct}[2]{f \left( #2 , #1 \right)} would do the exact right thing, except that this would be really psychedelic and problematic for any future editing. As such, I need to go through and reverse every instance of this macro being used in the individual .tex files, i.e., replace \funct{a}{b} with \funct{b}{a}. The problem is that there are…hundreds?…of instances of this macro among the various .tex files, not to mention that the individual inputs to this macro are often several lines’ worth of LaTeX. Basically, going through by hand would be an absolute nightmare and necessitate several days’ worth of hunting down typos.

As such, I’ve had the thought of trying to use some of Mathematica’s string pattern matching to do this reordering automatically. After making ALL THE BACKUPS, I’ve gotten as far as:

ch1 = Import[<file path for ch1.tex>, "String"]
StringCases[ch1, "\funct{" ~~ _ ~~ "}{" ~~ _ ~~ "}"]

before getting hopelessly stuck. In particular, Mathematica doesn’t seem to like all the \ characters, not to mention the issues of inconsistent spacing between braces, or dealing with matched sets of nested open/close braces (e.g., the second \funct { ... } { ... } above).

Any suggestions? Or is this going to become a quagmire, and I would be better off just brewing a pot of coffee and getting busy with the old ctrl-x ctrl-v routine?
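A sketch of where the string-pattern route can go, with two loud caveats: the backslash must be written as \\ inside a Wolfram Language string, and this pattern only handles arguments that contain no nested braces (the multi-line \funct { ... } { ... } case would need a hand-rolled brace counter):

```mathematica
(* swap the two \funct arguments, assuming neither argument
   itself contains braces; tolerates whitespace between groups *)
arg = Except["{" | "}"] ...;
swapped = StringReplace[ch1,
   "\\funct" ~~ WhitespaceCharacter ... ~~
     "{" ~~ a : arg ~~ "}" ~~ WhitespaceCharacter ... ~~
     "{" ~~ b : arg ~~ "}" :>
    "\\funct{" <> b <> "}{" <> a <> "}"];
```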

Looping through all functions defined in Mathematica

Is there a way to loop through all the Functions (Elementary and Special functions) that exist in Mathematica?

I want to construct a table of some identities and maybe I can discover something surprising if I plug every function that there is into my formula.

I.e. I want to do something like:

for function in Mathematica
    print function[x]^2

(Note this is not Mathematica syntax, but I hope you get the idea)
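One concrete way to approximate this (a sketch; the NumericFunction attribute is only a proxy for “elementary and special function”, and multi-argument functions such as BesselJ will simply remain unevaluated when given a single x):

```mathematica
(* collect built-in symbols carrying the NumericFunction attribute,
   a reasonable proxy for elementary and special functions *)
fns = Select[Names["System`*"],
   MemberQ[Attributes[#], NumericFunction] &];

(* apply each one to x and square it *)
table = Table[ToExpression[f][x]^2, {f, fns}];
```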

Thanks, Michał

Can I point to Wolfram’s geoserver from outside Mathematica?

Sadly, I’m having to port my Mathematica code. I’d like to use the same (or very similar) GeoGraphics default map background as my Mathematica code does. Can I somehow point to and use Wolfram’s geoserver for map tiles?

Alternatively, where does Mathematica get its map backgrounds from? Maybe I could point back to this source instead…