Manually converting TensorFlow models to Mathematica

Is this

Mathematica (v12, my uninformed attempt at manual conversion)

h = StringSplit@Import["https://raw.githubusercontent.com/IBM/tensorflow-hangul-recognition/master/labels/2350-common-hangul.txt"];
n = NetChain[{
   ConvolutionLayer[032, 5], Ramp, PoolingLayer[2, 2],
   ConvolutionLayer[064, 5], Ramp, PoolingLayer[2, 2],
   ConvolutionLayer[128, 3], Ramp, PoolingLayer[2, 2],
   FlattenLayer[],
   LinearLayer[1024],
   Ramp,
   DropoutLayer[],
   LinearLayer[h // Length],
   SoftmaxLayer[]
   },
  "Input" -> NetEncoder[{"Image", {64, 64}, ColorSpace -> "Grayscale"}],
  "Output" -> NetDecoder[{"Class", h}]
  ]


a good conversion of the TensorFlow network below?

TensorFlow (v1?, source network; excerpt from the complete GitHub file)

# First convolutional layer. 32 feature maps.
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_conv1 = tf.nn.conv2d(x_image, W_conv1, strides=[1, 1, 1, 1],
                       padding='SAME')
h_conv1 = tf.nn.relu(x_conv1 + b_conv1)

# Max-pooling.
h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1],
                         strides=[1, 2, 2, 1], padding='SAME')

# Second convolutional layer. 64 feature maps.
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
x_conv2 = tf.nn.conv2d(h_pool1, W_conv2, strides=[1, 1, 1, 1],
                       padding='SAME')
h_conv2 = tf.nn.relu(x_conv2 + b_conv2)

h_pool2 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1],
                         strides=[1, 2, 2, 1], padding='SAME')

# Third convolutional layer. 128 feature maps.
W_conv3 = weight_variable([3, 3, 64, 128])
b_conv3 = bias_variable([128])
x_conv3 = tf.nn.conv2d(h_pool2, W_conv3, strides=[1, 1, 1, 1],
                       padding='SAME')
h_conv3 = tf.nn.relu(x_conv3 + b_conv3)

h_pool3 = tf.nn.max_pool(h_conv3, ksize=[1, 2, 2, 1],
                         strides=[1, 2, 2, 1], padding='SAME')

# Fully connected layer. Here we choose to have 1024 neurons in this layer.
h_pool_flat = tf.reshape(h_pool3, [-1, 8*8*128])
W_fc1 = weight_variable([8*8*128, 1024])
b_fc1 = bias_variable([1024])
h_fc1 = tf.nn.relu(tf.matmul(h_pool_flat, W_fc1) + b_fc1)

# Dropout layer. This helps fight overfitting.
keep_prob = tf.placeholder(tf.float32, name=keep_prob_node_name)
h_fc1_drop = tf.nn.dropout(h_fc1, rate=1-keep_prob)

# Classification layer.
W_fc2 = weight_variable([1024, num_classes])
b_fc2 = bias_variable([num_classes])
y = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

# This isn't used for training, but for when using the saved model.
tf.nn.softmax(y, name=output_node_name)
  1. How can the Mathematica model be improved to match the TensorFlow version exactly?
  2. Is there a resource anywhere to learn the correspondences between the two?
  3. Specifically, I am not sure about:
    1. padding='SAME' – how do I stay true to this in Mathematica?
    2. tf.nn.relu(x_conv1 + b_conv1) == Ramp?
    3. tf.matmul == LinearLayer?
    4. tf.reshape(h_pool3, [-1, 8*8*128]) == FlattenLayer[]?
    5. DropoutLayer[.5] (TensorFlow switches keep_prob between 0.5 and 1.0; see the complete file linked above)

I feel like I am making critical mistakes somewhere. The resulting network is too sensitive.
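For what it's worth, here is a sketch of how the layer-for-layer correspondence could be tightened, assuming the goal is to reproduce the TensorFlow output shapes (it says nothing about matching trained weights). For an odd kernel size k with stride 1, padding='SAME' corresponds to an explicit PaddingSize of (k-1)/2; ConvolutionLayer and LinearLayer both carry biases by default, so Ramp alone matches tf.nn.relu(x + b); and DropoutLayer is only active during training, which mirrors TensorFlow switching keep_prob between 0.5 (training) and 1.0 (inference).

h = StringSplit@Import["https://raw.githubusercontent.com/IBM/tensorflow-hangul-recognition/master/labels/2350-common-hangul.txt"];
n = NetChain[{
   ConvolutionLayer[32, 5, PaddingSize -> 2], Ramp, PoolingLayer[2, 2],  (* 64x64 -> 32x32 *)
   ConvolutionLayer[64, 5, PaddingSize -> 2], Ramp, PoolingLayer[2, 2],  (* 32x32 -> 16x16 *)
   ConvolutionLayer[128, 3, PaddingSize -> 1], Ramp, PoolingLayer[2, 2], (* 16x16 -> 8x8 *)
   FlattenLayer[],          (* 8*8*128 = 8192 features, matching tf.reshape *)
   LinearLayer[1024], Ramp, (* tf.matmul + bias, then relu *)
   DropoutLayer[0.5],       (* drop probability 0.5 = keep_prob 0.5; a no-op at evaluation time *)
   LinearLayer[Length[h]],
   SoftmaxLayer[]
   },
  "Input" -> NetEncoder[{"Image", {64, 64}, ColorSpace -> "Grayscale"}],
  "Output" -> NetDecoder[{"Class", h}]
  ]

With SAME padding the spatial sizes go 64 -> 32 -> 16 -> 8, so FlattenLayer[] yields the 8*8*128 features the TensorFlow reshape expects; the unpadded original shrinks at every convolution, so the two nets cannot agree. The pooling layers need no padding here because every size entering them is even, making SAME and VALID pooling coincide.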

Yahoo Groups is going away: can I use Mathematica to download the old messages?

It looks like all the old email messages from my group are stored in web pages with URLs that look like this:

page = "https://groups.yahoo.com/neo/groups/arco-75/conversation/messages/19291";

That’s email number 19291, and the others have the same form but different numbers at the end. What I am hoping to do is to grab all the old messages. The problem is that:

ans = Import[page] 

returns stuff that begins with “Sorry, an error occurred while loading the content.” From the look of it, I’m guessing that the problem is that my Import statement is not “logged in” to the website, and so it is rejecting the request. Does anyone know how to “log in” to the Yahoo site (to enable downloading of the old emails)?
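A minimal sketch of one approach, assuming the pages are served to authenticated sessions: copy the Cookie header from a logged-in browser session (via the browser's developer tools) and attach it to the request. The cookie value below is a placeholder, and if Yahoo fills the message text in with JavaScript after the page loads, even an authenticated fetch may return the same error.

cookie = "Y=...; T=...";  (* placeholder: paste the Cookie header from a logged-in browser session *)
page = "https://groups.yahoo.com/neo/groups/arco-75/conversation/messages/19291";
resp = URLRead[HTTPRequest[page, <|"Headers" -> {"Cookie" -> cookie}|>]];
resp["StatusCode"]   (* 200 if the request was accepted *)
ans = resp["Body"]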

Is Mathematica calculating the Lagrangian correctly?

The following Lagrangian is given:

$$L = \omega(t)^2\, J(t,\theta(t)) - m(\omega(t))\, G_g\, P(t,\theta(t))$$

Here $\omega(t)$ is the angular velocity of the body; $J(t,\theta(t))$ is a variable moment of inertia of the body, depending on $t$ and on $\theta(t)$; $m(\omega(t))$ is a hypothetical change in the body's mass (it has no physical meaning and is needed only to study the equation); $G_g$ is the acceleration of gravity; and $P$ is the position vector of the center of mass, depending on time and on the angle of rotation of the body in space.

The question is as follows. When we write down the Euler-Lagrange equation, we fit the Lagrangian into the following structure:

$$\frac{d}{dt}\frac{\partial L}{\partial \omega} - \frac{\partial L}{\partial \theta} = 0$$

In Mathematica, there is code that, in theory, should calculate the Euler-Lagrange equation from this Lagrangian:

L = \[Omega][t]^2 J[t, \[Theta][t]] - (m[\[Omega][t]]) (Subscript[G, g]) P[t, \[Theta][t]]
D[D[L, \[Omega][t]], t] - D[L, \[Theta][t]]

What confuses me is that the second term in the last formula takes the derivative with respect to the generalized coordinate, which itself changes in time, while the generalized coordinate and its velocity both enter the kinetic-energy and potential-energy terms.

Is the result this code produces in Mathematica correct?
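One way to cross-check, sketched under the assumption that $\omega$ is the generalized velocity, $\omega = \theta'$ (if $\omega$ were independent of $\theta$, the Euler-Lagrange operator above would not apply as written): rewrite the Lagrangian in terms of \[Theta]'[t] and let the standard VariationalMethods` package form the equation. Gg below stands in for the subscripted gravity constant.

Needs["VariationalMethods`"]
(* assume \[Omega] = \[Theta]', so the velocity derivative is taken with respect to \[Theta]'[t] *)
lag = \[Theta]'[t]^2 J[t, \[Theta][t]] - m[\[Theta]'[t]] Gg P[t, \[Theta][t]];
EulerEquations[lag, \[Theta][t], t]

Comparing this output with the hand-rolled D[D[L, \[Omega][t]], t] - D[L, \[Theta][t]] shows whether treating \[Omega][t] and \[Theta][t] as independent differentiation variables reproduces the same equation.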

How do you typeset raised indices in Mathematica?

I’m trying to follow a book that uses the variable $x$ for the Cartesian coordinates and $x'$ for the polar coordinates. I’d like to use this typesetting in my Mathematica notebook so I don’t have to mentally convert.

Instead of using $r$ and $\theta$, how would you typeset something like:

$$\mathrm{mat}=\begin{bmatrix}\cos x'^2 & -x'^1 \sin x'^2 \\ \sin x'^2 & x'^1 \cos x'^2\end{bmatrix}$$

where $x'^1$ is used in the place of $r$ and $x'^2$ is used in the place of $\theta$? Is it practical to work with this kind of typesetting in Mathematica?
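A lightweight sketch of one approach, not the only one: Superscript has no built-in meaning, so it can serve directly as a variable; xp below is a hypothetical stand-in that displays as x′ (a string displays without quotes in output cells).

xp = "x\[Prime]";             (* displayed name for the primed coordinate *)
x1 = Superscript[xp, 1];      (* plays the role of r *)
x2 = Superscript[xp, 2];      (* plays the role of \[Theta] *)
mat = {{Cos[x2], -x1 Sin[x2]}, {Sin[x2], x1 Cos[x2]}};
MatrixForm[mat]
Simplify[Det[mat]]            (* returns x'^1, just as the polar Jacobian determinant is r *)

Such inert wrappers work fine inside algebraic manipulation; if you need to make assignments to them as if they were true symbols, the Notation package's Symbolize is the heavier alternative.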

Slow performance of embedding Mathematica Demonstrations in webpages

When I embed a Mathematica Demonstration in my webpage, the performance is very slow and laggy.

For instance, if I follow the instruction video and embed the Radial Engine Demonstration in an HTML page, it takes about 5 seconds to load (that’s ok) and when I drag a slider, it takes about 2 seconds for the image to update (that’s a big problem). This is the case even for simpler demonstrations, such as this magnetic field demonstration.

Is there any way to improve performance?

Can Mathematica check an ansatz?

I’m just a beginner learning Mathematica, and I’d like to do something with it but I don’t know if it is possible.

Suppose you want to solve a system of differential equations, maybe a nonlinear one, and you just want to check whether or not a certain function satisfies the system. Is it possible to give Mathematica your ansatz and have it tell you if you guessed the solution correctly?

Moreover, if there are some parameters, is it possible to find the values of the parameters for which your solution is valid?

If this is possible, can you tell me how to do it, or where I should look (books, PDFs, or anything) to learn how to do it?
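A minimal sketch with a made-up example (the ODE, the ansatz, and the parameters a and c are all hypothetical): substitute the ansatz with a replacement rule, then either Simplify the residual or use SolveAlways to find the parameter values for which it holds identically.

ode = y''[x] == y[x];                     (* hypothetical ODE to test against *)
ansatz = y -> Function[x, c Exp[a x]];    (* guessed solution with parameters a, c *)
Simplify[ode /. ansatz]                   (* gives True iff the guess satisfies the ODE identically *)
SolveAlways[ode /. ansatz, x]             (* parameter values valid for all x, here a -> 1 or a -> -1 *)

For a system, ode can be a list of equations and ansatz a list of rules, one per unknown function.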

Is it possible to scrape a result from Google Search by using Mathematica?

My question is a general one: I would like to search for a term on Google and scrape all the results, including the publication dates where present.
I know that it is possible to scrape a website (following its policy, of course, and not breaking the rules) using Python or other software. Is it possible to do this in Mathematica?

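A purely illustrative sketch (Google's terms of service restrict automated scraping, and the results page changes often or blocks non-browser clients, so this may return little or nothing): build a search URL and pull the anchors out of the returned HTML. Extracting publication dates would require parsing the full page, for example by importing it as "XMLObject".

url = URLBuild["https://www.google.com/search", {"q" -> "Wolfram Language"}];
links = Import[url, "Hyperlinks"];   (* every anchor on the results page *)
Select[DeleteDuplicates[links], StringStartsQ[#, "http"] &]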

Is it possible to fit such a large-range plot in Mathematica?

I am trying to solve two coupled differential equations numerically with Mathematica, but the values span such a large range that Mathematica cannot produce a correct plot. Here is the code:

a = 4.75388*10^26; b = 5.424*10^-3; d = 4.75388*10^20;
{X, Y} = {x, y} /. First@NDSolve[{
    x'[z] == -((a/z) (x[z] - b z^(3/2) E^(-z)) (BesselK[1, z]/BesselK[2, z])),
    y'[z] == (d/z) (x[z] - b z^(3/2) E^(-z)) (BesselK[1, z]/BesselK[2, z]) - (a z/4) BesselK[1, z] y[z],
    x[0.1] == 1.552*10^-4, y[0.1] == 10^-9},
   {x, y}, {z, 0.1, 100}];   (* NDSolve requires the integration range {z, 0.1, 100} *)
LogLogPlot[{X[z], Y[z]}, {z, 0.1, 100}, PlotRange -> All]
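A sketch of one way to push through the difficulty, with the caveat that the settings below are suggestions rather than verified tunings: with a ≈ 5×10^26 the system is extremely stiff, so the idea is to supply exact (rationalized) inputs, raise WorkingPrecision, and use a stiffness-aware method. Rationalizing the inputs matters because WorkingPrecision -> 30 cannot be honored with machine-precision numbers in the equations.

a = 475388 10^21; b = 5424 10^-6; d = 475388 10^15;   (* the same values as exact rationals *)
sol = First@NDSolve[{
    x'[z] == -((a/z) (x[z] - b z^(3/2) Exp[-z]) (BesselK[1, z]/BesselK[2, z])),
    y'[z] == (d/z) (x[z] - b z^(3/2) Exp[-z]) (BesselK[1, z]/BesselK[2, z]) - (a z/4) BesselK[1, z] y[z],
    x[1/10] == 1552 10^-7, y[1/10] == 10^-9},
   {x, y}, {z, 1/10, 100},
   WorkingPrecision -> 30, Method -> "StiffnessSwitching", MaxSteps -> 10^6];
LogLogPlot[Evaluate[{x[z], y[z]} /. sol], {z, 0.1, 100}, PlotRange -> All]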