Structure of multilingual and multiregional directory listing website for translation

What is the best translation structure for our multilingual and multiregional directory website? Our listing owners (members of our work team) come from around 20 countries with different languages, and I want each of them to submit listings in the official language of their country. All listings are aggregated on our site, whose default language is English, and the site should be translated into at least twenty other languages. We want the listings to be usable both locally, in each owner's country, and globally. Do I need to use WordPress Multisite? Should I use multiple separate sites that link to each other, or can I use a plugin such as WPML? I would be grateful for any guidance.

Generate JSON files for language translation from po file without wp-cli i18n make-json

My plugin uses wp_set_script_translations() to load translations for JS as mentioned here

I understand that when translations for a locale are added, the JSON files are created.

But many of our site owners want to translate a few words and start using the plugin. They can create .po and .mo files, but how can they create the JSON files without setting up wp-cli to run wp i18n make-json?

The Loco Translate plugin does not support JSON file generation.

I couldn't find any online tools that convert .po to .json in the format mentioned below.

The output files should be named ${domain}-${locale}-${handle}.json or ${domain}-${locale}-${md5}.json, in the JED 1.x (.json) format, one for each JS file registered with wp_set_script_translations().
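For what it's worth, the JED 1.x shape is simple enough that a small script can produce it without wp-cli. Below is a minimal Python sketch (my own, not an official tool); it assumes simple single-line msgid/msgstr pairs with no plurals, contexts, or multi-line strings, and hardcodes an English-style plural rule that you would copy from your real .po header.

```python
# Minimal sketch: convert a simple .po file to the JED 1.x JSON that
# WordPress script translations expect. Assumptions: single-line
# msgid/msgstr pairs only (no plurals, contexts, or multi-line strings);
# for real-world .po files a full parser such as polib is safer.
import hashlib
import json
import re

def po_to_jed(po_text, locale):
    entries = {}
    for m in re.finditer(r'msgid "(.*)"\s*msgstr "(.*)"', po_text):
        msgid, msgstr = m.group(1), m.group(2)
        if msgid:  # skip the .po header entry (empty msgid)
            entries[msgid] = [msgstr]
    locale_data = {
        "": {
            "domain": "messages",  # wp i18n make-json uses "messages" here
            "lang": locale,
            # assumed plural rule; copy the real one from your .po header
            "plural-forms": "nplurals=2; plural=(n != 1);",
        }
    }
    locale_data.update(entries)
    return {"domain": "messages", "locale_data": {"messages": locale_data}}

def jed_filename(domain, locale, js_path):
    # WordPress looks for <domain>-<locale>-<md5 of the JS file path>.json
    return f"{domain}-{locale}-{hashlib.md5(js_path.encode('utf-8')).hexdigest()}.json"

sample_po = 'msgid ""\nmsgstr ""\n\nmsgid "Save"\nmsgstr "Speichern"\n'
jed = po_to_jed(sample_po, "de_DE")
print(json.dumps(jed, indent=2))
print(jed_filename("my-plugin", "de_DE", "build/index.js"))
```

One caveat: the md5 is taken from the script's path relative to the plugin or theme root as WordPress computes it, so it is worth verifying the generated filename against one file produced by wp i18n make-json before relying on this.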

Signal translation with Seq2Seq model

I'm currently doing some research on signal processing, and I have a dataset that includes the signal itself and its "translation".

A signal and its translation

So I want to use a Many-to-Many RNN to translate the first into the second.

After spending a week reading about the different options I have, I ended up learning about RNNs and Seq2Seq models. I believe this is the right approach for the problem (correct me if I'm wrong).

Now, as the input and the output have the same length, I don't need to add padding, so I tried a simple LSTM layer followed by a TimeDistributed Dense layer (Keras):

model = Sequential([
    LSTM(256, return_sequences=True, input_shape=SHAPE, dropout=0.2),
    TimeDistributed(Dense(units=1, activation="softmax"))
])

model.compile(optimizer='adam', loss='categorical_crossentropy')

But the model seems to learn nothing from the sequence, and when I plot the "prediction", it is nothing but values between 0 and 1.

As you can see, I'm a beginner, and the code I wrote might not make sense to you, but I need guidance on a few questions:

  • Does the model make sense for the problem I'm trying to solve?
  • Am I using the right loss/activation functions?
  • And finally, please correct/teach me.
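One detail worth checking independently of the model choice: with Dense(units=1), the softmax is computed over a single value per timestep, so it can only ever output 1.0. A plain-Python illustration (no Keras needed):

```python
# Illustration: softmax over a single logit is always exactly 1.0, so a
# Dense(units=1, activation="softmax") layer outputs a constant 1.0 at
# every timestep no matter what the preceding LSTM produces.
import math

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

for logit in (-3.0, 0.0, 7.5):
    print(softmax([logit]))  # [1.0] each time
```

If the target signal is real-valued, a linear activation with a mean-squared-error loss is the usual starting point for sequence regression; softmax with categorical cross-entropy is meant for one-hot class targets.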

trouble recovering rotation and translation from essential matrix

I am having trouble recovering rotation and translation from an essential matrix. I am constructing this matrix using the following equation: \begin{equation} E = R \left[t\right]_x \end{equation}

which is the equation listed on Wikipedia. With my calculated essential matrix I am able to show that the following relation holds: \begin{equation} \hat{x}^{\top} E x = 0 \end{equation}

for the forty or so points I randomly generate and project into the coordinate frames. I decompose $E$ using SVD and then compute the two possible translations and the two possible rotations. These solutions differ significantly from the components I started with.

I have pasted a simplified version of the problem I am struggling with below. Is there anything wrong with how I am recovering these components?

import numpy as np

t = np.array([-0.08519122, -0.34015967, -0.93650086])

R = np.array([[ 0.5499506 ,  0.28125727, -0.78641508],
              [-0.6855271 ,  0.68986729, -0.23267083],
              [ 0.47708168,  0.66706632,  0.57220241]])

def cross(t):
    # skew-symmetric matrix [t]_x, so that cross(t) @ v == np.cross(t, v)
    return np.array([[    0, -t[2],  t[1]],
                     [ t[2],     0, -t[0]],
                     [-t[1],  t[0],     0]])

E = R @ cross(t)  # E = R [t]_x, as constructed above

u, _, vh = np.linalg.svd(E, full_matrices=True)

W = np.array([[0, -1, 0],
              [1,  0, 0],
              [0,  0, 1]])

# candidate factors from the standard SVD recovery (Hartley-Zisserman,
# which is stated for the E = [t]_x R convention)
Rs = [u @ W @ vh, u @ W.T @ vh]
Ts = [u[:, 2], -u[:, 2]]
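One property of the E = R [t]_x construction is easy to check numerically (sketch assumes NumPy): since E t = R (t x t) = 0, the translation direction spans the right null space of E, so with this convention it comes out of the SVD as the last right singular vector, not as u[:, 2], which is the usual recipe for the E = [t]_x R convention.

```python
# Sketch: with E = R [t]_x, the translation t is a RIGHT null vector of E,
# so it is recovered (up to sign and scale) as the last row of vh from the
# SVD, rather than as u[:, 2].
import numpy as np

t = np.array([-0.08519122, -0.34015967, -0.93650086])
R = np.array([[ 0.5499506 ,  0.28125727, -0.78641508],
              [-0.6855271 ,  0.68986729, -0.23267083],
              [ 0.47708168,  0.66706632,  0.57220241]])

def cross(v):
    return np.array([[    0, -v[2],  v[1]],
                     [ v[2],     0, -v[0]],
                     [-v[1],  v[0],     0]])

E = R @ cross(t)
print(np.allclose(E @ t, 0))  # t spans the right null space of E

u, s, vh = np.linalg.svd(E)
t_hat = vh[2]  # unit vector equal to t/||t|| up to sign
print(np.allclose(abs(t_hat @ t), np.linalg.norm(t)))
```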

curl to wfuzz translation

I am trying to get wfuzz to match a curl command that works. I know valid credentials, but wfuzz does not seem to pass them properly.

wfuzz -c -w user -w pass -b "session=cookie" --digest FUZZ:FUZ2Z ""

(the user and pass files contain the username and password, respectively)

curl -c cookie --digest -u user:pass

The target is running the Gunicorn web server.

trouble distinguishing between syntax-directed translation and syntax-directed definition

I am currently reading the dragon book, and chapter 2 is confusing me a lot. According to it, the definitions of the two terms are:

syntax-directed translation

Syntax-directed translation is done by attaching rules or program fragments to productions in a grammar.

My understanding of this is that within a production, inside { }, we have rules/program code that is executed (these rules/programs are also known as semantic actions).


syntax-directed definition

A syntax-directed definition associates (1) with each grammar symbol, a set of attributes, and (2) with each production, a set of semantic rules for computing the values of the attributes associated with the symbols appearing in the production.

My understanding of this definition is that with each production we associate an action at the end of the tail to compute the value of an attribute (these actions are also known as semantic rules).

I have a few doubts regarding the two definitions:

  1. Is my understanding of these two definitions correct?

  2. Do syntax-directed translation and syntax-directed definition accomplish the same task in two different ways?

  3. Is a syntax-directed translation scheme the same thing as syntax-directed translation?

  4. What is the difference between a syntax-directed translation scheme and just a translation scheme?
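To make the "semantic action" idea concrete, here is a small illustrative sketch (my own example, not from the book): the classic infix-to-postfix translation scheme expr -> expr + term { print('+') }, implemented as a recursive-descent translator with the actions embedded at the points where they appear in the productions.

```python
# Illustrative sketch: the infix-to-postfix translation scheme for
# single-digit operands with + and -, written as a recursive-descent
# translator. The semantic actions (emitting the digit, emitting the
# operator after both operands) are embedded directly in the parsing code.
def to_postfix(s: str) -> str:
    out = []
    pos = 0

    def term():
        nonlocal pos
        # production: term -> digit  { emit(digit) }
        out.append(s[pos])
        pos += 1

    def expr():
        nonlocal pos
        # production: expr -> term (('+' | '-') term { emit(op) })*
        term()
        while pos < len(s) and s[pos] in '+-':
            op = s[pos]
            pos += 1
            term()
            out.append(op)  # semantic action: emit operator in postfix order

    expr()
    return ''.join(out)

print(to_postfix("9-5+2"))  # prints 95-2+
```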

Translation anchor insert broken (#file_links)

Hello @sven, sorry to bother you again.

It seems that when using #file_links to pull in lines of content and then wrapping the #file_links in the translate macro, SER does not insert the anchor properly and instead breaks the link; it ends up as a giant block of text.

Note: I have tested this without using #file_links (by grabbing a random article and surrounding it with the translate macro) and it works properly.

Here is the code I use to create my article templates:

#trans_en_sk #file_links[example.dat,1,S] #file_links[example.dat,1,S] #file_links[example.dat,1,S] #notrans

The test site/anchor I used in this example (note: only testing in preview mode):

Output of the bug (blurred for text privacy):

You can see that the entire bottom part becomes a link. I have now tested this many times, and the result is the same every time: a giant block of text as the link, and the actual anchor not linked at all.

Please let me know if this is a preview-only bug, because I'm hesitant to test it on a live site in this situation.

Integer programming to MAX-SAT translation

Reading A Comparison of Methods for Solving MAX-SAT Problems, I can see that a MAX-SAT problem can be translated into an integer programming (IP) problem.

Definition of MAX-SAT [Wikipedia]:

The maximum satisfiability problem (MAX-SAT) is the problem of determining the maximum number of clauses, of a given Boolean formula in conjunctive normal form, that can be made true by an assignment of truth values to the variables of the formula.

Definition of integer programming [Wikipedia]:

An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers

Is there a similar translation in the other direction, i.e., can a given IP problem be translated into MAX-SAT?

The MAX-SAT problem is NP-hard [2] and integer programming is NP-complete [3]. Technically, it might not be very sensible to solve an (in principle) simpler problem using a formulation that is (in principle) harder to solve, but it would be great to understand the encoding in both directions. My motivation is to compare MAX-SAT solvers with IP solvers.

More precisely, given an IP problem such as: $$ x_1 + x_2 + \neg x_3 \leq 1 \\ \neg x_1 + x_4 + x_3 \leq 1 \\ \dots $$

with $n$ inequalities/equations in total and each $x_i \in \mathbb{B}$, the goal is to maximize the number of satisfied inequalities (out of $n$). How can this be encoded as a MAX-SAT problem?

First attempt: each inequality encodes AtMostOne over the complements of the variables (equivalently, AtLeast($n-1$) over the original IP inequality). But representing AtMostOne in CNF creates multiple clauses, which destroys the possibility of a direct MAX-SAT encoding.
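For concreteness, the standard pairwise encoding of AtMostOne (my notation, not taken from the paper) over literals $\ell_1, \dots, \ell_k$ is $$ \mathrm{AtMostOne}(\ell_1, \dots, \ell_k) = \bigwedge_{1 \leq i < j \leq k} (\neg \ell_i \lor \neg \ell_j), $$ so the row $x_1 + x_2 + \neg x_3 \leq 1$ contributes the clauses $(\neg x_1 \lor \neg x_2)$, $(\neg x_1 \lor x_3)$ and $(\neg x_2 \lor x_3)$. Because each row expands to $\binom{k}{2}$ clauses, "maximize the number of satisfied rows" no longer coincides with "maximize the number of satisfied clauses"; a weighted or partial MAX-SAT formulation, in which each row's clause group shares a single relaxation variable, is one usual workaround.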