How are machine learning libraries created?

I would like to know how machine learning libraries (or large-scale libraries in general) are created. For example, Python doesn’t have a built-in array system, but C does. So how is that functionality made available to Python, and how does such a project start and grow into the final product we know today (like NumPy)?
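To make the question concrete, here is a minimal sketch using only the Python standard library (this is just my illustration of the general wrapping pattern, not how NumPy is actually built; NumPy’s core is a much larger C code base exposed through the CPython C API):

import array
import ctypes

# A C-backed typed array from the standard library: a contiguous buffer of C doubles.
a = array.array('d', [1.0, 2.0, 3.0])
addr, length = a.buffer_info()   # raw memory address and element count

# View the same memory the way C code would see it.
c_view = (ctypes.c_double * length).from_address(addr)
c_view[0] = 42.0                 # write through the "C" view

print(a[0])                      # 42.0 -- one buffer, two views of it

The point of the sketch is that the heavy lifting lives in compiled C code operating on raw buffers, while the Python layer is a thin object that owns the buffer and forwards operations to it.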

P.S. Let me know if this is not the right community for general computing questions; there is significant overlap among the CS Stack Exchange sites. If this isn’t the right place, please recommend the appropriate Stack Exchange site for such questions.

Also, I couldn’t find a more relevant tag, so I tagged this with machine learning.

If the Lorenz cipher machine was so much more advanced than the Enigma, how was it broken without having a physical machine?

The Enigma was commercially available before the war, and even during the war the Allied codebreakers got their hands on working machines, which helped greatly in breaking the encryption used by the Germans.

The Lorenz was said to be much more secure, required mains power, was much bigger and heavier, and they never got a single “specimen” to look at. Yet they somehow “deduced its inner workings” without having ever seen one, well before the end of the war.

How is this possible? How could this highly complicated machine be reverse-engineered remotely with nothing to go by except the encrypted messages? I fundamentally don’t understand this.

I know that very simple ciphers can be determined by checking how often patterns repeat over time, which helps the attacker because they can tell which letters these correspond to, but this simple technique seems to fall apart completely once it’s anything but the simplest form of cipher. This massive electro-mechanical machine, made specifically to be secure and superior to the cheap and affordable Enigma, ought to have been utterly impossible to break.

I really don’t get it. I “get” how somebody much smarter than I can calculate the math and physics to get a rocket to fly out in space, even though I couldn’t do it myself, but breaking these cipher machines without even having a physical machine to look at… it frankly seems impossible. Almost as if they had access to some kind of supernatural “black magic”.

“Matrices of machine- or arbitrary-precision real numbers” error while using Arnoldi on a large sparse matrix

I’m trying to use the built-in Arnoldi method in Mathematica to compute the first 1440 eigenvalues of a large sparse matrix. After importing

Elements12 = NN /@ Uncompress[Import["Elements12.m"]]

where NN is the following function

NN[{a_, b_} -> c_] := {a, b} -> N[c];

I define a sparse matrix

s = SparseArray[Elements12] 

which I’m able to nicely plot using MatrixPlot

MatrixPlot[s]. 

However, when it comes to finding the first 1440 eigenvalues

Sort[Eigenvalues[s, -1000, Method -> "Arnoldi"]] 

I receive the error “Method -> Arnoldi can only be used for matrices of machine- or arbitrary-precision real numbers”, despite my matrix elements being machine precision after applying NN. My matrix is 344640 by 344640 and has 5,300,000 nonzero elements. Any help would be appreciated.

Errors in importing files from Kontent Machine into GSA Search Engines Ranker.

Hello GSA Team
I have discovered the following error; please take a look and help me.
I have used the Kontent Machine software to create content for the GSA Search Engines Ranker.
For example, I created 100 files from Kontent Machine with the feature “NO Spacing” or “Blank Line”.

I then imported those 100 files as “Article” into the GSA Search Engines Ranker software.

I see the following error: only the first file produces one long, complete article, while the remaining 99 files produce only short content.

I think that GSA cannot import the full content; it only imports the first paragraph, which causes the error above.
I will attach an image so you can easily see the problem.

I will also attach a few files from Kontent Machine for you to check, so that if there are errors you will be able to fix them.
Thank you so much

Can CN=localhost be used on a server that should run on any machine [duplicate]

I have a question about self-signed certificates that, after several searches, I don’t feel I’ve found a concrete answer to.

Say I have generated a self-signed server certificate with CN=localhost. Does this mean that I can use that certificate in a server and be able to run that server on any machine in a LAN, where any client on the network with the certificate public key can communicate with the server (i.e. the server listens to any IP)?

As an example, I used the following script to generate certificates for use in a mutual TLS scenario (based on this answer):

echo Generate CA key:
openssl genrsa -passout pass:1111 -aes256 -out ca.key 4096

echo Generate CA certificate:
openssl req -passin pass:1111 -new -x509 -days 36500 -key ca.key -out ca.crt -subj "/C=UK/ST=UK/L=London/O=YourCompany/OU=YourApp/CN=MyRootCA"

echo Generate server key:
openssl genrsa -passout pass:1111 -aes256 -out server.key 4096

echo Generate server signing request:
openssl req -passin pass:1111 -new -key server.key -out server.csr -subj "/C=UK/ST=UK/L=London/O=YourCompany/OU=YourApp/CN=localhost"

echo Self-sign server certificate:
openssl x509 -req -passin pass:1111 -days 36500 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt

echo Remove passphrase from server key:
openssl rsa -passin pass:1111 -in server.key -out server.key

echo Generate client key:
openssl genrsa -passout pass:1111 -aes256 -out client.key 4096

echo Generate client signing request:
openssl req -passin pass:1111 -new -key client.key -out client.csr -subj "/C=UK/ST=UK/L=London/O=YourCompany/OU=YourApp/CN=localhost"

echo Self-sign client certificate:
openssl x509 -passin pass:1111 -req -days 36500 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt

echo Remove passphrase from client key:
openssl rsa -passin pass:1111 -in client.key -out client.key

What I am finding is that the server loads fine on some machines, however on other machines it fails to start, reporting that it could not bind to the port. I have checked that the port is definitely not being used by anything. Also the server starts fine if I don’t use any certificates.

Am I doing something specifically wrong in the script, or is it not possible to have a certificate with CN=localhost on a server that should be able to run on any machine in a local network and accept connections from any client on the network that trusts the public key?
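For reference, here is what I understand to be the common alternative to CN=localhost: listing the hostnames/IPs that clients will actually use in a subjectAltName extension. This sketch uses the Python cryptography package with made-up names and an IP purely as an illustration; it is not part of my actual setup:

import datetime
import ipaddress
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Hypothetical names/IP a client might use to reach the server.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "myserver.local")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed for brevity
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .add_extension(
        x509.SubjectAlternativeName([
            x509.DNSName("myserver.local"),
            x509.DNSName("localhost"),
            x509.IPAddress(ipaddress.IPv4Address("192.168.1.10")),
        ]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

with open("server_san.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))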

Rubber Calender Machine Made in China

Dalian Huahan Rubber & Plastic Machinery Co., Ltd. is a professional enterprise dedicated to designing and manufacturing rubber machinery. Through long cooperation with large tire factories, conveyor belt factories, wire and cable factories, rubber plate companies, rubber hose and rubber belt factories, recycled rubber factories, PVC artificial leather factories, PVC film factories, and other rubber and plastic products companies, it has contracted to design and manufacture complete production lines for all kinds of rubber and plastic products.
The company was founded in 1986 in the Xinhua Industrial Zone of Zhuanghe, Dalian, China; it covers 120,000 square meters and has more than 500 employees, including research and development, manufacturing, and service teams. The company has passed ISO 9001 quality management system certification and CE certification, and has successively received honors including AAA-rated enterprise, contract-keeping and credit-honoring unit, top-10 tax-paying enterprise, Dalian famous brand product, Liaoning famous brand product, Liaoning famous trademark, and high-tech enterprise. The company holds dozens of utility model and invention patents. It has won the trust of domestic and foreign customers with its advanced technology, strong processing capability, and attentive service. Its products not only hold a high share of the domestic market, but are also exported to the United States, Europe, Africa, Japan, South Korea, Southeast Asia, the Middle East, and elsewhere around the world.
Huahan serves its customers with sincerity and puts the customer first. The company is committed to providing outstanding products and complete solutions for global customers and to building high-quality Chinese manufacturing.
website:http://www.huahandl.com/
website2:http://www.rubbermachinery.org/

How to securely transfer files from a possibly infected WinXP machine to Linux?

There is an old Windows XP installation that was being used without even an antivirus. This WinXP computer holds important files that should be moved to a Linux installation. Given the lack of any security practices on the part of the WinXP owner, it seems possible that the data contains malware.

I can now:

  • Ignore this and simply keep using these files in Linux; after all, Linux is supposed not to need AV.
    • At the very least, the files should be scanned to avoid accidentally redistributing malware if they are ever sent to anyone else again.
    • The files contain, e.g., a multitude of .odt / .doc documents – maybe it’s a very remote possibility, I don’t know, but malicious macros are OS-independent?
  • Install ClamAV on the Linux machine, scan the files, and remove ClamAV afterwards.
    • AFAIK ClamAV is known for its poor detection rate – scanning the files with it is only marginally better than not scanning at all?
  • Install an AV on the WinXP machine (Panda Free AV still supports WinXP, doesn’t it?), scan the files there, and only transfer them afterwards.
    • Which means going online with WinXP once again – this just feels wrong.
  • Are there any options I overlooked?

I feel stuck and am not sure how to proceed.

Note that I would rather not manually inspect the files and, e.g., remove potentially suspicious files like .exe files while leaving safe files like .png files intact. The reason is that the data is not mine; I was just asked to transfer it so that someone else can use it.

What is the accepted best practice in a situation like this?

Asymptotic analysis for machine learning algorithms

I wanted to know whether it would be practical and useful to analyse machine learning algorithms in terms of asymptotic computational complexity.

I have noticed this is very uncommon. However, I believe it would help us compare these algorithms and decide which one to use for a given scenario.

I am also aware that the running time of most machine learning algorithms is highly dependent on the data. For example, the gradient descent algorithm can require significantly more iterations on some data sets than on others.
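To make that concrete, here is a small sketch of my own (assuming NumPy, with made-up data) showing that the per-iteration cost of batch gradient descent for least squares is O(n·d), while the number of iterations needed to reach a fixed tolerance depends heavily on how well-conditioned the data is:

import numpy as np

def gd_iterations(X, y, lr, tol=1e-6, max_iter=50000):
    """Run batch gradient descent for least squares; return the iteration count."""
    w = np.zeros(X.shape[1])
    for k in range(1, max_iter + 1):
        grad = X.T @ (X @ w - y) / len(y)    # O(n*d) work per iteration
        if np.linalg.norm(grad) < tol:
            return k
        w -= lr * grad
    return max_iter                          # cap reached (did not converge)

rng = np.random.default_rng(0)
n, d = 200, 20
X_well = rng.normal(size=(n, d))             # well-conditioned features
X_ill = X_well * np.logspace(0, 3, d)        # badly scaled -> ill-conditioned
w_true = rng.normal(size=d)

for label, X in [("well-conditioned", X_well), ("ill-conditioned", X_ill)]:
    y = X @ w_true
    lr = len(y) / np.linalg.norm(X, 2) ** 2  # safe step size, roughly 1/Lipschitz
    print(label, gd_iterations(X, y, lr))    # far more iterations on the ill-conditioned data

So the per-iteration cost is easy to state asymptotically, but the total running time carries a data-dependent factor (here, the conditioning) that plain O-notation hides.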

Considering this, what would be a nice complexity measure for comparing machine learning algorithms?

Can a second-order busy beaver function/Turing machine be programmed?

I have seen a computer program at https://github.com/pkrumins/busy-beaver/blob/master/busy-beaver.py that computes busy beaver numbers (well, pretends to). It covers busy beaver machines of up to 4 states, and I am wondering whether it would be reasonably possible for a programmer to add a second-order, n-state, 2-symbol Turing machine.
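For reference, here is a minimal sketch of my own (not code from that repository) of the ordinary, first-order simulation such a script performs: run an n-state, 2-symbol Turing machine from an explicit transition table and count its steps. My understanding is that the hard part of a second-order machine is the oracle call itself (e.g. “does this machine halt?”), which is not computable, so it can at best be approximated or treated axiomatically:

from collections import defaultdict

def run(transitions, max_steps=10**6):
    """transitions: {(state, symbol): (write, move, next_state)}; 'H' is the halt state."""
    tape = defaultdict(int)            # blank tape of 0s, unbounded in both directions
    state, head, steps = 'A', 0, 0
    while state != 'H' and steps < max_steps:
        write, move, state = transitions[(state, tape[head])]
        tape[head] = write
        head += 1 if move == 'R' else -1
        steps += 1
    return steps, sum(tape.values())   # (steps taken, ones left on the tape)

# The classic 2-state, 2-symbol busy beaver: halts after 6 steps with 4 ones written.
bb2 = {
    ('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
    ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'H'),
}
print(run(bb2))   # (6, 4)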

Measures of performance in machine learning

I’m new to machine learning and struggle to interpret the results I get from different measures of performance. If I have, e.g., accuracy, precision, recall, F1, FPR, and MCC for several prediction models and want to find out which model performs best, what do I look for? I would assume accuracy is the most important?
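For reference, my current understanding of how those measures all derive from a single confusion matrix, as a small sketch in plain Python with made-up counts:

import math

def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                    # true positive rate
    f1 = 2 * precision * recall / (precision + recall)
    fpr = fp / (fp + tn)                       # false positive rate
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                f1=f1, fpr=fpr, mcc=mcc)

# Made-up confusion-matrix counts for two models on the same (imbalanced) test set:
print(metrics(tp=80, fp=10, fn=20, tn=890))    # model 1
print(metrics(tp=90, fp=60, fn=10, tn=840))    # model 2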

Also, what influence does the error have on the interpretation? E.g., if method 1 has an accuracy of 87 ± 5% and method 2 has 84 ± 5%, I would assume method 1 is better. Is that correct?