Diffie-Hellman implementation - NodeJS

Diffie-Hellman is a key-exchange algorithm in which the client and server each generate a public/private key pair, exchange their public keys, and combine the other side's public key with their own private key to derive the same shared secret.

But there is a point of confusion in the implementation. Here is the code…

const crypto = require('crypto');
const express = require('express');
const app = express();

// Generate server's keys...
const server = crypto.createDiffieHellman(139);
const serverKey = server.generateKeys();

// Generate client's keys...
const client = crypto.createDiffieHellman(server.getPrime(), server.getGenerator());
const clientKey = client.generateKeys();

// Exchange and generate the secret...
const serverSecret = server.computeSecret(clientKey);
const clientSecret = client.computeSecret(serverKey);

First of all, the server creates an instance of the DiffieHellman class to generate its keys. But the client needs the server's prime (.getPrime()) and generator (.getGenerator()) to create its own DiffieHellman instance and generate its keys.

So the server needs to pass the values of server.getPrime() and server.getGenerator() to the client. What happens if a man-in-the-middle attack occurs at this point? If a hacker somehow gets hold of these two values, can they also derive the same secret key? (-_-)

Any solution? Assume this system runs without TLS.

Unusual GET requests in my nodejs journal – has my nginx/node been hacked?

Saw this in the journalctl output for a service I run:

jul 29 12:39:05 ubuntu-18 node[796]: GET http://www.123cha.com/ 200 147.463 ms - 8485
jul 29 12:39:10 ubuntu-18 node[796]: GET http://www.rfa.org/english/ - - ms - -
jul 29 12:39:10 ubuntu-18 node[796]: GET http://www.minghui.org/ - - ms - -
jul 29 12:39:11 ubuntu-18 node[796]: GET http://www.wujieliulan.com/ - - ms - -
jul 29 12:39:11 ubuntu-18 node[796]: GET http://www.epochtimes.com/ 200 133.357 ms - 8485
jul 29 12:39:14 ubuntu-18 node[796]: GET http://boxun.com/ - - ms - -

These GET requests are not coming from any code I’ve written.

"Correct" entries look like this:

jul 29 12:41:46 ubuntu-18 node[796]: GET / 304 128.329 ms - -
jul 29 12:41:47 ubuntu-18 node[796]: GET /stylesheets/bootstrap.min.css 304 0.660 ms - -
jul 29 12:41:47 ubuntu-18 node[796]: GET /stylesheets/font-awesome-4.7.0/css/font-awesome.min.css 304 0.508 ms - -
jul 29 12:41:47 ubuntu-18 node[796]: GET /img/250x250/deciduous_tree_5.thumb.png 304 0.548 ms - -
jul 29 12:41:47 ubuntu-18 node[796]: GET /stylesheets/style.css 304 7.087 ms - -
jul 29 12:41:47 ubuntu-18 node[796]: GET /img/logos/250x250/brf_masthugget.250x250.jpg 200 0.876 ms - 9945

The server is a Node.js instance (v8.10.0) behind nginx v1.14.0, running on an up-to-date Ubuntu Server 18.04.

The Ubuntu machine is a DigitalOcean droplet.

I’ve tried generating similar requests from a JavaScript console, but the browser blocks plain-http access (not allowing mixed http and https); if I try https I get a cross-origin error – which is good :)

I’m puzzled as to how these GET requests are being generated and sent.

What is the possible character set for the jsonwebtoken nodejs library?

I see a lot of folks using the following for a private key when using the jsonwebtoken library:

const hexStr = require('crypto').randomBytes(64).toString('hex') 

But this returns a string containing only the characters 0-9 and a-f. That doesn’t seem like good practice.

jwt.sign({ name: 'kennedy' }, hexStr);

However, the following approach seems much better and more secure, since now we’re using the full character set as binary data:

jwt.sign({ name: 'kennedy' }, Buffer.from(hexStr, 'hex').toString());

Thoughts on this? Is the second approach a better one? I’m just looking for affirmation (or not!) that I’m doing it correctly.

Suspicious behavior by Google when verifying users via nodejs

I’m building a user authentication system in Nodejs and use a confirmation email to verify that a new account is real.

The user creates an account, which prompts him/her to check the email for a URL that he/she clicks to verify the account.

It works great, no issues.

What’s unusual is that in testing, when I email myself (to simulate the new user process), and after I click the verify-URL, immediately afterward there are two subsequent connections to the endpoint. Upon inspection, it appears the source IPs belong to Google. What’s even more interesting is that the user agent strings are random versions of Chrome.

Here’s an example of the last sequence. The first line is the HTTP 200 request, and the next two (the HTTP 400s) are Google. (Upon verification I remove the user’s verification code from the database, so subsequent requests get HTTP 400s.)

- - [03/Jul/2020:20:35:40 +0000] "GET /v1/user/verify/95a546cf7ad448a18e7512ced322d96f HTTP/1.1" 200 70 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36" "hidden.com" "" "US" "en-US,en;q=0.9"
- - [03/Jul/2020:20:35:43 +0000] "GET /v1/user/verify/95a546cf7ad448a18e7512ced322d96f HTTP/1.1" 400 28 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36" "hidden.com" "" "US" "en-US,en;q=0.9"
- - [03/Jul/2020:20:35:43 +0000] "GET /v1/user/verify/95a546cf7ad448a18e7512ced322d96f HTTP/1.1" 400 28 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.122 Safari/537.36" "hidden.com" "" "US" "en-US,en;q=0.9"

Now, I’m using Cloudflare, so the first IP address in each line is a Cloudflare IP, but the second one is the real client IP [as reported by Cloudflare] – I modified my "combined" log format in Nginx.

Anyhow, any idea what this is? Or why Google would be doing this?

It’s just incredibly suspicious given the use of randomized user agent strings.

And one last note: if I open the Chrome console and go into the Network tab before I click a verification link from my email, the two subsequent connections never come. It’s like Google knows I’m monitoring… This is so incredibly odd that I had to ask the community. I’m thinking maybe an extension is infected with some kind of tracking, but then how do the IPs come back as Google?
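If these turn out to be automated link scanners (mail providers and browser security services are known to prefetch URLs found in emails), one common mitigation is to make the GET non-destructive and only consume the one-time code on an explicit POST, since prefetchers issue bare GETs. A dependency-free sketch of that logic; the store and handler names here are hypothetical:

```javascript
// Stand-in for the verification-codes table.
const pendingCodes = new Map(); // code -> userId

// GET never mutates state; it only reports whether the code is live,
// so a scanner hitting the link cannot burn it.
function handleVerifyGet(code) {
  return pendingCodes.has(code)
    ? { status: 200, body: 'Click "Confirm" to verify your account.' }
    : { status: 400, body: 'Unknown or already-used code.' };
}

// The code is consumed exactly once, on the explicit confirmation POST.
function handleVerifyPost(code) {
  const userId = pendingCodes.get(code);
  if (userId === undefined) {
    return { status: 400, body: 'Unknown or already-used code.' };
  }
  pendingCodes.delete(code);
  return { status: 200, body: `User ${userId} verified.` };
}
```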

Rate my idea: NodeJS as root behind Apache as a proxy with password

I’m the admin of a small Linux server owned by a relative of mine. He’s fairly tech savvy, but more at the level of a power user than an expert. I want to make a handy visual tool for him that would allow him to do some simple server tasks: add/remove users and change their passwords; set up/remove websites; set up/remove mailboxes (I’ve decoupled those from system users, so it’s a separate task if needed); and perhaps other things as needed.

Most of these things can be done from the command line, and some require editing config files, but lengthy incantations with a lot of moving parts are just asking for trouble. I’d rather have a handy script.

The trouble is that most of these tasks require superuser permissions. He already has root access, so I could make a text-mode tool (which would have to be run as root), but a website would be so much nicer.

There’s already an Apache web server in place on port 80, but running that as root would obviously be a lousy idea. Similarly, I don’t want to store the root password anywhere.

So I had the idea of making the website in NodeJS and running the Node process as root, listening only on a specific port that accepts incoming connections only from localhost. Apache would then be a non-elevated proxy in front of the NodeJS app. In addition, both Apache and NodeJS would ask for a password (taken from the same .htpasswd file).

If you can’t enter the password to Apache, you can’t even get to Node. If you hack Apache (or have access to some local account) you still need the password to get the Node app to cooperate.

Would this be safe enough? OK, that’s somewhat subjective, but considering that I’m more worried about opportunistic outside attackers than malicious local users, would this be OK? There’s really nothing of much value stored on the server; I don’t expect targeted hacking because there’s not much to gain (wanna see pictures of my kids? You’re welcome…). I consider automated scanners and attackers trying to add to their botnets/DB leaks the main threat. Any other suggestions on how to achieve this?

Why do the NodeJS Crypto docs use CBC instead of GCM for RSA key pairs?

I have read that GCM is almost always more secure than CBC when implemented correctly.

However, the NodeJS documentation uses CBC in its example instead. The key pair will be stored in the Node environment.

Since the private key is stored locally, and CBC is an acceptable encryption mode for local files according to this answer, is that a secure enough implementation, or should GCM be used, as in this sample code?

Functions in nodejs with callbacks

I was following a tutorial and came across an example using promises:

function requestName(userName) {
  const url = `https://api.github.com/users/${userName}`;
  fetch(url)
    .then(function (res) {
      return res.json();
    })
    .then(function (json) {
      console.log(json.name);
    })
    .catch(function (e) {
      console.log(`The error is: ${e}`);
    });
}

I’m still learning about functions, but I was trying to write the same function with a callback. The result prints nothing to the console, so maybe I’m doing something wrong. Here is the code:

function requestName(userName) {
  const url = `https://api.github.com/users/${userName}`;
  fetch(url, function (err, res) {
    if (err) {
      console.log(`The error is: ${err}`);
    } else {
      const json = res.json();
      console.log(json);
    }
  });
}

Chat between two people with NodeJS and Socket.io

I’m somewhat new to socket.io and I want to build a kind of help/support feature on my platform: a chat between a user and the administrator, to resolve questions or any other kind of issue.

The problem is that I can’t get the two of them to communicate. Here is part of the code.

Client or User

enviar.addEventListener('click', function () {
  // Send the data to the server
  socket.emit('asistenciaCliente', {
    usuario: usuario.value,
    mensaje: mensaje.value
  });

  mensaje.value = ''; // Clear the field when the message is sent
  mensajeChange();
  mensaje.focus();
});

socket.on('asistenciaServidor', function (datos) {
  // console.log(datos);
  salida.innerHTML +=
    `<p>
      <strong>${datos.usuario}:</strong> ${datos.mensaje}
    </p>`;
  salida.scrollIntoView(false); // Automatically show the latest messages on screen
});




socket.on('asistenciaCliente', (datos) => {
  console.log(datos);

  socket.join(datos.usuario);
  io.sockets.to(datos.usuario).emit('asistenciaServidor', datos);
});

It’s worth mentioning that it already creates the private chats for each user, but it doesn’t show the messages to the administrator :(

I hope you can help me.
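A likely cause is that the administrator's socket never joins the per-user room, so the room broadcast never reaches it. The room mechanics can be modeled without socket.io in a few lines (names mirror the snippets above, but this is a conceptual sketch, not the socket.io API):

```javascript
// Minimal model of socket.io rooms: a room is a set of sockets, and
// an emit to the room reaches every member.
const rooms = new Map(); // room name -> Set of sockets

function join(room, socket) {
  if (!rooms.has(room)) rooms.set(room, new Set());
  rooms.get(room).add(socket);
}

function emitToRoom(room, event, data) {
  for (const socket of rooms.get(room) || []) {
    socket.received.push({ event, data });
  }
}

const userSocket = { received: [] };
const adminSocket = { received: [] };

join('usuario1', userSocket);
emitToRoom('usuario1', 'asistenciaServidor', { mensaje: 'hola' });
// adminSocket.received is still empty: the admin never joined the room.

join('usuario1', adminSocket); // the missing step on the server side
emitToRoom('usuario1', 'asistenciaServidor', { mensaje: 'hola de nuevo' });
// Now both sockets receive the event.
```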

How do I send a value through an emitter from nodejs?

My code reads data from a txt file line by line and then sends the values through an emitter to another file. It has read a lot of data from the txt, so it is receiving and emitting many values every second. What I want is for the code to receive all those values from the txt but emit only one value per second. And the value it emits should not be the first one received, then the second, and so on; rather, it should be whichever value was received right when the code ran after that second elapsed. Does that make sense?

Here is the code:

const fs = require('fs');
const readline = require('readline');
const EventEmitter = require('events');

const myEmitter = new EventEmitter();
// Read the input file line by line (file name assumed)
const readFile = readline.createInterface({
  input: fs.createReadStream('datos.txt')
});

readFile.on('line', function (line) {
  const element = line.split('\t');
  let counter = 0;
  for (const property of element) {
    counter++;
    console.log('1', counter, property);
    if (counter === 7) {
      const convertNumber = parseFloat(property);
      console.log('2', counter, convertNumber, property);
      myEmitter.emit('event', convertNumber);
      counter = 0;
    }
  }
});

Nodejs pnp-auth (adfs) behind corporate proxy

We have a node/express app that connects to SharePoint on-prem using pnp-auth and node-sp-auth-config (IE connection settings: automatic). Works like a charm.

We are moving this app to another server. On that server, the IE connection needs to be on manual proxy configuration to be able to connect to SharePoint via the browser.

For the node app, the result is that it cannot connect to SharePoint:

FetchError: request to ….. failed, reason: connect ETIMEDOUT …..:443
    at ClientRequest. (d:\NODE\QOMV-CRExport\node_modules\pnp-auth\node_modules\node-fetch\lib\index.js:1444:11)
    at ClientRequest.emit (events.js:182:13)
    at TLSSocket.socketErrorListener (_http_client.js:392:9)
    at TLSSocket.emit (events.js:182:13)
    at emitErrorNT (internal/streams/destroy.js:82:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
    at process._tickCallback (internal/process/next_tick.js:63:19)

Does anybody have any pointers on how to solve this?
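A common cause of ETIMEDOUT here is that node-fetch (which pnp-auth uses under the hood) does not read the system/IE proxy settings; it has to be given an HTTP(S) agent that tunnels through the corporate proxy (for example via the https-proxy-agent package, not shown here since it is an extra dependency). As a starting point, a small dependency-free helper for picking up the proxy URL from the conventional environment variables:

```javascript
// Corporate setups usually expose the proxy via these environment
// variables; whichever is set can then be handed to a proxy agent.
function getProxyUrl(env) {
  return (
    env.HTTPS_PROXY ||
    env.https_proxy ||
    env.HTTP_PROXY ||
    env.http_proxy ||
    null
  );
}

console.log(getProxyUrl(process.env)); // e.g. a proxy URL, or null if unset
```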