node.js app on ubuntu linux with postgresql on virtual box on windows client

I set up, ubuntu linux on virtual box. Then I installed postgresql and node.js So basically I made a clean installation and i want that ubuntu handles my requests. When i run node server.js , a Sequelize.js error occurred.

Unhandled rejection SequelizeConnetionRefusedError: connect ECONNREFUSED

PostgreSQL is listening on port 5432, and once i run node server.js,tables are created with no problem.

I think is a problem with my client, and I don’t know why.

postgres.conf listen_addresses=’*’.

pg_hba.conf host all all trust.

I have little bit experience with Linux Ubuntu and configuration So a big hug and thanks for who will help me

How To Set Up a Node.js Application for Production on a CentOS 7 VPS

In this tutorial, we will create a simple Node.js application and put it into a production ready environment. We are going to install and use the following pieces of software:

  • Nginx as a reverse proxy. It will make the app accessible from your browser, and in case you ran several sites from the same server, it could serve as a load balancer as well.
  • Certbot will let us install Let’s Encrypt certificates. Access to the site will be secure as only the HTTPS requests will be honored.
  • NPM package called PM2 will turn a node.js app into a service. The app will run in the background, even after system crashes or reboots.

What We Are Going To Cover

  • Install Nginx
  • Install firewall-cmd and enable rules for Nginx
  • Install the latest version of Node.js
  • Add NPM packages for the app that we are making
  • Create the example app to show all characters in upper case
  • Configure Nginx as a reverse proxy
  • Install Let’s Encrypt certificates to serve HTTPS requests
  • Access the app from the browser
  • Install PM2, a production process manager for Node.js applications with a built-in traffic Load Balancer
  • Use PM2 to restart the Node.js app on every restart or reboot of the system


We use Centos 7:

  • Starting with a clean VPS with
  • At least 512Mb of RAM and
  • 15Gb of free disk space.
  • You will need root user access via SSH
  • A domain name pointed to your server’s IP address (it can also be a subdomain) using A records at your DNS service provider
  • We use nano as our editor of choice, and you can install it with this command:
yum install nano 

Step 1: Install Nginx

After you have logged in as a root user, you will install Nginx. Add the CentOS 7 EPEL repository with this command:

yum install epel-release 

Next, install Nginx:

yum install nginx 

Press ‘y’ twice and the installation will be finished. Enable Nginx service to start at server boot:

systemctl enable nginx 

Step 2: Change Firewall Rules to Enable Nginx

Let’s now install firewall-cmd, the command line front-end for firewalld (firewalld daemon), for CentOS. It supports both IPv4 and IPv6, firewall zones, bridges and ipsets, allows for timed firewall rules in zones, logs denied packets, automatically loads kernel modules, and so on.

Install it in the usual manner, by using yum:

yum install firewalld 

Let us now start it, enable it to auto-start at system boot, and see its status:

systemctl start firewalld systemctl enable firewalld systemctl status firewalld 

Node.js apps require a port that is not used by the system, but is dedicated to that one app only. In our examples, we might use ports such as 3000, 8080 and so on, so we need to declare them explicitly, otherwise the app won’t run.

Here is a list of ports and feel free to add any other that your host requires for the normal functioning of the system:

firewall-cmd --permanent --zone=public --add-service=ssh firewall-cmd --zone=public --add-port=3000/tcp --permanent firewall-cmd --zone=public --add-port=8080/tcp --permanent firewall-cmd --permanent --zone=public --add-service=http firewall-cmd --permanent --zone=public --add-service=https firewall-cmd --reload 

Let us now start Nginx:

systemctl start nginx 

With HTTP functioning, we can visit this address in the browser:


and verify that Nginx is running:

Step 3: Install Latest Node.js

We’ll now install the latest release of Node.js. First, install development tools to build native add-ons (make, gcc-c++) and then enable Node.js yum repository from the Node.js official website:

yum install -y gcc-c++ make curl -sL | sudo -E bash - 

Now, the repository is added to your VPS and we can install the Node.js package. NPM, the package manager for Node.js, will also be installed, as well as many other dependent packages in the system.

yum install nodejs 

Press ‘y’ twice to finish the installation. Show the version of Node.js that is installed:

node -v 

It shows v12.3.1, which was the actual version at the time of this writing. If it shows an error, double check the commands you entered against the ones shown above.

Step 4: Adding NPM Packages

We of course know what packages our Node.js app will need, so we install the required npm packages in advance. Since our app will turn any input into uppercase letters, we first install a package for that:

npm install upper-case 

Most Node.js apps will now use Express.js, so let’s install that as well:

npm install --save express 

Execute this command as well:

npm install -g nginx-generator 

It will globally install an NPM package to generate the reverse proxy config for Nginx. We will apply it after the app is running on port 8080.

Step 5: Creating The App

Open a file named uppercase-http.js for editing:

nano uppercase-http.js 

Add the following lines:

var http = require('http'); var uc = require('upper-case'); console.log('starting...'); http.createServer(function (req, res) {   console.log('received request for url: ' + req.url);   res.writeHead(200, {'Content-Type': 'text/html'});   res.write(uc(req.url + '\n'));   res.end(); }).listen(8080); 

Save and close the file.

The HTTP server will listen to port 8080. You can specify any other port that you like, provided that it will be free when the app is running (and that you have previously opened access to it in firewalld).

Run the app:

node uppercase-http.js 

You will see the following message:


Node.js app starting

To test it, fire up another terminal, connect to your VPS as root via SSH and curl localhost:8080:

curl localhost:8080/test 

The program correctly converted path to uppercase. The server app shows a status message for the request:

received request for url: /test 

Now we have two terminal windows, one with the app running and the other which we used to test the app. The first window is blocked as long as the app is running and we can press Ctrl-C from keyboard to stop it. If we do so, the app won’t be running later when we access it from the browser. The solution is to either activate the app again or — much cleaner — enter further commands only into the second terminal window for the rest of this tutorial.

Step 6: Configure Nginx as Reverse Proxy

Nginx for CentOS comes without folders for available and enabled sites, as is the custom on Ubuntu Linux. You’ll need to create them:

mkdir /etc/nginx/sites-available mkdir /etc/nginx/sites-enabled 

Then, edit Nginx global configuration to load config files from these folders:

nano /etc/nginx/nginx.conf 

Find line

include /etc/nginx/conf.d/*.conf; 

and insert these lines:

include /etc/nginx/sites-enabled/*; server_names_hash_bucket_size 64; 

Save and close the file. Now Nginx will read the contents of the “enabled” sites.

For the sake of completness, our nginx.conf file looks like this:

user  nginx; worker_processes  1;  error_log  /var/log/nginx/error.log warn; pid        /var/run/;   events {     worker_connections  1024; }   http {     include       /etc/nginx/mime.types;     default_type  application/octet-stream;      log_format  main  '$  remote_addr - $  remote_user [$  time_local] "$  request" '                       '$  status $  body_bytes_sent "$  http_referer" '                       '"$  http_user_agent" "$  http_x_forwarded_for"';      access_log  /var/log/nginx/access.log  main;      sendfile        on;     #tcp_nopush     on;      keepalive_timeout  65;      #gzip  on;      include /etc/nginx/conf.d/*.conf;     include /etc/nginx/sites-enabled/*;     server_names_hash_bucket_size 64; 

You may want to copy it and paste it.

With NPM package nginx-generator we generate files that will tell Nginx to act as a reverse proxy. In the command line, execute the following:

nginx-generator \       --name site_nginx \       --domain YOUR_DOMAIN \       --type proxy \       --var host=localhost \       --var port=8080 \       /etc/nginx/sites-enabled/site_nginx 

Replace YOUR_DOMAIN with your actual domain before running this command.

That command creates a file called site_nginx and puts it into the directory /etc/nginx/sites-enabled/. (We could have used any other name instead of site_nginx for the file.)

We can see it with this command:

sudo nano /etc/nginx/sites-enabled/site_nginx 

Test the configuration:

sudo nginx -t 

and if everything is OK, restart Nginx:

systemctl restart nginx 

Run the app again:

node uppercase-http.js 

You will see the following message:


In your browser, go to address


and the result should be


Bad Gateway Case No. 1 – The App is Not Active

Instead of proper result, which in this particular case would be text printed in uppercase letters, it is all too easy to get the message Bad Gateway in this place.

The main reason is that we were using one teminal window to both run the app and insert other commands. When you start the app with node uppercase-http.js, it will block the entire window and when you want to try out the next command in the installation process, the app will stop running. One way to prevent this is to repeat starting the app all over again, as we have done in this tutorial.

Another way would be to open two terminal windows, start the app in one of them and then proceed with further commands in the second terminal window, exclusively.

Bad Gateway Case No. 2 – SELinux Is Active

If SELinux is enabled, it can block Nginx from making outbound connections.

You can check this with:


If you get Enforcing as the result, SELinux is active. Run this command to let Nginx serve as a reverse proxy:

setsebool -P httpd_can_network_connect true 

Step 7: Securing Your Site To Serve Only HTTPS

We want to serve our app via a HTTPS request. If you have a domain name and DNS records properly set up to point to your VPS, you can use certbot to generate Let’s Encrypt certificates. This means that you will always access the app as well as the rest of your domain, via HTTPS.

We will folow the original documentation to install Let’s Encrypt. Choose Nginx for Software and Centos/RHEL 7 for System – it should look like this:

Certbot Certbot Site

Certbot is packaged in EPEL (Extra Packages for Enterprise Linux). To use Certbot, you must first enable the EPEL repository. On CentOS, you must also enable the optional channel, by issuing the following commands:

yum -y install yum-utils yum-config-manager --enable rhui-REGION-rhel-server-extras rhui-REGION-rhel-server-optional 

Now install the Certbot by executing this:

yum install certbot python2-certbot-nginx 

It will compute the dependencies needed and ask you to let it proceed with the installation.

Press ‘y’ when asked.

Finally, run Certbot:

certbot --nginx 

If you are installing certificates for the fist time, Certbot will ask for an emergency email address, then several less important questions and finally – do you want to redirect all HTTP traffic to HTTPS? Select 2 to confirm this redirection, and you’re all set!

Activate Nginx as you normally would after each change in parameters:

systemctl restart nginx 

To verify that redirection is working, go to the same address in your browser:


Note that this address started with HTTP, but that it ended up as HTTPS.

Step 8: Install PM2

PM2 is a production process manager for Node.js applications. With PM2, we can monitor applications, their memory and CPU usage. It also provides easy commands to stop/start/restart all apps or individual apps.

Once the app is started through PM2, it will always be restarted after system crashes or restarts. In effect, it will “always be there”.

Use NPM to install PM2:

npm install pm2@latest -g 

Option -g tells it to install pm2 globally, so it can run from all paths in the system.

Let’s now run our application under PM2:

pm2 start uppercase-http.js 

The output of PM2 can be spectacular when run for the first time, but we really need to concentrate on the rows about the app:

PM2 will use app name, and then show the id number of the app, mode, status, CPU usage and memory. If two or more apps are running in the background, they will all be presented in this table.

We can list the processes like this:

pm2 list 

The following command

pm2 show 0 

will show details of the app with ID of 0:

It is also possible to monitor CPU usage in real time:

pm2 monit 

Other useful PM2 commands are stop, restart, delete.

What Can You Do Next

Now you have a node.js app in a production environment, using HTTPS protocol for safety, Nginx for speed and as a reverse proxy, running as a service in the background. We have just installed one such app and one such site, while you may run serveral node.js apps and sites from the same server. We used root user throughout for ease of installation, while for multiples sites you would need multiple non-root users for safety.

Dusko Savic is a technical writer and programmer.

The post How To Set Up a Node.js Application for Production on a CentOS 7 VPS appeared first on Low End Box.

Adjuntar codigo js a un html en node.js

Tengo como objetivo correr un archivo de javascript en el lado del browser pero para ello primero tengo que cargarlo, el problema es que no funciona con solo referenciarlo en el archivo html, y me encuentro tratando de encontrar formas de cargar ese archivo para que no haya la necesidad de devolver datos del html al servidor para procesar cosas que podrian ser procesadas del lado del browser.

Para html estoy usando ejs, y en java estoy usando express.

var express = require('express') var app = express(); app.use(express.static('./public/confirm.js')); app.get('/ide', (req, res) => {     res.render('identification.ejs')     res.sendFile('./public/confirm.js')     console.log(req.url) }) app.listen(3000, console.log('Listening')) 

Ahora el archivo html:

<% include header.ejs%> <body>          <p>Complete the folowing input bars acordinly</p>         <input id="name" type="text" name="name" placeholder="First and last name"/>         <input id="id" type="number" name="id" placeholder="Id number"/>         <input id="homeAd" type="text" name="homeAd" placeholder="Home address"/>         <input id="posCod" type="number" name="posCod" placeholder="Postal Code"/>         <input id="phone" type="number" name="phone" placeholder="Phone"/>         <input id="homePhone" type="number" name="homePhone" placeholder="Home phone number"/>         <input id="email" type="text" name="email" placeholder="Email"/>         <br>         <button id="ideconfi" onclick="hola()">Sumbit</button>     </script src="../public/confirm.js"> 

El archivo que estoy intentando cargar esta en una carpeta public como pueden ver, se que tal vez la respuesta sea simple pero no encuentro forma de resolver agradeceria mucho si me ayudan.

User RBAC and billing with microservices | Node.js

I am working on a SaaS product that utilizes a microservice architecture and I am starting now to think more about my user system and payments. I plan to have 4 levels of plans (free and 3 paid).

My question comes in how do I design a user and payment system that can handle the different plan levels but also different account permissions that can be set by my customers. By that I mean there will be a company admin who then creates accounts for their employees who have roles assigned to them.

Was that since I already have an API gateway that I would do some processing there since as part of the API gateway I get the user’s information and I send it along with the request to my backend services.

I have not seen much online for people talking about how to handle this kind of thing. I can only find stuff about passport.js (I’m using Node.js) and mostly just basic setups with no billing nor anything like that.

I was thinking that a separate billing system would make the most sense but then I have to tie the multiple databases together or have another service that runs with access to my user DB.

I am a bit lost on how to do this as I have never designed a system like this before and since it is a personal project, not a work project one, I have no one that I can ask so hoping this is the right stack exchange place.

My payment processor is going to be Stripe since I have worked with them before and quite like their docs and ease of use. So no worries about trying to create the wheel again.

Node.js mssql return query result to ajax

I’m new to learning Node.js, so I’m still getting used to asynchronous programming and callbacks. I’m trying to insert a record into a MS SQL Server database and return the new row’s ID to my view.

The mssql query is working correctly when printed to console.log. My problem is not knowing how to properly return the data.

Here is my mssql query – in addJob.js:

var config = require('../../db/config');  async function addJob(title) {      var sql = require('mssql');     const pool = new sql.ConnectionPool(config);     var conn = pool;      let sqlResult = '';     let jobID = '';     conn.connect().then(function () {         var req = new sql.Request(conn);          req.query(`INSERT INTO Jobs (Title, ActiveJD) VALUES ('$  Node.js mssql return query result to ajax', 0) ; SELECT @@IDENTITY AS JobID`).then(function (result) {              jobID = result['recordset'][0]['JobID'];              conn.close();              //This prints the correct value             console.log('jobID: ' + jobID);          }).catch(function (err) {             console.log('Unable to add job: ' + err);             conn.close();         });      }).catch(function (err) {         console.log('Unable to connect to SQL: ' + err);     });      // This prints a blank    console.log('jobID second test: ' + jobID)    return jobID; }  module.exports = addJob; 

This is my front end where a modal box is taking in a string and passing it to the above query. I want it to then receive the query’s returned value and redirect to another page.

// ADD NEW JOB             $  ("#navButton_new").on(ace.click_event, function() {                 bootbox.prompt("New Job Title", function(result) {                     if (result != null) {                          var job = {};                         job.title = result;                          $  .ajax({                             type: 'POST',                             data: JSON.stringify(job),                             contentType: 'application/json',                             url: 'jds/addJob',                                                   success: function(data) {                                  // this just prints that data is an object. Is that because I'm returning a promise? How would I unpack that here?                                 console.log('in success:' + data);                                  // I want to use the returned value here for a page redirect                                 //window.location.href = "jds/edit/?jobID=" + data;                                 return false;                             },                             error: function(err){                                 console.log('Unable to add job: ' + err);                             }                         });                     } else {                      }                 });             }); 

And finally here is the express router code calling the function:

const express = require('express'); //.... const app = express(); //....'/jds/addJob', function(req, res){      let dataJSON = JSON.stringify(req.body)     let parsedData = JSON.parse(dataJSON);      const addJob = require("../models/jds/addJob");     let statusResult = addJob(parsedData.title);      statusResult.then(result => {         res.send(req.body);     });   }); 

I’ve been reading up on promises and trying to figure out what needs to change here, but I’m having no luck. Can anyone provide any tips?

Run Node.js website on port 80 with Apache

I have a CentOS 6 VPS with Apache using port 80 to host some of my websites. However, one of my websites now require Node.js for one of its features, but it is proving rather difficult to run the Node scripts as I want it to run on port 80, but apache2 is currently using it.

The domain name of my website is It is not the default domain for my cPanel, and is nested in public_html/xxxxxx. I’ve implemented the following in my httpd.conf file, but still nothing’s working. I’ve got to be missing something.

NameVirtualHost *:80 <VirtualHost *:80>     ServerName     ProxyPreserveHost on     ProxyPass /home/xxxxxxxxx/public_html/xxxxxx </VirtualHost> 

mod_proxy is also enabled.

Cannot Connect to wss:// on Ubuntu 18.04.2 LTS using Node.js Server

To make things clear I am able to connect to ws:// but not via wss://.

I am running a LAMP Server and Using Node.js Server to connect to application. I also have a valid certificate for the domain name for the server so I am able to access without any problems.

I am trying to accomplish this without using NGINX, so please do not mention it unless nothing else is possible.

I am also able to make the connection without messing aroung mod_proxy.

This is the code for the server app server.js

"use strict";  Object.defineProperty(exports, "__esModule", { value: true }); const express = require("express"); const http = require("http"); const WebSocket = require("ws");  const PORT = process.env.PORT || 3000; const app = express(); app.use(express.static(__dirname));  //initialize a simple http server const server = http.createServer(app); //initialize the WebSocket server instance const wss = new WebSocket.Server({ server });   var users = {}; var id = 1; wss.on('connection', (ws, req) => {     ws.userId = req.headers.user;     ws.binaryType = "arraybuffer";     if (!ws.userId) {         ws.userId = id;         id++;     }     users[ws.userId] = ws;     console.log("User: " + ws.userId);     ws.isAlive = true;     ws.on('pong', () => {         ws.isAlive = true; });  //connection is up, let's add a simple simple event     ws.on('message', function (message) {         //make data a JSON object and extract info         var obj = JSON.parse(message);         //log the received message and send it back to the client         console.log('From: %s', obj.from);         console.log('To: %s',;         console.log('Message: %s', obj.msg);         //log the received message and send it back to the client         console.log('received: %s', message);         var broadcastRegex = /^broadcast\:/;         if (broadcastRegex.test(message)) {             message = message.replace(broadcastRegex, '');             //send back the message to other clients             wss.clients                 .forEach(function (client) {                 if (client != ws) {                     client.send("Hello, broadcast message -> " + message);                 }             });         }         ws.send("Hello, you sent -> " + message);     });     //connection is closed by the user     ws.on('close', function () {         console.log('User disconneted');     });     //send immediatly a feedback to the incoming connection      console.log("User connected!");     ws.send('Hi there, I am a WebSocket server'); });  //start our server server.listen(PORT, function () {     console.log("Server started on port " + server.address().port + " :)"); }); 

Appreciate any help to have this accomplished. Been at this for a few days.


Node.js tool to update Cisco UCCX with CSV data

What this tool does update skills for agents on a Contact Center Cluster (Cisco UCCX). The tool reads a csv file called agents.csv from the current working directory. This file has a list of agents and their skills. The skills are updated using a REST API.

Note that UCCX allows a maximum of 400 agents, so that’s why I use the sync API for reading a file and csv because I don’t think it’s going to be performance impacting.

Line 171 is where the agent skill is actually updated:

        await axios.put(ccxURI, xml, config); 

It is done sequentially. I originally wanted to do it concurrently using Promise.all but the UCCX server fails with a 500 error. So for now I’m doing it one at a time.

Code is below:

"use strict";  const ora = require("ora"); let throbber;  const inquirer = require("inquirer");  const prompts = [     {       name: "IP",       type: "input",       message: "Enter IP address of UCCX publisher: ",     },     {         name: "user",         type: "input",         message: "Enter username: ",     },     {         name: "pass",         type: "password",         message: "Enter password: ",     }, ];  const axios = require("axios"); const https = require("https");   const fs = require("fs"); // csv sync api is used as there can be at most 400 agents on the cluster // hence it should not be performance impacting const parse = require("csv-parse/lib/sync"); const parseOptions = {     comment: "#",     skip_empty_lines: true,     trim: true };  // csv file won"t exceed 1MB so readFileSync should be OK const csv = fs.readFileSync("agents.csv", "utf-8"); const parser = require("xml2json");  (async () => {     console.log("Please verify agents.csv is in the current working directory\n");     const answers = await inquirer.prompt(prompts);      const IP = answers.IP;     const config = {         auth: {             username: answers.user,             password: answers.pass         },         httpsAgent: new https.Agent({               rejectUnauthorized: false         }),         headers: {"Content-Type": "application/xml"}     };      const XMLtemplate = `         <resource>             <self></self>             <userID></userID>             <firstName></firstName>             <lastName></lastName>             <extension></extension>             <alias></alias>             <resourceGroup></resourceGroup>             <skillMap>                 <skillCompetency></skillCompetency>             </skillMap>             <autoAvailable>true</autoAvailable>             <type></type>             <team name="Default">                 <refURL>$  {IP}/adminapi/team/1</refURL>             </team>             <primarySupervisorOf/>             <secondarySupervisorOf/>         </resource>`;      const resourceGroupMapping = {};     const resourceGroupURI = `https://$  {IP}/adminapi/resourceGroup`;     const resourceGroupList = await axios.get(resourceGroupURI, config);       throbber = ora("Updating").start();     for(const resourceGroup of {         resourceGroupMapping[] =;     }      const skillMapping = {};     const skillURI = `https://$  {IP}/adminapi/skill`;      const skillList = await axios.get(skillURI, config);      for(const skill of {         skillMapping[skill.skillName] = skill.skillId;     }      const teamMapping = {};     const teamURI = `https://$  {IP}/adminapi/team`;     const teamList = await axios.get(teamURI, config);       for(const team of {         teamMapping[team.teamname] = team.teamId;     }      // sample csv     // #id,firstName,lastName,extension,resourceGroup,team,skill1,competence1,skill2,competence2,skill3,competence3,skill4,competence4 ...      // Adrian_A,Adrian,Aldana,1008,1,Saturday-Lending,Lending,Pre_Approved_Loans,9,HELOC,6,Auto_and_Consumer_Loans,4,Loan_Status,7     const records = parse(csv, parseOptions);     const recordLength = records.length;      const XMLOptions = {         object: true,         reversible: true,         trim: true     };      for(let index = 0; index < recordLength; index++) {         let skillsAndCompetency;         let jsonObject;         let xml;         let ccxURI;          jsonObject = parser.toJson(XMLtemplate, XMLOptions);         jsonObject.resource.skillMap.skillCompetency = [];          jsonObject.resource.self = { "$  t" : `https://$  {IP}/adminapi/resource/$  {records[index][0]}` };         jsonObject.resource.userID = { "$  t" : `$  {records[index][0]}` };         jsonObject.resource.firstName = { "$  t" : `$  {records[index][1]}` };         jsonObject.resource.lastName = { "$  t" : `$  {records[index][2]}` };         jsonObject.resource.extension = { "$  t" : `$  {records[index][3]}` };          // include resource type         jsonObject.resource.type = { "$  t" : `$  {records[index][4]}` };          // update resource group         if(records[index][5]) {    = `$  {records[index][5]}`;             jsonObject.resource.resourceGroup.refURL = { "$  t" : `https://$  {IP}/adminapi/resourceGroup/$  {resourceGroupMapping[records[index][5]]}` };         }          // update team         if(records[index][6] !== "Default") {    = records[index][6];    = { "$  t" : `https://$  {IP}/adminapi/team/$  {teamMapping[records[index][6]]}` };         }          // update all skills and competency          for(let j = 7; j < records[index].length; j += 2) {             skillsAndCompetency = {                      competencelevel : "",                     skillNameUriPair : {                          name: "",                         refURL : ""                     }             };              skillsAndCompetency.competencelevel = { "$  t" : `$  {records[index][j + 1]}` };    = `$  {records[index][j]}`;             skillsAndCompetency.skillNameUriPair.refURL = { "$  t" : `https://$  {IP}/adminapi/skill/$  {skillMapping[records[index][j]]}` };             jsonObject.resource.skillMap.skillCompetency.push(skillsAndCompetency);         }          // xml payload          xml = `<?xml version="1.0" encoding="UTF-8"?>` + parser.toXml(JSON.stringify(jsonObject));          // update skills for the agents         ccxURI = `https://$  {IP}/adminapi/resource/$  {records[index][0]}`;         // if a row in the csv is incorrect, continue processing subsequent rows         try {             await axios.put(ccxURI, xml, config);         } catch (error) {             throbber.stop();             console.log(`\nCould not assign skills to $  {records[index][0]}\n$  {[0].errorMessage}\n`);             throbber.start();         }     } })()     .catch(error => console.log(error.stack))     .then( () => throbber.stop());