PostgreSQL server, can't connect with SSL

I installed a PostgreSQL server a while back and have been using it with PostGIS.
I would like to use my Let’s Encrypt certificates with the server, so I followed this article:
https://medium.com/@pavelevstigneev/postgresql-ssl-with-letsencrypt-b53051eacc22

The problem is that I can’t get it to work.
If I follow the article’s instructions and try to psql a database, I get:

psql: error: could not connect to server: could not connect to server: No such file or directory
Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

Same with outside connections.

I installed PostgreSQL 12 on an Ubuntu server.
I followed every step in the article.
I already use the certbot/Let’s Encrypt certificates for apache2 and vsftpd.

Questions I struggle with:
Is there an extra step I forgot about?
Do I need to open an extra port on the firewall to enable the connection (other than 5432)?
Should I configure an extra subdomain in DNS so certbot can issue dedicated certificates?
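
For reference, this is how I was planning to verify the SSL connection once the server is reachable again (the socket error above just means nothing is listening there). A minimal sketch, assuming psycopg2; the hostname, database and credentials are placeholders, and sslmode="verify-full" will only pass if the Let’s Encrypt certificate matches the hostname:

    # Minimal connection check; all connection details are placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="db.example.com",      # hypothetical hostname covered by the certificate
        dbname="postgres",
        user="postgres",
        password="secret",          # placeholder
        sslmode="verify-full",      # fails unless the certificate chain and hostname check out
    )
    cur = conn.cursor()
    # pg_stat_ssl reports whether this backend's connection really uses SSL.
    cur.execute("SELECT ssl, version, cipher FROM pg_stat_ssl WHERE pid = pg_backend_pid();")
    print(cur.fetchone())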

OpenID Connect with user-bound roles and M2M access

I’m trying to get my head straight about how to properly design an OpenID Connect provider and the roles to use with it. I understand the basics of scopes, claims and the different flows one can use. However, I’m trying to get my head around how I should handle the case where I want M2M access to all resources, while an end user should only have access to his/her own data.

My question is more about how I should handle roles. Is it overkill to have roles such as:

  • view_company_data
  • view_all_data

An example could be providing a public API to access all data, e.g. for collaborating companies, while also allowing specific users to access only the data they created themselves. In my case that would be a government body that wants access to all data, whilst the business owners should only have access to their own data.

I have an authentication provider, along with several resource servers. The business owners access their data through our client with only read/write permission for their own entity, and the government body wants to access all the data through our APIs.

I wish to have all access control in a central entity, so generating access tokens on each separate resource server along with default JWT tokens from the authentication server seems like a bad idea. I’d rather handle it all from the authentication server.

Also, a user should be able to generate these full-access tokens, given that they have a global administration role.
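
To make the roles question concrete, this is roughly the check I imagine a resource server doing. A sketch assuming PyJWT and that the authentication server puts a roles claim plus the owner’s id (sub) in the access token; it reuses the role names from the list above, and the audience and key handling are assumptions on my part:

    import jwt  # PyJWT

    def may_read(token: str, requested_owner: str, public_key: str) -> bool:
        # Claim names are illustrative; audience and key handling are placeholders.
        claims = jwt.decode(token, public_key, algorithms=["RS256"], audience="my-api")
        roles = set(claims.get("roles", []))
        if "view_all_data" in roles:         # M2M client / government body: everything
            return True
        if "view_company_data" in roles:     # business owner: only their own entity
            return claims["sub"] == requested_owner
        return False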

So, what would be the right thing to do here?

What ways are there to securely connect different sites/branches?

I’m somewhat new to the field, but I have been tasked with coming up with the best way to connect remote customer sites to our two main offices. The data sent across from the customer to the main offices contains sensitive information and needs to be protected adequately.

From what I’ve gathered, VPN connections would be one of the traditional ways of doing this, but I’m not sure which topology would fit best: mesh or hub-and-spoke. There are probably a lot of things you can do with cloud computing too, and if you guys have any ideas that would be great.

I’m sure there are several more ways of transferring data securely and I would appreciate every single bit of advice!

Store cookies for multiple sites on remote server and connect from multiple clients


Would it be secure to:

  1. Store all my website cookies (Stack sites, web host, GitHub, web-based email, etc.) on a remote server (using a customized open-source VPN or something)
  2. Login to the server with password + 2fa (and maybe have a trusted devices list?)
  3. Keep the cookies only on the server… never actually download them to any of my devices
  4. When visiting stackexchange.com, for example, my server would send the cookies to Stack Exchange, get the response, and send it back to me, but REMOVE any cookies & store them only on my server (see the sketch after this list)
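
A rough sketch of what I mean in step 4, assuming Python with the requests package: the server keeps the cookie jar entirely on its side and strips Set-Cookie before relaying the response. The upstream hostname is a placeholder, and a real version would also need TLS, the login from step 2, and a jar per site and per user.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import requests

    UPSTREAM = "https://stackexchange.com"   # placeholder target site
    session = requests.Session()             # cookie jar lives here, server-side only

    class CookieHoldingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Cookies stored in `session` go upstream; any new ones stay in it too.
            upstream = session.get(UPSTREAM + self.path)
            self.send_response(upstream.status_code)
            for name, value in upstream.headers.items():
                # Never relay Set-Cookie to the client; drop hop-by-hop/length headers too.
                if name.lower() not in ("set-cookie", "transfer-encoding",
                                        "content-encoding", "content-length"):
                    self.send_header(name, value)
            body = upstream.content
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), CookieHoldingProxy).serve_forever()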

Benefits (I think):

  1. I could keep diverse and very strong passwords for every website, but not store the passwords anywhere digitally (keep them on paper in a safe at home or something)
  2. Logging in to all the sites I use on a new device only requires one sign-in (to my custom VPN server)
  3. Only cookies would be stored digitally, so if anything went wrong server-side, my passwords would be safe & I could disable all the logins through each site’s web-interface

Problems (I think):

  1. If the authentication to my custom VPN is cracked, then every website I’ve logged into would be accessible
  2. The time & energy & learning required to set something like this up.

Improvement idea:

  1. When I sign in to the server the first time, the server creates an encryption key, encrypts all the cookies with it, and sends the encryption key to me as a cookie. Then on every request, my browser uploads the key, the website’s cookie is decrypted, and the request is made to whatever website I’m visiting. Then only one client could be logged in at a time (unless the encryption cookie were stolen; see the sketch after this list)
  2. Encrypt each cookie with a simple password, short password or pin number
  3. An encryption key that updates daily (somehow)
  4. Keep a remote list of trusted devices, identified by IP address? Or maybe by cookie?
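
Improvement idea 1, roughly, assuming the cryptography package (the cookie value and names are just illustrative):

    from cryptography.fernet import Fernet

    client_key = Fernet.generate_key()      # handed to the client once, as its own cookie
    f = Fernet(client_key)

    stored = f.encrypt(b"session=abc123")   # what the server keeps; useless without the key
    # On each request the client uploads client_key and the server decrypts on the fly:
    print(f.decrypt(stored))                # b'session=abc123'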

Why not just sign into the browser and sync cookies across devices?

  • Signing into Firefox mobile & Firefox on my computer doesn’t give the cookies to Twitter’s or Facebook’s in-app browsers (that frustratingly always open first instead of taking me to my actual browser!)
  • It’s not as cool?
  • That would require me to trust a third-party (of course, I’ll ultimately have to trust my web-host to some extent)

OAuth2, SAML, OpenID Connect – which one to use for my scenario?

I work for a company where we give customers (hundreds/thousands of users) access to two sites: one owned by a third-party SaaS and one owned by us. Customers spend a lot of time registering for both sites, and we also spend a lot of time removing accounts when customers no longer need access.

I would like users to register for Site A. After successful authentication, a user can click on a link within the site to access Site B without entering credentials again. I want the Site A identity to be used to access Site B and its resources. I do not need Site B resources to be presented on Site A; I simply want to allow users to access Site B if they are already authenticated to Site A.

Users may have different roles on site B.
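
To illustrate what I have in mind (a sketch, not a definitive recipe): with OpenID Connect, Site B would redirect the browser to the shared provider, which signs the user in silently because they already have a session from Site A. The endpoint and client values below are placeholders:

    from urllib.parse import urlencode

    AUTHORIZE_ENDPOINT = "https://idp.example.com/authorize"   # hypothetical provider

    params = {
        "response_type": "code",
        "client_id": "site-b",
        "redirect_uri": "https://site-b.example.com/callback",
        "scope": "openid profile",
        "state": "random-csrf-token",   # generate a fresh random value per request
    }
    # Site B sends the browser here; there is no second password prompt
    # as long as the provider still has the user's Site A session.
    print(AUTHORIZE_ENDPOINT + "?" + urlencode(params))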

What is my best option? OAuth2 sounds like a good option, but will it satisfy my requirement above?

Who will manage the authorisation server? I presume Site B?

Thank you.

How to connect in FileZilla with provided FTP credentials?

Hello,
From my client I received an FTP account and a sample session showing that the credentials are valid:

ubuntu@ip-CLIENT_IP:~$ ftp CLIENT.REMOTE.SERVER
Connected to CLIENT.REMOTE.SERVER.
220 ProFTPD Server (ProFTPD) [CLIENT.SERVER.IP]
Name (CLIENT.REMOTE.SERVER:ubuntu): SERVERUSER
331 Password required for SERVERUSER
Password:
230 User SERVERUSER logged in
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
200 PORT command successful
150 Opening BINARY mode data connection...
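
For comparison, the same check scripted with Python’s ftplib (plain FTP on port 21; the host and credentials are placeholders matching the listing above):

    from ftplib import FTP

    with FTP("CLIENT.REMOTE.SERVER") as ftp:
        ftp.login(user="SERVERUSER", passwd="secret")   # expect "230 User ... logged in"
        print(ftp.getwelcome())                         # "220 ProFTPD Server ..."
        ftp.retrlines("LIST")                           # equivalent of "ls"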

OpenID Connect realms strategy

I’m reaching out to see if I can get a second opinion on something that came up at work. One of the clients of the company I work at is setting up an OpenID Connect provider to authenticate APIs that they will be exposing to third parties (partners, and in the future perhaps other APIs available to the general public). This provider might also be used for internal APIs further down the line.

Since the provider has to be exposed to the internet, do you think it is a reasonable strategy to set up three different realms, one for each of the three scenarios I described above (external APIs for partners, external APIs for the general public, and internal APIs)? In case it’s relevant, the client is working with Red Hat SSO.

To keep administration manageable, as long as each partner has its own client and it is correctly configured for the needs of each integration scenario, I thought this setup would be correct. On the other hand, publicly available APIs will probably serve different, less sensitive information, so I thought it could be acceptable to have a separate realm for those cases.
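
To illustrate the split I have in mind (placeholder realm names and hostname, assuming the usual Keycloak/RH-SSO issuer URL form https://<host>/auth/realms/<realm>), each API would only accept tokens whose issuer matches its own realm:

    # Placeholder realm names and hostname.
    ISSUERS = {
        "partners": "https://sso.example.com/auth/realms/partner-apis",
        "public":   "https://sso.example.com/auth/realms/public-apis",
        "internal": "https://sso.example.com/auth/realms/internal-apis",
    }

    def expected_issuer(api_kind: str) -> str:
        # Each resource server validates the token's "iss" claim against exactly one
        # realm, so a public-API token can never be replayed against an internal API.
        return ISSUERS[api_kind]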

Thanks in advance!

Connecting cables to a PC and its effect

Is there any way I can tell whether the cable I use to charge my phone was previously connected to my PC? Is there something on the PC that would let me see that? My PC contains some malware and spy files that got onto it through another phone, because I used to connect all of my cables to the PC. So I decided to factory-reset my other iPhone while it was charging from a wall charger. During the "deleting your data and settings" step the phone turned off, so I connected it to the charger and it started to erase all data and settings while connected to the charger, and then I reset it again. I am afraid the malware is now inside the iPhone system and that even resetting it will not help.

I manage to connect to Azure Analysis Services from SSMS, but not from SSIS

I’m new to the Microsoft Server Suite.

I’ve downloaded SSMS and connected to Azure Analysis Services from it. I’m able to query my data using MDX without any problems.

However, I actually intend to build an ETL pipeline with the AAS cube as one of the sources. So I installed SSIS and have been trying to connect it to the AAS cube.

I first add an "Analysis Services Processing Task" to the package. It looks OK (when I click on "Test connection" the result is positive), but when I click on "Add", it doesn’t detect any cubes (there are two on the AAS server specified).

I assumed it worked anyway, but I can’t query the cube no matter how I try. I added an "Execute SQL Task", but it fails when I run it.

The error message is:

An OLE DB record is available. Source: "Microsoft OLE DB Driver for SQL Server" Hresult: 0x80004005 Description: "Login timeout expired".
An OLE DB record is available. Source: "Microsoft OLE DB Driver for SQL Server" Hresult: 0x80004005 Description: "A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.".
An OLE DB record is available. Source: "Microsoft OLE DB Driver for SQL Server" Hresult: 0x80004005 Description: "Named Pipes Provider: Could not open a connection to SQL Server [53]. ".
Error: 0xC00291EC at Execute SQL Task, Execute SQL Task: Failed to acquire connection "asazure://northeurope.asazure.windows.net/xxxx". Connection may not be configured correctly or you may not have the right permissions on this connection.
Task failed: Execute SQL Task
Warning: 0x80019002 at Package: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "C:\Users176\source\repos\Integration Services Project1\Integration Services Project1\Package.dtsx" finished: Failure.
The program '[18664] DtsDebugHost.exe: DTS' has exited with code 0 (0x0).

Any ideas?