I installed a PostgreSQL server a while back and have been using it with PostGIS.
I would like to use my Let's Encrypt certificates with the server, so I followed this article:
The problem is that I can't get it to work.
If I follow the article's instructions and try to psql a database:
psql: error: could not connect to server: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Same with outside connections.
I installed PostgreSQL 12 on an Ubuntu server.
I followed every step in the article.
I use the certbot/Let's Encrypt certificates for apache2 and vsftpd.
Questions I struggle with:
Is there an extra step I forgot about?
Do I need to open an extra port on the firewall to enable the connection (other than 5432)?
Should I configure an extra subdomain in DNS so certbot can issue designated certificates?
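For reference, my understanding of what the article's setup amounts to is roughly the following (file names are the standard certbot ones; example.com is a placeholder for my real domain):

```conf
# postgresql.conf — SSL settings as I understand them from the article
# (example.com is a placeholder for my real domain)
ssl = on
ssl_cert_file = '/etc/letsencrypt/live/example.com/fullchain.pem'
ssl_key_file  = '/etc/letsencrypt/live/example.com/privkey.pem'
```

One thing I'm unsure about is permissions: the postgres user has to be able to read ssl_key_file, and by default /etc/letsencrypt/live and /etc/letsencrypt/archive are readable only by root. As far as I can tell, that would stop the server from starting at all, which would explain the missing socket, but I may be misreading it.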
I'm trying to get my head straight about how to properly design an OpenID Connect provider and the roles to use with it. I understand the basics of scopes, claims, and the different flows one can use. However, I'm trying to get my head around how I should handle the cases where I want M2M access to all resources, while an end user should only have access to his/her own data.
My question is more about how I should handle roles. Is it overkill to have roles such as:
An example could be providing a public API for accessing all data, e.g. to collaborating companies, while also allowing specific users to access only the data they created themselves. In my case that would be a government body that wants access to all data, whilst the business owners should only have access to their own data.
I have an authentication provider, along with several resource servers. The business owners access their data through our client with only read/write permission for their own entity, and the government body wants access through our APIs to access all the data.
I wish to have all access control in a central entity, so generating access tokens on each separate resource server along with default JWT tokens from the authentication server seems like a bad idea. I’d rather handle it all from the authentication server.
Also, a user should be able to generate these full-access tokens, given that they have a Global administrator role.
So, what would be the right thing to do here?
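To make the distinction concrete, this is the kind of centralized check I have in mind on the authorization side. The claim names ("roles", "sub") and role names ("global_admin", "business_owner") are my own assumptions, not anything standard:

```python
# A sketch of a centralized access decision based on token claims.
# Claim and role names here are assumptions, not part of any OIDC spec.

def can_read(claims: dict, resource_owner: str) -> bool:
    """Full access for tokens carrying the global-admin role (M2M /
    government body); everyone else may only read their own data."""
    if "global_admin" in claims.get("roles", []):
        return True
    return claims.get("sub") == resource_owner

# A client-credentials (M2M) token vs. an ordinary end-user token:
m2m_claims = {"sub": "service-gov", "roles": ["global_admin"]}
user_claims = {"sub": "owner-42", "roles": ["business_owner"]}
```

The idea would be that the authorization server puts these role claims into the token, and every resource server applies the same rule, so no per-resource-server token scheme is needed.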
I’m somewhat new to the field, but I have been tasked with coming up with the best way to connect remote customer sites to our two main offices. The data sent across from the customer to the main offices contains sensitive information and needs to be protected adequately.
From what I've gathered, VPN connections would be one of the traditional ways of doing this, but I'm not sure which topology would fit best: mesh or hub-and-spoke. There are probably a lot of things you can do with cloud computing too, and if you have any ideas that would be great.
I’m sure there are several more ways of transferring data securely and I would appreciate every single bit of advice!
I work for a company where we give customers (hundreds/thousands of users) access to two sites: one owned by a third-party SaaS and one owned by us. Customers spend a lot of time registering for both sites, and we also spend a lot of time removing accounts when customers no longer need access.
I would like users to register for Site A. After successful authentication, a user can click a link within the site to access Site B without entering credentials. I want the Site A identity to be used to access Site B and its resources. I do not need Site B resources to be presented on Site A; I simply want to allow users to access Site B if they are already authenticated to Site A.
Users may have different roles on site B.
What is my best option? OAuth2 sounds like a good fit, but will it satisfy my requirement above?
Who will manage the authorisation server? I presume Site B?
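To show what I mean, this is my rough sketch of the hop from Site B to Site A, assuming Site A's login were exposed via OpenID Connect (authorization-code flow). All URLs, client IDs, and endpoint paths below are made-up placeholders:

```python
# Sketch of the authentication request Site B would redirect the
# browser to, so Site A's provider authenticates the user and sends
# back an authorization code. All URLs/IDs are placeholders.
from urllib.parse import urlencode

def build_auth_request(authorize_endpoint: str, client_id: str,
                       redirect_uri: str, state: str) -> str:
    params = {
        "response_type": "code",       # authorization-code flow
        "client_id": client_id,        # Site B, registered at Site A
        "redirect_uri": redirect_uri,  # where Site A sends the user back
        "scope": "openid profile",     # 'openid' makes it an OIDC request
        "state": state,                # CSRF protection
    }
    return authorize_endpoint + "?" + urlencode(params)

url = build_auth_request("https://site-a.example/oauth2/authorize",
                         "site-b-client",
                         "https://site-b.example/callback",
                         "xyz123")
```

My understanding, which may be off, is that plain OAuth2 is about delegated authorization, and it is the OpenID Connect ID token on top of it that actually tells Site B who the user is; Site B could then map its own roles onto those identities.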
From my client I received an FTP account, and a sample session showing that the credentials are valid:
ubuntu@ip-CLIENT_IP:~$ ftp CLIENT.REMOTE.SERVER
Connected to CLIENT.REMOTE.SERVER.
220 ProFTPD Server (ProFTPD) [CLIENT.SERVER.IP]
Name (CLIENT.REMOTE.SERVER:ubuntu): SERVERUSER
331 Password required for SERVERUSER
Password:
230 User SERVERUSER logged in
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
200 PORT command successful
150 Opening BINARY mode data connection...
How do I connect in FileZilla with the provided FTP credentials?
I'm reaching out to see if I can get a second opinion on something that came up at work. One of the clients of the company I work at is setting up an OpenID Connect provider to protect APIs that they will be exposing to third parties (partners, and in the future perhaps other APIs available to the general public). This provider might also be used for internal APIs further down the line.
Since the provider has to be exposed to the internet, do you think it is a reasonable strategy to set up three different realms, one for each of the scenarios I described above (external APIs for partners, external APIs for the general public, and internal APIs)? In case it's relevant, the client is working with Red Hat SSO.
To keep administration manageable, as long as each partner has its own client that is correctly configured per the needs of each integration scenario, I thought this setup would be correct. On the other hand, publicly available APIs will probably serve different, less sensitive information, so I thought it could be acceptable to have a separate realm for those cases.
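To make the idea concrete, this is roughly how I picture it in RH-SSO (Keycloak) terms: three separate realms, with each partner as its own client in the partners realm. Realm and client names here are made up, and I may have the export format details wrong:

```json
[
  { "realm": "partners-api", "enabled": true,
    "clients": [ { "clientId": "partner-acme", "serviceAccountsEnabled": true } ] },
  { "realm": "public-api", "enabled": true },
  { "realm": "internal-api", "enabled": true }
]
```

If I understand correctly, serviceAccountsEnabled is the Keycloak setting that allows a client to use the client-credentials flow for machine-to-machine access.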
Thanks in advance!
Is there any way to know whether the cable I use to charge my phone was previously connected to my PC? Is there something on the PC that would show me that? My PC contains malware and spy files that got onto it through another phone, because I used to connect all my cables to the PC. So I decided to erase my other iPhone while it charged from a wall charger. During the "delete your data and settings" process the phone turned off, so I connected it to the charger and it started to format and delete all data and settings while connected. I then formatted it again, and I am afraid the malware is inside the iPhone's system and that even formatting will not help.
I have entered Omega Indexer, but it shows the error message below. What happened? Thanks!
I’m new to the Microsoft Server Suite.
I've downloaded SSMS and connected to Azure Analysis Services from it. I'm able to query my data using MDX without any problems.
However, I actually intend to build an ETL pipeline with the AAS cube as one of the sources. So I installed SSIS and have been trying to connect it to the AAS cube.
I first add an "Analysis Services Processing Task" to the package. The result looks OK (when I click "Test connection" the result is positive). But when I click "Add", it doesn't detect any cubes (there are two on the specified AAS server):
I assumed it worked anyway, but I can't query the cube no matter how I try. I added an "Execute SQL Task", but when I run it, it fails.
The error message is:
An OLE DB record is available. Source: "Microsoft OLE DB Driver for SQL Server" Hresult: 0x80004005 Description: "Login timeout expired".
An OLE DB record is available. Source: "Microsoft OLE DB Driver for SQL Server" Hresult: 0x80004005 Description: "A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.".
An OLE DB record is available. Source: "Microsoft OLE DB Driver for SQL Server" Hresult: 0x80004005 Description: "Named Pipes Provider: Could not open a connection to SQL Server.".
Error: 0xC00291EC at Execute SQL Task, Execute SQL Task: Failed to acquire connection "asazure://northeurope.asazure.windows.net/xxxx". Connection may not be configured correctly or you may not have the right permissions on this connection.
Task failed: Execute SQL Task
Warning: 0x80019002 at Package: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "C:\Users176\source\repos\Integration Services Project1\Integration Services Project1\Package.dtsx" finished: Failure.
The program 'DtsDebugHost.exe: DTS' has exited with code 0 (0x0).
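If I read the error correctly, my Execute SQL Task is using the SQL Server OLE DB driver against the asazure:// address, which presumably can never work. My guess is that the connection would need to be an Analysis Services (MSOLAP) one instead, with a connection string roughly like the following, though I'm not sure Execute SQL Task can use it at all (all values besides the server address are placeholders):

```conf
Provider=MSOLAP;Data Source=asazure://northeurope.asazure.windows.net/xxxx;Initial Catalog=MyCubeDb;User ID=user@contoso.com;Password=***
```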