Is it poor practice to host multiple web applications on the same domain, in terms of cookies?

In my web application, I have a single API backend and two frontends written as single-page applications. To simplify deployment, I’d like to serve the API on /api, the admin dashboard on /admin, and the end-user frontend on /user (or something similar), all on the same domain.

I want to use cookies for session handling in both the end-user and admin apps. Is this a good idea? As I understand it, cookies are scoped by domain (and optionally by path). Would this setup make it easier for an attacker to steal the admin session cookie from someone who is logged into both frontends? Or should I instead put the admin and user frontends on different subdomains (admin.mydomain.com and user.mydomain.com)?
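To make it concrete, here is roughly how I picture the two session cookies being scoped; the domain and cookie names below are placeholders, not my real configuration:

    # Hypothetical check of which session cookie each frontend sets
    # (domain and cookie names are placeholders):
    curl -si https://mydomain.example/admin/login | grep -i '^set-cookie'
    # Set-Cookie: admin_session=...; Path=/admin; Secure; HttpOnly; SameSite=Strict
    curl -si https://mydomain.example/user/login | grep -i '^set-cookie'
    # Set-Cookie: user_session=...; Path=/user; Secure; HttpOnly; SameSite=Strict

In particular, I’m unsure whether the Path=/admin attribute actually isolates the admin cookie from code running under /user, or whether path scoping is merely a convenience rather than a security boundary.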

/wp-admin not accessible after migrating to localhost (no plugin issue)

I migrated my local WordPress site to my WPEngine account and it’s been working without any problem!

After adding some content, I decided to export the database from the live version and import it into my local version so that they stay in sync. I adjusted the two fields (siteurl and home) in the database, and the home page (http://localhost:8888) loads fine, but the /wp-admin page is forced to HTTPS and fails with ERR_SSL_PROTOCOL_ERROR.

All the other pages of the site fail to load and return this error: "Not Found. The requested URL /news was not found on this server."

It seems like a "permalinks" reset problem for the inner pages!

All these problems go away if I switch back to the database I was using for the local version, so I’m pretty sure it’s a database issue!
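For reference, the adjustment I made after importing was essentially the following (expressed here as WP-CLI commands; the default wp_ table prefix is assumed):

    # Point the two URL fields at the local site (wp_ prefix assumed):
    wp db query "UPDATE wp_options SET option_value='http://localhost:8888' WHERE option_name IN ('siteurl','home');"
    # My guess for the inner-page 404s is stale rewrite rules, so maybe
    # regenerating them (and .htaccess) is what's needed:
    wp rewrite flush --hard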

Thanks

Does Nmap use only one of the DNS servers specified in the --dns-servers flag per host?

When I’m scanning with Nmap, I make an effort to get proper hostnames associated with the target IPs. To do this, I scan UDP 53 on the targets to identify DNS servers and then run something like the following for each identified DNS server:

    nmap -sL -v4 --dns-servers DNSSERVER TARGETS

I have to review the results for each tested DNS server to see how many of the targets it can resolve, and also determine if the resolved targets differ.

The docs seem to imply that if you specify multiple servers with --dns-servers, Nmap will pick just one of them (at random, or round-robin). This interpretation comes from the "is often faster" part of the documentation.

The problem is that not all of my scan targets may be resolvable by the same DNS server. In my case, I’d rather specify all identified DNS servers in --dns-servers and have Nmap fail over until one of them returns a response. If only one of the specified servers is used, I would need to perform multiple scans, one per DNS server, to get accurate results.
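For now, my workaround is one list scan per candidate server, along these lines (the file names are made up):

    # Current workaround: one -sL run per candidate DNS server, so the
    # per-server results can be diffed afterwards (file names are made up):
    while read -r dns; do
        nmap -sL -v4 --dns-servers "$dns" -iL targets.txt -oN "resolved-by-$dns.txt"
    done < dns-servers.txt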

My question is: is it true that --dns-servers uses only one of the specified DNS servers, rather than trying them all?

Is it possible to run commands that exist only on the host in a Docker container?

We would like to harden our Docker image by removing redundant software from it. Our dev and ops teams have asked to keep some Linux tools used for debugging in the containers running in our Kubernetes production environment.

I’ve read this post: https://www.digitalocean.com/community/tutorials/how-to-inspect-kubernetes-networking

And it made me wonder: is it possible to run commands that exist only on the host in a container from which those commands have been removed?

If so, is there a difference between commands that have been removed from the container and commands that the user doesn’t have permission to run?

P.S. How do the tools in the above-mentioned post work?
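My rough understanding of the post so far (happy to be corrected): the tools stay on the host and are executed inside the container’s namespaces via nsenter, something like this, run on the host (the container name is a placeholder):

    # Keep the tool on the host, enter only the container's network
    # namespace, and run it there ("my-container" is a placeholder):
    CONTAINER_PID=$(docker inspect --format '{{.State.Pid}}' my-container)
    sudo nsenter --target "$CONTAINER_PID" --net ip addr show

If that reading is right, the binary is resolved from the host’s filesystem, which would explain how a command missing from the image can still be run "in" the container.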

Thanks for the help! 🙂

Host filesystem manipulation from Docker vs. a virtual machine

While reading about Docker, I came across the part of the documentation describing the attack surface of the Docker daemon. As far as I understand it, part of the argument is that basically arbitrary parts of the host filesystem can be shared with a container, where a privileged user inside the container can then manipulate them. This is used as an argument against granting unprivileged users direct access to the Docker daemon (see also this Security SE answer).

Would the same be possible from a virtual machine, e.g. in VirtualBox, when the VM runs on the host as an unprivileged user?

In a quick test, trying to read /etc/sudoers on a Linux host from a Linux guest running in VirtualBox did produce a permission error, but I am by no means an expert in this area, and the testing was far from exhaustive.
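For what it’s worth, the test was roughly the following; the VM name, share name, and mount point are from my throwaway setup:

    # On the host (VirtualBox running as an unprivileged user):
    VBoxManage sharedfolder add "testvm" --name hostroot --hostpath /
    # In the guest (Guest Additions installed), as root:
    sudo mount -t vboxsf hostroot /mnt
    sudo cat /mnt/etc/sudoers   # -> permission denied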

How do you get a secure bastion host if your IP address is constantly changing?

I am setting up infrastructure on AWS and wondering how to set up a secure bastion host. Every guide says to allow access only from your own IP address, but how can I do that if my IP address changes every few hours or days (on my home Wi-Fi, at coffee shops, etc.)? What is best practice here for SSHing into a bastion host while limiting access to specific IP addresses? If that isn’t possible, what is the next best alternative?
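The closest thing I have to a solution is a small script that re-points the security group’s SSH rule at my current public IP whenever it changes (the group ID below is a placeholder, and revoking the stale rule is omitted), but that feels like a workaround rather than best practice:

    # Re-point the SSH ingress rule at my current public IP
    # (placeholder group ID; revoking the old /32 rule is omitted):
    MY_IP=$(curl -s https://checkip.amazonaws.com)
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 22 --cidr "${MY_IP}/32"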

Examples of SSH key exchange algorithms requiring encryption-capable host keys

In the SSH transport layer protocol spec (RFC 4253), Section 7.1, key exchange algorithms are distinguished by whether they require an "encryption-capable" or a "signature-capable" host key algorithm.

If I understood their details correctly, the well-known DH-based key exchange algorithms such as curve25519-sha256, diffie-hellman-group14-sha256, and ecdh-sha2-nistp256 all require a signature-capable host key algorithm. What are examples of SSH key exchange algorithms that instead require an encryption-capable host key algorithm?
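For context, querying my local OpenSSH client turns up only suites that, as far as I can tell, all pair with signature-capable host keys:

    # List the key exchange algorithms the local OpenSSH client supports:
    ssh -Q kex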