Limit access to servers through a jumpbox only

I have a number of servers on DigitalOcean that I would like to be accessible only via one central jumpbox (also on DigitalOcean).

I would like to do this via the SSH key access layer, so the process is as follows:

  • The user SSHes to the jumpbox and enters their username, password, and Google Authenticator verification code (I have already configured this part).
  • The user then SSHes to a server already listed in the jumpbox's hosts file, e.g. ssh production (a name mapped to an IP in that file). Since the user is already on the jumpbox, the destination server should only ask for that user's password.

What is the best way to achieve this?
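For the enforcement side, the simplest approach I can think of is to have sshd on each backend server only accept logins that originate from the jumpbox. A minimal sketch, assuming the jumpbox's private IP is 10.10.0.5 (hypothetical; substitute your own); a DigitalOcean cloud firewall rule limiting port 22 to the jumpbox would enforce the same thing at the network level:

    # /etc/ssh/sshd_config on each backend server:
    # allow any user, but only when the connection comes from the jumpbox
    AllowUsers *@10.10.0.5

    # then reload sshd to apply, e.g.:
    #   sudo systemctl reload sshd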

Distribute jobs evenly among a set of servers

I need to distribute a set of long-running jobs across a cluster of containers in ECS. These jobs essentially need to open a socket connection to a remote server and begin streaming data for import into a database.

For each customer, any number of socket connections may be required to consume different data feeds. Creating a new ECS service for each customer is not practical, as this would also require new ECS task definitions with slightly different configurations, ultimately resulting in thousands of services/task definitions. That would quickly become a maintenance and monitoring nightmare, so I am looking for a simpler solution.

The list of “feeds” is relatively static and is stored in a document database; feeds are only added as new customers sign up. My initial thought is to have a fixed number of containers responsible for fetching the feed configurations from the database. Each container would attempt to acquire a lease on a feed and, if it succeeds, start the feed and let it run until the container is killed or the connection is interrupted. Each container would also have to periodically check for new feeds available in the pool, and extend the lease while its feed is running so the same feed isn't pulled by another container. There is a potential race condition where a lease expires before it is extended, so I'd have to be careful to always renew in time.
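To make the lease step concrete, here is a minimal sketch of what I have in mind, assuming a MongoDB-style document store (pymongo shown; the jobs/feeds collection and the owner/lease_expires field names are placeholders):

    import socket
    import time

    from pymongo import MongoClient, ReturnDocument

    LEASE_SECONDS = 60
    WORKER_ID = socket.gethostname()  # any stable per-container identifier

    feeds = MongoClient()["jobs"]["feeds"]

    def try_acquire_one_feed():
        """Atomically claim one feed that is unleased or whose lease expired."""
        now = time.time()
        return feeds.find_one_and_update(
            {"$or": [{"lease_expires": {"$exists": False}},
                     {"lease_expires": {"$lt": now}}]},
            {"$set": {"owner": WORKER_ID,
                      "lease_expires": now + LEASE_SECONDS}},
            return_document=ReturnDocument.AFTER,
        )

    def extend_lease(feed_id):
        """Renew only while we still own the lease (guards the race above)."""
        result = feeds.update_one(
            {"_id": feed_id, "owner": WORKER_ID},
            {"$set": {"lease_expires": time.time() + LEASE_SECONDS}},
        )
        return result.matched_count == 1  # False: lease lost, stop the feed

Renewing well before LEASE_SECONDS elapses (say every LEASE_SECONDS / 3) keeps the race window small, and the return value of extend_lease tells the container to shut its feed down if the lease is ever lost.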

This solution would work, but there are some obvious sticking points. If the containers all start at roughly the same time, there needs to be a way to control how many jobs each container is allowed to start at once, so that one or two containers don't grab all of the jobs. One approach would be to pull one job every couple of seconds until the pool is empty and all feeds are leased. This could lead to uneven job distribution, and the ramp-up time might be long before all jobs are pulled from the pool and leased. I could also have each container fetch a feed, start it, then go grab another one, but some containers may start their feeds faster and fetch another job before a slower container finishes starting its feed, leading to container hot spots.
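A sketch of that staggered pull loop, building on the hypothetical try_acquire_one_feed() above (run_feed is a placeholder for whatever opens the socket and starts streaming):

    import random
    import threading
    import time

    def worker_loop():
        while True:
            feed = try_acquire_one_feed()
            if feed is not None:
                threading.Thread(target=run_feed, args=(feed,),
                                 daemon=True).start()
            # jittered pause before the next claim; the same loop doubles as
            # the periodic check for feeds newly added to the pool
            time.sleep(2 + random.random() * 2)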

Another approach could be consistent hashing. If each container knows its own ID and the IDs of the other containers, it can hash each feed configuration ID and figure out which container that feed belongs on. This would distribute the jobs more evenly, but I would still have to check periodically for new feeds, and handle cases such as a container being killed and its feeds' leases expiring.
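A sketch of the consistent-hashing variant, using virtual nodes to smooth the distribution (in practice the container IDs would come from ECS metadata; the ones below are placeholders):

    import bisect
    import hashlib

    def _point(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def build_ring(container_ids, vnodes=100):
        """Map each container to many ring points; more points, smoother spread."""
        return sorted((_point("%s#%d" % (cid, i)), cid)
                      for cid in container_ids for i in range(vnodes))

    def owner(ring, feed_id):
        """A feed belongs to the first container clockwise from its hash."""
        points = [p for p, _ in ring]
        idx = bisect.bisect(points, _point(feed_id)) % len(ring)
        return ring[idx][1]

    ring = build_ring(["container-a", "container-b", "container-c"])
    mine = [f for f in ("feed-1", "feed-2", "feed-3")
            if owner(ring, f) == "container-a"]

When a container dies, only the feeds that hashed to its points move to other containers, which limits churn; but as noted, every survivor still has to notice the membership change and re-run the assignment.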

I have also thought that the actor model, e.g. Akka, might be exactly suited to this problem, but that would not come without significant implementation complexity.

I am open to any and all suggestions, or any other holes you can poke in my existing proposals!

Different HPE RAM for Intel & AMD servers

I have an HPE ProLiant DL325 G10 server, and its specs state that HPE 838089-B21 RAM applies to it. The DL360 (Intel) servers accept HPE 835955-B21 RAM, which is about $45 cheaper. Spec-wise, the two modules are exactly the same; the only thing that seems to differ is the part number. Does anyone know why this is? And will 835955-B21 work perfectly in the DL325?

HPE doesn’t really answer the question. They just tell me that’s what the specs say…

New to servers, just received an Intransa VA-EXPANDR

A question about an Intransa VA-EXPANDR 12 × 2 TB disk array.

I received this 12-disk array along with a Dell PowerEdge 2850. I would like to set up RAID, but I'm having trouble understanding how to hook up the 12-disk array. It has an output port on the back that takes a SAS SFF-8088 cable, but the PowerEdge does not have a SAS port. The cable that came with it is an SFF-8088 to SFF-8470 SAS cable.

Do I hook the disk array directly up to the PowerEdge? In that case, would I need to buy a RAID controller with an SFF-8470 port for the PowerEdge, or a different type of card?

I've been looking this up for hours now and I'm lost. Thanks for any help.

RAM compatibility for HPE servers

I ordered an HPE ProLiant DL325 G10 server. It comes with an AMD EPYC 7401P and 32 GB (2 × 16 GB) of DDR4-2666 2Rx8 RDIMMs. I'd like to add more RAM, and I'm wondering if I can buy non-HPE-branded RAM, save a bunch of money, and have no compatibility issues. I looked up the module part numbers for the 16 GB and 32 GB sticks HPE sells and found them a lot cheaper without the HPE sticker. Will these work perfectly without any issues? And is there really anything different about the hardware in HPE RAM?

Multiple servers, networking issues

I have a couple of servers, each running a different website. They both have the same external IP address (I'm running them from my home, and I have only one router with one IP address).

Those two servers require the same ports; let's suppose port 80 (HTTP). So I opened port 80 for the first server, but what about the second one?

So now the question is: how can I connect to the right server when they both have the same IP and the same port?

Should I have two routers in my home with two different IPs? (And what if I had 50 servers; should I get 50 IPs?)
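From what I've read so far, the usual suggestion seems to be a single reverse proxy on port 80 that routes by hostname. A sketch of what I think that would look like, assuming nginx (the hostnames and internal IPs are placeholders):

    # one machine receives the forwarded port 80 and proxies by Host header
    server {
        listen 80;
        server_name siteone.example.com;
        location / {
            proxy_pass http://192.168.1.10;   # first internal server
            proxy_set_header Host $host;
        }
    }

    server {
        listen 80;
        server_name sitetwo.example.com;
        location / {
            proxy_pass http://192.168.1.11;   # second internal server
            proxy_set_header Host $host;
        }
    }

Is that the right direction, or is there a better way?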