Storage of SSL private key in load balancer vs. HSM

I have a setup in which the SSL certificates are terminated at the load balancer (i.e. traffic from the load balancer to the web servers is in plaintext). To do the SSL termination, the private key is stored on the load balancer itself. I also have an HSM in a data center.

I was told by the security team that best practice is to store the private key in an HSM.

I have read Should SSL be terminated at a load balancer? and I understand that there is nothing wrong with terminating SSL at the load balancer.

However, from a security perspective, should the private keys be stored on the load balancer itself? And are there any technical challenges to storing the SSL private keys in a central HSM instead?
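For context, here is roughly what I understand "the key stays in the HSM" to look like in practice, if the terminating proxy were nginx using an OpenSSL PKCS#11 engine. This is only a sketch; the engine, token, and object names are placeholders, and my actual load balancer may work differently:

    # Hypothetical nginx TLS termination with the private key held in an HSM via PKCS#11.
    # The (public) certificate stays on disk; only private-key operations go to the HSM.
    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/nginx/certs/example.com.pem;
        ssl_certificate_key "engine:pkcs11:pkcs11:token=my-hsm;object=example-key";
    }

I assume the trade-offs would be extra handshake latency (every private-key operation round-trips to the HSM) and the HSM becoming a dependency for all TLS termination, but that is part of what I am asking.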

How can I give a network load balancer (of any type) access to a port on a machine without opening that port to an entire VPC?

Is it possible to let an AWS load balancer for a non-HTTP port access a specific port on a host without opening that port to the entire VPC's subnet? I seem to remember reading that this might be possible with an IAM policy based on specific resources, or something like that.

When you create a (non-classic) network load balancer, the console says: "The security groups for your instances must allow traffic from the VPC CIDR on the health check port."

That is OK, but only barely, since this service has little to no authentication of its own.

Isn't there a way this can be done with IAM permissions instead of a security group? I read about AWS firewall security somewhere, and it mentioned that you can sometimes use an IAM policy, sometimes even a cross-account IAM policy, to connect to the machines behind the firewall.

Any suggestions? I can definitely deploy more machines or other AWS services if needed.
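For what it's worth, the closest thing I have found so far is not IAM at all, but tightening the security group: looking up the private IPs of the load balancer's own network interfaces and allowing only those /32s on the health check port, instead of the whole VPC CIDR. A sketch with placeholder IDs and addresses (I believe NLB network interfaces carry a description like "ELB net/<name>/<id>", but I have not confirmed that):

    # Find the private IPs of the NLB's network interfaces.
    aws ec2 describe-network-interfaces \
        --filters Name=description,Values="ELB net/my-nlb/*" \
        --query "NetworkInterfaces[].PrivateIpAddress"

    # Allow only those addresses on the health check port, not the whole VPC CIDR.
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 8443 \
        --cidr 10.0.1.57/32

That still isn't IAM-based, though, which is why I'm asking whether the policy approach I half-remember actually exists.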

How can I tell that an endpoint is down when there is an L4/L7 balancer in front of it?

I'm writing an API fuzzer, and I want to detect whether a request sequence causes an endpoint of some service to go down. Of course I can get a 500 response code, but that may be returned by the endpoint's own code. Is there any exact way to find out whether the server is down?

I don't know in advance what kind of balancing the service uses, so I would be glad to learn a universal solution, or failing that, a particular one.
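In case it makes the question concrete, this is the sort of check I have now (a C# sketch; the URL and timeout are placeholders). It treats transport-level failures and gateway-style status codes as "down", but I don't know whether that is reliable behind every kind of balancer:

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class Probe
    {
        static readonly HttpClient Client = new HttpClient { Timeout = TimeSpan.FromSeconds(5) };

        // True when the failure looks like an outage rather than the endpoint's own error handling.
        static async Task<bool> LooksDownAsync(string url)
        {
            try
            {
                HttpResponseMessage response = await Client.GetAsync(url);
                // 502/503/504 typically come from a balancer whose backend is unreachable;
                // a plain 500 may just be the endpoint's own code reporting an exception.
                return response.StatusCode == HttpStatusCode.BadGateway
                    || response.StatusCode == HttpStatusCode.ServiceUnavailable
                    || response.StatusCode == HttpStatusCode.GatewayTimeout;
            }
            catch (HttpRequestException) { return true; }   // connection refused/reset, DNS failure
            catch (TaskCanceledException) { return true; }  // client-side timeout
        }

        static async Task Main()
        {
            Console.WriteLine(await LooksDownAsync("https://service.example/api/health"));
        }
    }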

How to redirect a port range on a load balancer

I'm a little new to this and trying to figure it out. I have three web servers running Ubuntu 16.04 and Apache 2. One of the web servers serves as a load balancer for the other two. I have the load balancing part working, but I'm trying to set it up so that the port range 60000-65000 on the load balancer is redirected to the web servers on port 80. My guess is that this is done using iptables, but that's just a guess. Has anyone implemented this before? Thanks in advance.
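This is the kind of rule I had in mind, if iptables is indeed the right tool. It assumes the balancer should terminate the range itself, redirecting the whole range to its own port 80 where Apache's load-balancing virtual host picks it up (an untested sketch):

    # Redirect TCP ports 60000-65000 arriving at the load balancer to its own port 80,
    # where the Apache load-balancing virtual host is listening.
    iptables -t nat -A PREROUTING -p tcp --dport 60000:65000 -j REDIRECT --to-ports 80

If the range instead had to go straight to the backends, I assume it would be a DNAT rule to a backend address, but then iptables rather than Apache would be doing the distribution.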

Load balancer intermittently requesting authentication (SharePoint 2013)

We have two WFEs and a Citrix NetScaler load balancer (LB). We have an issue where the load balancer "randomly" asks users to authenticate. We know it is the load balancer because its URL is shown in the authentication box. We have ensured that our web applications are trusted and that the logged-on user's credentials are always used for web application access, but we are not sure why the LB would be asking users to authenticate.

Often, if the user just closes the dialog, they can continue working, but content linked to My Sites is usually missing (user profile images, for instance: a red cross where the image should be). We are sure it happens in other cases too, but we cannot clearly qualify them all. If users enter their credentials and choose to save the username and password, we get mixed results, and they often cannot get back to the intranet SP2013 site unless they clear out their Windows credentials.

Does anyone in the SP2013 world have experience with this issue?

How to correctly set up AspNet Core 2 authentication behind a load balancer?

I've set up AspNet Core 2 authentication successfully, but now I would like to get it working behind a load balancer.

Because the load balancer address is different from my app's address, I'm changing the redirect URI in ConfigureServices in my Startup.cs like this…

options.Events.OnRedirectToIdentityProvider = async n =>
{
    // point the identity server's redirect back at the front-facing (load balancer) address
    n.ProtocolMessage.RedirectUri = "https://frontfacingaddress.com";
    await Task.FromResult(0);
};

This works fine: I successfully authenticate, and the callback from the identity server calls https://frontfacingaddress.com/signin-oidc. That is handled correctly, and handling OnTokenResponseReceived shows that I successfully receive the token.

The problem is that it then makes another call to the identity server, but this time from the app's actual (not load-balanced) address. When that comes back, it fails with the error: AspNetCore.Correlation.OpenIdConnect cookie not found.

So the Fiddler trace looks like this:

302 HTTPS  frontfacingaddress.com      /account/signin
200 HTTPS  identity.serveraddress.com  /connect/authorize/callback etc...
302 HTTPS  frontfacingaddress.com      /signin-oidc   <-- this is where I successfully receive the code, but then:
302 HTTPS  actualwebaddress.com        /account/signin
200 HTTPS  identity.serveraddress.com  /connect/authorize/callback etc...
400 HTTPS  frontfacingaddress.com      /signin-oidc   <-- this is the 400 cookie-not-found error

Why, after successfully authenticating, is it then firing again from the actual address and failing?
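In case it helps frame an answer: I'm aware of the forwarded-headers middleware in Microsoft.AspNetCore.HttpOverrides, which as I understand it makes the app build URLs from the front-facing host, assuming the load balancer sends the X-Forwarded-* headers. Is something like this (my assumption, not my current setup) the right approach instead of overriding RedirectUri by hand?

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.HttpOverrides;

    public void Configure(IApplicationBuilder app)
    {
        // Trust the balancer's forwarded headers so redirect URIs are built
        // from the front-facing address rather than the app's own address.
        app.UseForwardedHeaders(new ForwardedHeadersOptions
        {
            ForwardedHeaders = ForwardedHeaders.XForwardedFor
                             | ForwardedHeaders.XForwardedProto
                             | ForwardedHeaders.XForwardedHost
        });

        app.UseAuthentication();
        // ... rest of the pipeline ...
    }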

AWS application load balancer 404

I'm following a tutorial to create an application load balancer, and the listener paths are as follows:

LB -> path1 -> server1 or LB -> path2 -> server2

The problem:

I can get to server1 via the LB URL OK, but when I try to go to server2 I receive a 404 page.

If I delete and reconfigure the LB and swap the servers around, then I can get to server2, but I get a 404 if I try to go to server1.

I can get to both servers just fine directly.
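For reference, the rules I'm creating are roughly the CLI equivalent of this (the ARNs, priority, and path patterns are placeholders), with an analogous rule sending /server2/* to the second target group:

    aws elbv2 create-rule \
        --listener-arn arn:aws:elasticloadbalancing:region:account:listener/app/my-alb/xxxx/yyyy \
        --priority 10 \
        --conditions Field=path-pattern,Values="/server1/*" \
        --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:account:targetgroup/tg-server1/zzzz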

Thanks for your help

HTTPS works only with Load Balancer DNS – AWS

I have a problem with my HTTPS configuration on AWS; I hope you can help.

What I already have:

  1. EC2 instance with an Elastic IP; the required ports are open in its security group.
  2. Load balancer attached to the EC2 instance (with the same security group as the instance).
  3. SSL certificate from AWS (ACM).
  4. Domain, transferred from another provider (not Amazon); the old setup used just the Elastic IP for its DNS configuration. (Could this be the problem?)
  5. Route 53, configured for the domain with the AWS (ACM) certificate; for the IPv4 address I am using an alias to the load balancer.

How it works:

  • EC2: the Elastic IP and public DNS work only for HTTP, which I guess is as it should be.
  • Load balancer: both HTTPS and HTTP work, but only via the load balancer's DNS name.
  • Route 53 (domain): works only for HTTP; every HTTPS request returns ERR_CONNECTION_REFUSED.

Will it fix the problem if I replace the EC2 instance's Elastic IP in the domain's DNS with the load balancer's public DNS name?
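For completeness, the Route 53 alias I mentioned is the equivalent of a change batch like this (the domain, hosted zone ID, and load balancer name are placeholders; as I understand it, the HostedZoneId here is the load balancer's own zone ID, not my domain's):

    {
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "example.com.",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "ZZZZZZZZZZZZZZ",
            "DNSName": "my-load-balancer-1234567890.us-east-1.elb.amazonaws.com.",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }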

Can an AWS Classic Load Balancer redirect traffic from a public IP address to an EC2 instance with only private IPs?

I have a Classic Load Balancer with public IP addresses. Do the EC2 instances it routes traffic to need public IP addresses as well, or will it successfully forward the traffic to private IP addresses? They're all located in different subnets in the same VPC.

The Classic Load Balancer let me add the instances with only private IP addresses without any sort of complaint or error.

NGINX: determining what port a request forwarded by an AWS load balancer was originally sent to

I want a setup where an Amazon load balancer takes requests received on any of ports 21001, 21002, and 21003 and sends them all to an NGINX reverse proxy on port 443 (regardless of the original port), which in turn forwards each request to a member of a cluster by IP address. Is there a directive I can use in nginx.conf to determine which port the original request was sent to, so I can route on it? X-Forwarded-Proto seems close, but it only distinguishes https from http, not arbitrary high TCP ports.
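In case a sketch helps: the closest I have come is keying on the X-Forwarded-Port header, which I believe Amazon's HTTP-mode load balancers add (though I have not confirmed it for every load balancer type), and mapping it to an upstream. The upstream names and addresses are placeholders:

    # Route on the port the client originally hit, as reported by the balancer.
    map $http_x_forwarded_port $backend {
        21001   cluster_a;
        21002   cluster_b;
        21003   cluster_c;
        default cluster_a;
    }

    upstream cluster_a { server 10.0.0.11:8080; }
    upstream cluster_b { server 10.0.0.12:8080; }
    upstream cluster_c { server 10.0.0.13:8080; }

    server {
        listen 443 ssl;
        # ssl_certificate / ssl_certificate_key as usual

        location / {
            proxy_pass http://$backend;
        }
    }

But I don't know whether X-Forwarded-Port is guaranteed here, hence the question.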