Localhost website not accessible from Public IP despite port forwarding

My tiny office has one router, connected to an ADSL line on one end and to my laptop on the other. On the office network, the laptop's local IP is 192.168.1.2.

On the office router I have set up port forwarding for port 22, for SSH access. I also have a DuckDNS script that lets me ssh -v -t -L 5900:localhost:5900 myname.duckdns.org into my office laptop whenever I want.

I followed the same port-forwarding procedure to configure the router to forward port 8082 to 192.168.1.2 (TCP; the WAN interface is pppoe2), then ran a Python/Node.js HTTP server listening on 0.0.0.0:8082.
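For reference, a throwaway listener like that can be spun up with Python's built-in module (this exact invocation is my assumption; a Node equivalent behaves the same):

    python3 -m http.server 8082 --bind 0.0.0.0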

If I try to access the newly spun-up server from the public IP, I get a timeout. This is the problem: I can SSH into the remote machine, but a website hosted on it doesn't work.

Steps tried:

I remote-desktop into the office laptop (over the forwarded port 5900) and find that Firefox can open localhost:8082, 127.0.0.1:8082 and 192.168.1.2:8082 properly.

I tried shutting down extra services like gogs and nginx (which was listening on port 80 even though I didn’t tell it to) via systemctl, but still no luck.
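Concretely, the shutdown looked something like this (the ss check is just to confirm what is still listening):

    sudo systemctl stop nginx gogs
    sudo ss -tlnp | grep -E ':(80|8082)'   # which sockets are still open, and whose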

Further, curl http://PUBLIC_IP:8082 gives different outputs depending on where I run it:

  1. At home, in my Cmder, I get curl: (7) Failed to connect to PUBLIC_IP port 8082: Timed out
  2. However, in the SSH terminal (i.e. on the remote machine itself), I get curl: (7) Failed to connect to PUBLIC_IP port 8082: Connection refused

Why is the connection refused?

Thanks to @davidgo, I tried

$ sudo tcpdump -vv -i enp7s0 | grep 8082
tcpdump: listening on enp7s0, link-type EN10MB (Ethernet), capture size 262144 bytes

If I curl localhost:8082 or 192.168.1.2:8082, I see a 200 in the server logs but no output from the tcpdump command above.
But if I curl PUBLIC_IP:8082 from

  1. inside the SSH session, I get
    duckDNSsubDomain.40626 > abts-north-dynamic-031.P3.P2.P1.airtelbroadband.in.8082: Flags [S], cksum 0x469a (incorrect -> 0x84f5), seq 18095393, win 64240, options [mss 1460,sackOK,TS val 2474578357 ecr 0,nop,wscale 7], length 0
    abts-north-dynamic-031.P3.P2.P1.airtelbroadband.in.8082 > duckDNSsubDomain.40626: Flags [R.], cksum 0x8cea (correct), seq 0, ack 18095394, win 0, length 0

and curl quickly complains with "connection refused". (BTW, my public IPv4 looks like P1.P2.P3.31.)

  2. And if I do the same curl from my home computer, I see
    157.32.251.70.50664 > duckDNSsubDomain.8082: Flags [S], cksum 0x299d (correct), seq 132055921, win 64240, options [mss 1370,nop,wscale 8,nop,nop,sackOK], length 0
    (the same SYN retransmitted five times, with no reply)

and curl fails with timeout.
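Side note: letting tcpdump filter by port itself, with name resolution off, gives a cleaner capture than piping through grep (which can miss lines when tcpdump resolves port numbers to service names):

    sudo tcpdump -nn -vv -i enp7s0 'tcp port 8082'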

Now I am guessing my ISP doesn't like random ports, so I tried hosting my webserver on port 80. Again, localhost and 192.168.1.2 work as expected, but http://PUBLIC_IP:80/ opens up the router control panel 🙁

So I tried hosting it on a well-known port that's not 80 or 443. I chose 21 (FTP) and used sudo to run the webserver listening on 0.0.0.0:21, but Firefox/Chrome refuse to open it (browsers block a list of "unsafe" well-known ports, and 21 is on it), and curl hangs for a while before failing with a timeout.

cPanel route traffic through two public IPs in different physical locations [closed]

I would like to route an IP address I have on another virtual machine over to my main cPanel server so it can be used there.

The cPanel server's IP is 198.51.100.1, and the other machine has 203.0.113.1, plus 203.0.113.2 as an additional IP. How can I route 203.0.113.2 to the cPanel server? I presume I would need to route it through 203.0.113.1 from the cPanel server's side?

The end goal is to add the IP 203.0.113.2 inside cPanel to be used to host a website.
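I presume the plumbing would look something like the following untested sketch (assuming both machines run Linux, the provider permits GRE between them, and 203.0.113.2 is already routed to the second machine):

    # On the machine that currently holds 203.0.113.1:
    ip tunnel add gre1 mode gre local 203.0.113.1 remote 198.51.100.1 ttl 255
    ip link set gre1 up
    ip route add 203.0.113.2/32 dev gre1
    sysctl -w net.ipv4.ip_forward=1

    # On the cPanel server (198.51.100.1):
    ip tunnel add gre1 mode gre local 198.51.100.1 remote 203.0.113.1 ttl 255
    ip link set gre1 up
    ip addr add 203.0.113.2/32 dev gre1

Once 203.0.113.2 exists as a local address on the cPanel server, it should be assignable to a site inside cPanel (return traffic sourced from 203.0.113.2 may additionally need a policy route back through the tunnel if the provider filters source addresses).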

Is “document loaded” different on admin side than public side?

I'm writing a plugin to support Google Graphs. It's working fine on the public side, but it intermittently renders weirdly on the admin side and intermittently throws errors that make me think the JS is running too soon (for example, TypeError: null is not an object (evaluating 'document.getElementById(colControlId).appendChild')), but when I inspect the page, that element is absolutely there.

The function that drives everything is registered to the Google library as a callback:

// Load the Visualization API and the corechart package.
google.charts.load('current', {'packages':['corechart', 'controls']});

// Set a callback to run when the Google Visualization API is loaded.
google.charts.setOnLoadCallback(initializeData);

// Query a CSV for the data
function initializeData() {
    let graphs = document.getElementsByClassName("cwraggbp_chart");

The Google docs say, "When the packages [loaded above by google.charts.load] have finished loading, this callback function will be called with no arguments. The loader will also wait for the document to finish loading before calling the callback."

But, I can’t see how I can be getting this error intermittently if the DOM is fully loaded, and I don’t know what "document loaded" means. So, since this works flawlessly on the public side, but not in admin, I’m wondering if … something is different.
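One guard I am considering, sketched below (my own idea, not something from the Google docs), is to defer the work until the DOM is definitely ready, in case admin pages reach the callback earlier than public pages do:

    // Run initializeData only once the DOM is parsed, in case the loader's
    // callback can fire before the admin page has finished building.
    google.charts.setOnLoadCallback(function () {
        if (document.readyState === 'loading') {
            document.addEventListener('DOMContentLoaded', initializeData);
        } else {
            initializeData();
        }
    });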

I’m loading the scripts thusly:

public function enqueue_scripts() {
    wp_enqueue_script( $this->plugin_name . '-public',
        plugin_dir_url( __FILE__ )
            . 'js/cwra-google-graph-block-public.js', // this is where the JS above is
        array( $this->plugin_name . 'googlecharts', 'jquery' ),
        $this->date_version(
            'js/cwra-google-graph-block-public.js'), false );
    wp_localize_script( $this->plugin_name . '-public',
        'cwraggbp',
        array(
            'contentdir' => wp_upload_dir()['baseurl']
                . '/cwraggb'
        ));

    wp_enqueue_script( $this->plugin_name . 'googlecharts',
        'https://www.gstatic.com/charts/loader.js',
        array(), $this->version, false );
}
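Both scripts are enqueued with the last argument false, i.e. printed in the head. One thing I could try (untested) is loading them in the footer instead, after the chart containers exist in the markup:

    // Passing true as the last argument prints the tag in the footer,
    // after the elements the callback looks up have been rendered.
    wp_enqueue_script( $this->plugin_name . '-public',
        plugin_dir_url( __FILE__ ) . 'js/cwra-google-graph-block-public.js',
        array( $this->plugin_name . 'googlecharts', 'jquery' ),
        $this->date_version( 'js/cwra-google-graph-block-public.js' ), true );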

How critical is encryption-at-rest for public cloud hosted systems

I work as a solutions architect for web-based systems on AWS, and as part of this role I often respond to information security questionnaires. Nearly all questionnaires request information about data encryption at rest and in transit. However, only a much smaller percentage ask about other security aspects, such as password policies or common web application security issues as published by OWASP.

I wonder how common or likely unauthorized access to client data actually is within a public cloud provider such as AWS, Azure, or GCP. It seems a very high barrier for an external party to clear; even the data centers of small local web hosting companies seem to have very good physical access security. And informal conversations with bank employees tell me that accessing someone's bank account without reason leads to instant dismissal, so surely public cloud providers have similar controls in place?

This is not to challenge the value of encryption at rest (it is very cheap to enable, so there is no reason not to), but where does it sit in terms of priorities?

get_comments with post_status ‘public’ retrieves NULL result

This is my loop:

<?php $comments = get_comments(array(
    'status' => 'approve',
    'type' => 'comment',
    'number' => 10,
    'post_status' => 'public'
)); ?>
<ul class="sidebar-comments">
    <?php foreach ($comments as $comment) { ?>
        <li>
            <div><?php echo get_avatar($comment, $size = '35'); ?></div>
            <em style="font-size:12px"><?php echo strip_tags($comment->comment_author); ?></em> (<a href="<?php echo get_option('home'); ?>/?p=<?php echo ($comment->comment_post_ID); ?>/#comment-<?php echo ($comment->comment_ID); ?>">link</a>)<br>
            <?php echo wp_html_excerpt($comment->comment_content, 35); ?>...
        </li>
    <?php } ?>
</ul>

This always gives an empty result (no errors). If I remove 'post_status' => 'public' from the get_comments arguments, the function works: comments load, but they include comments from private posts (which I don't want).

Any ideas on why 'post_status' => 'public' is not working?
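For what it's worth, WordPress's registered status for published posts is 'publish' ('public' is not a status name), so a variant worth trying might be:

    <?php
    // 'publish' is the registered post status; 'public' is not.
    $comments = get_comments(array(
        'status'      => 'approve',
        'type'        => 'comment',
        'number'      => 10,
        'post_status' => 'publish'
    )); ?>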

Is it possible to export an expired GPG subkey’s public key without signatures?

Based on "Is it possible to export a GPG subkey's public component?" I got familiar with:

gpg --keyid-format long --with-fingerprint --list-key {e-mail}
gpg --export --armor --output public-key.asc 633DBBC0! # for ssb1

and

gpg --export-options export-minimal {key-id} 

I also found the following which I added to my gpg.conf.

list-options show-unusable-subkeys 

In the context of a YubiKey, I sometimes need to transfer public key components to a new keyring on a new system in order to decrypt an old file. For some reason gpg --card-status is not enough to get the ball rolling: gpg keeps reporting that no key exists to decrypt the file. After importing the public key component, it works. I read somewhere on Stack that "the yubikey has not enough data on it to reconstruct the public key component" (might add the source later).
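Concretely, the dance on a fresh system looks roughly like this (file names are placeholders):

    gpg --import public-key.asc   # restore the public component
    gpg --card-status             # the key stubs can now pair with the card
    gpg --decrypt old-file.gpg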

However, I don't want to export all the old subkeys (hence the keyid!), only a select few, and I don't want to export any signatures (hence export-minimal).

So this is what I tried, but it did not produce the desired result:

gpg --armor --export --export-options export-minimal {subkeyid1}! {subkeyid2}!

or

gpg --armor --export --export-options export-minimal {subkeyid1}!
gpg --armor --export --export-options export-minimal {subkeyid2}!

If I pick just one {subkeyx}!, the output is the same. As far as I can tell, the combination of export-minimal and pointing at a subkey is simply not working. I don't know of any switch I could put in front of the keyid; do you?

Then I tried the following and merged them later:

gpg --armor --export --output file1.asc {subkeyid1}!
gpg --armor --export --output file2.asc {subkeyid2}!

But these public key components contain unwanted signatures (and the primary key's public part and uid, which is acceptable).

I used gpg --armor --export {subkeyid2}! | gpg to read the output. If I do this with unexpired subkeys, I get the expected list of keys, but if I do it with expired subkeys, the subkey is not listed.
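A lower-level way to inspect an export, without importing it and without the listing rules that hide expired subkeys, is to dump its packets; signature packets show up explicitly there:

    gpg --list-packets file1.asc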

The question: so, how do I export two expired subkeys' public key components without any signatures?


(Sidenote; meta question; alternative route):

gpg --card-status delivers:

[...]
General key info..: sub {rsaX/eccX}/{keyid} {date} {name} {address}
sec#  {rsaX/eccX}/{keyid}  {created date}  {expires date}
[...]
ssb>  {rsaX/eccX}/{subkeyid1}  {created date}  {expires date}
      card-no: {nr}
ssb>  {rsaX/eccX}/{subkeyid2}  {created date}  {expires date}
      card-no: {nr}

And as we know from gpg -k and gpg -K: 'sub' means public subkey, 'ssb' means private subkey, and the '>' indicator means the material is on a smartcard. So this all seems to confirm that the public material is not on the card.

What prevents me from using some server's public key and impersonating another server [duplicate]

I have read a lot about RSA encryption, DH key exchange, digital signatures, and the whole TLS protocol.

There's something I am missing regarding public key signature validation.

Let's say some website has a certificate signed with its private key; as a client, I have access to the public key.

But if the server only ever sends the public key to the client, what is preventing me, as an attacker, from taking this public key and presenting it to whoever wants to communicate with me?

I mean, where does private-key authentication come into play?

I created this small C# program to demonstrate:

private const int _port = 4455;

static void Main(string[] args)
{
    Task.Run(async () =>
    {
        await TcpServerInit();
    });

    Task.Run(async () =>
    {
        await TcpClientInit();
    });

    Console.ReadLine();
}

private static async Task TcpServerInit()
{
    var server = new TcpListener(IPAddress.Any, _port);
    server.Start();

    while (true)
    {
        TcpClient client = await server.AcceptTcpClientAsync();
        using (var netStream = client.GetStream())
        {
            ServicePointManager.ServerCertificateValidationCallback = ValidateCertificate;
            ServicePointManager.Expect100Continue = true;

            using (var ssl = new SslStream(netStream, false))
            {
                using (var cert = new X509Certificate2(@"MyPublicCert.cer"))
                {
                    await ssl.AuthenticateAsServerAsync(cert, false, SslProtocols.Tls12, true);
                }
            }
        }
    }
}

private static async Task TcpClientInit()
{
    using (TcpClient client = new TcpClient("localhost", _port))
    {
        using (SslStream sslStream = new SslStream(client.GetStream(), false,
            new RemoteCertificateValidationCallback(ValidateCertificate), null))
        {
            var servername = "CN=localhost";
            await sslStream.AuthenticateAsClientAsync(servername);
            byte[] message = Encoding.UTF8.GetBytes("Hello");
            sslStream.Write(message);
            sslStream.Flush();
        }
    }
}

private static bool ValidateCertificate(object sender, X509Certificate certificate,
    X509Chain chain, SslPolicyErrors sslPolicyErrors)
{
    // cert validation
    return true;
}
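(One side observation about my own demo: MyPublicCert.cer holds only the public half, and as far as I know SslStream refuses to authenticate as a server without the matching private key, which already hints at where the private key enters the picture. A PKCS#12 bundle would be needed instead; the file name below is made up:)

    // Hypothetical: a .pfx (PKCS#12) file carries the certificate plus its
    // private key, which SslStream requires for the server role.
    using (var cert = new X509Certificate2(@"MyCertWithKey.pfx", "password"))
    {
        await ssl.AuthenticateAsServerAsync(cert, false, SslProtocols.Tls12, true);
    }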

Why can't we encrypt the message with the sender's private key and the receiver's public key when sending messages through a server?

I read about why we need E2EE and cannot rely only on HTTPS for sending messages through a messaging app. The reason, as I understood it, is that when the sender sends a message to the server, the TLS connection is associated with the server: TLS terminates at the server, and whoever controls the server can view the messages, since they are no longer encrypted at that point. But in this process, when we send a message to the server, we first encrypt the message with the sender's private key and then with the server's public key.

My question is: why can't we encrypt the message with the sender's private key and then the receiver's public key? That way, even when it reaches the server, the server cannot view anything, since the message can only be decrypted with the receiver's private key.

If this is possible, then why do we use methods like Diffie Hellman key exchange?
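To make the proposed scheme concrete, here is a minimal sketch of sign-with-the-sender's-private-key, encrypt-to-the-receiver's-public-key (using the Python cryptography package and RSA; generated keys stand in for keys the apps would already hold):

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    sender = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    receiver = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    message = b"hello"
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # The sender signs with their private key and encrypts to the
    # receiver's public key; the relaying server sees only ciphertext.
    signature = sender.sign(message, pss, hashes.SHA256())
    ciphertext = receiver.public_key().encrypt(message, oaep)

    # Only the receiver can decrypt; verify() raises InvalidSignature
    # if the message was not signed by the sender's private key.
    plaintext = receiver.decrypt(ciphertext, oaep)
    sender.public_key().verify(signature, plaintext, pss, hashes.SHA256())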

What prevents someone from spoofing their public key when trying to establish an SSH connection?

Recently I've been trying to learn the mechanisms behind SSH keys, but I came across a question that I haven't been able to find an answer to (I haven't figured out how to word it so that searching would give me the answer).

Basically, we add our local machine's public key to the server's authorized_keys file, which allows us to be authenticated automatically when we ssh into the server later on. My question is: what if someone takes my public key (it is public, after all) and replaces their own public key with it? When the "attacker" tries to connect to the server, what part of the process allows the server to know that they do not have the correct private key?

I read somewhere that with RSA, it is possible for a user (say, user A) to encrypt/sign a message with their private key, and for others to then decrypt this message using A's public key, thus proving that A is really who they claim to be. However, apparently this is not true for all cryptosystems, where it is not possible to sign with a private key (according to "What happens when encrypting with private key?"; feel free to correct this if it is wrong). In those cases, how does the server make sure that the user is really who they claim to be?
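For illustration, here is the possession-proof idea in miniature (a conceptual sketch with the Python cryptography package and Ed25519, not the actual SSH wire protocol): the server demands a signature over per-session data, and only the holder of the private key can produce one that verifies against the key in authorized_keys.

    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Client side: the private key never leaves the machine.
    private_key = ed25519.Ed25519PrivateKey.generate()
    public_key = private_key.public_key()   # this half goes into authorized_keys

    # Server side: demand a signature over data unique to this connection.
    session_data = b"unique per-connection session identifier"
    signature = private_key.sign(session_data)   # only the key holder can do this

    # Copying the public key is useless: verification raises InvalidSignature
    # for any signature not made with the matching private key.
    public_key.verify(signature, session_data)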