Timeout connecting to PostgreSQL from remote address

I’m going nuts here.

We had a production server running PostgreSQL 9.6; it took a dump on Thursday, we thought we had it up and running, but discovered an issue this morning with a materialized view. Any attempt to drop, alter, index or analyze the view locked up without throwing an error.

First step was to reinstall 9.6. On doing so, we were able to restore the data with no errors, and everything looked fine from the local machine.

However, we were not able to connect to the database from any remote IP address on the server’s network.

The system sits on a private network, and I connect to it through VPN. Two production web servers connect to it via the private network.

We’re running Windows Server 2016, and have tried the following to absolutely no effect:

  • Edited postgresql.conf ‘listen_addresses’ to ‘*’, ‘’, and a comma-delimited list of specific addresses.

  • Changed the port in postgresql.conf

  • Edited the pg_hba.conf ADDRESS field to: ‘’, ‘X.Y.Z.0/24’, ‘samenet’

  • Deleted and recreated the Windows firewall policy

  • Loosened up the incoming IP address restrictions in Windows firewall until there were no IP restrictions.

  • Turned off Windows firewall.

  • Deleted PostgreSQL 9.6, scrubbed every reference to it from the registry, except for benign entries like ‘recent items’, deleted every file from the HD.

  • Installed PGSQL 13

Same issue.

Tried these additional steps in 13, again, with no effect.

  • Changed the port in postgresql.conf

  • Changed ‘listen_addresses’ in postgresql.conf to the variations used above.

  • Used ‘all’ in the ADDRESS field of the PGSQL 13 pg_hba.conf file, as well as the other options (e.g., X.Y.0.0/16).

As far as I know, I’ve tried all the obvious fixes: pg_hba.conf, postgresql.conf, Windows firewall, with absolutely no change.
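For reference, this is the minimal remote-access configuration I believe should work (the subnet and auth method here are placeholders; substitute your own):

```ini
# postgresql.conf — requires a service restart after editing
listen_addresses = '*'
port = 5432

# pg_hba.conf — a reload is enough after editing
# TYPE  DATABASE  USER  ADDRESS       METHOD
host    all       all   X.Y.Z.0/24    md5
```

With that in place and remote connections still timing out, one thing worth checking on the server itself is whether postgres.exe is actually bound to 0.0.0.0:5432 (e.g. `netstat -ano | findstr 5432`). If it only shows 127.0.0.1, the running instance is not reading the postgresql.conf being edited — which can happen if a leftover service registration or second data directory from the old install is still in play.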

Short of torching the whole d*mn thing and starting over, I don’t know what else to try.

harvester – hardcoded timeout?


I have a problem – I cannot properly control the timeout for a custom search engine.
There seems to be a hardcoded read timeout of ~30 seconds… How can I lift this timeout? It is nowhere to be found in the config or INI settings…

If the web server (custom search engine) takes more than 30 seconds to process a request, ScrapeBox fails. This timeout should be configurable in either the config INI file or the menu, but I cannot find it anywhere.

[Image: j2fM4Qs.png]

[Image: rrX2X3e.png]
[Image: urb3oqC.png]

How to reproduce “resource busy and acquire with NOWAIT specified or timeout expired ” error on local machine in oracle

I am running an Oracle database and I am getting the error "resource busy and acquire with NOWAIT specified or timeout expired", and because of this error I am getting some other exceptions. On my local machine I am not getting this "resource busy…" error. Basically, I want to reproduce this error on my local machine. Is there any way to trigger it manually?
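This error (ORA-00054) is raised whenever a session requests a lock with NOWAIT on a resource another session already holds, so it can be reproduced with two sessions and any throwaway table (the table and column names below are just placeholders):

```sql
-- Session 1: take a row lock and do NOT commit
CREATE TABLE busy_demo (id NUMBER);
INSERT INTO busy_demo VALUES (1);
SELECT * FROM busy_demo WHERE id = 1 FOR UPDATE;

-- Session 2 (a second connection): any NOWAIT acquisition of the same
-- resource now fails immediately with "resource busy and acquire with
-- NOWAIT specified or timeout expired"
LOCK TABLE busy_demo IN EXCLUSIVE MODE NOWAIT;

-- DDL against the locked table, e.g.
--   ALTER TABLE busy_demo ADD (x NUMBER);
-- fails the same way, because DDL takes an exclusive lock without waiting.
```

In production the "session 1" role is usually played by an uncommitted transaction from another application or a long-running batch job, which is why the error shows up there but not on an idle local machine.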

What am I doing wrong with this SOAP request? Getting error “invalid timeout format” [closed]

Request:

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <SecurityHeader xmlns="http://services.medconnect.net/submissionportal">
      <UserName>2143883</UserName>
      <Password><![CDATA[I3zt!7&W]]></Password>
    </SecurityHeader>
  </soap:Header>
  <soap:Body>
    <SubmitSync xmlns="http://services.medconnect.net/submissionportal">
      <request><![CDATA[ISA*00*          *00*          *ZZ*EXPEDIUM       *30*204202692      *200904*0419*^*00501*007281118*0*P*:~GS*HS*EXPEDIUM*204202692*20200904*0419*7281119*X*005010X279A1~ST*270*007281120*005010X279A1~BHT*0022*13*7281120*20200904*0419~HL*1**20*1~NM1*PR*2*BCBS OF NORTH CAROLINA*****PI*10383~HL*2*1*21*1~NM1*1P*2*BEAUFORT COUNTY HEALTH DEPARTMENT*****XX*1679576763~REF*TJ*566001521~PRV*PE*PXC*261QP0905X~HL*3*2*22*0~TRN*1*1013076869*9919649646~NM1*IL*1*BROWN*JEAN*M***MI*KBOW1747326401~REF*SY*141117752~DMG*D8*19650504*F~DTP*291*D8*20200904~EQ*30~SE*16*007281120~GE*1*7281119~IEA*1*007281118]]></request>
      <requestFormat>EDI</requestFormat>
      <responseFormat>EDI</responseFormat>
      <synchronousTimeout>00:01:00</synchronousTimeout>
      <submissionTimeout>00:01:00</submissionTimeout>
    </SubmitSync>
  </soap:Body>
</soap:Envelope>

Response:

<faultstring>Invalid Timeout Format: , Valid Format: d.hh:mm:ss, Note: Hours &lt;= 23, Minutes &lt;= 59, Seconds &lt;= 59</faultstring>

Please advise.
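Reading the fault literally: the service wants a .NET-style TimeSpan in `d.hh:mm:ss` form (and seems to have parsed the submitted value as empty). If that is the case, including the leading day component may be all that is needed — this is a guess based only on the fault text, not on the service's documentation:

```xml
<synchronousTimeout>0.00:01:00</synchronousTimeout>
<submissionTimeout>0.00:01:00</submissionTimeout>
```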

WordPress PHP custom function is causing 500 Internal Server Error Connection Timeout

I wrote a custom script to insert a post into WordPress and upload 3 images to the WP uploads directory.

To write the post I use the WP function wp_insert_post( $wp_post_array, true );. Inside the script at various stages I also use wp_get_attachment_image_src( $image_id, $size )[0];, wp_get_attachment_metadata( $image_id ); and wp_get_attachment_image( $image_id, 'large', false, $image_attr ); but to upload the images and create their metadata I wrote this custom function below…

I must have messed up somewhere because I get a 500 Connection Timeout error when I run this code (even though it is only 3 images that are less than 1Mb each in size).

Can somebody spot what I am doing wrong? Thank you for your eyes and experience.

function insert_WP_Images_Data( $post_id, $image_url ) {
    global $writer_WP_id;
    $upload_dir = wp_upload_dir();

    if ( isset( $image_url ) && isset( $post_id ) ) {
        $filename = basename( $image_url );
        if ( wp_mkdir_p( $upload_dir['path'] ) )
            $file = $upload_dir['path'] . '/' . $filename;
        else
            $file = $upload_dir['basedir'] . '/' . $filename;

        $image_data = file_get_contents( $image_url );
        file_put_contents( $file, $image_data );

        $wp_filetype = wp_check_filetype( $filename, null );
        $attachment  = array(
            'post_author'    => $writer_WP_id,
            'post_content'   => '',
            'post_title'     => $_SESSION['artist'],
            'post_status'    => 'inherit',
            'post_name'      => pathinfo( $image_url )['filename'],
            'post_mime_type' => $wp_filetype['type'],
            'post_parent'    => $post_id,
            'guid'           => $upload_dir['url'] . '/' . $filename
        );
        // 'post_title' => sanitize_file_name( $filename ),

        $image_id = wp_insert_attachment( $attachment, $file, $post_id );

        require_once( ABSPATH . 'wp-admin/includes/image.php' );
        $attach_data = wp_generate_attachment_metadata( $image_id, $file );
        $res1 = wp_update_attachment_metadata( $image_id, $attach_data );
        $res2 = set_post_thumbnail( $post_id, $image_id );

        return $image_id;
    } else {
        echo '<span class="error">No post is selected or image is selected</span>';
    }
}

I have already tried increasing my server execution time in cPanel (200, 600) and via .htaccess (300), but nothing works…

Why do we want a timeout on a server?

This question is quite general as I want to understand the (security) benefits of a ‘timeout’ in general.

For our Nginx proxy server, we have been using HTTP/S timeouts, and we ran into an issue where Nginx returned a timeout. We have "solved" this by simply increasing the Nginx timeout. But we keep upscaling the timeout, sometimes for a specific endpoint, and it feels like we are just pushing an underlying problem ahead of us. Seeing this happen again and again is what prompted the question: why do we even want to have timeouts?

Thinking about malicious attempts, like sending a bulk of load to the server: even if Nginx (or any "timeout manager") returns a timeout to the client, the backend server would still be processing the data anyway.

So, why would we use server timeouts, and what would be a better way to solve the issue of hitting the timeout cap every time? Would a paradigm like WebSockets, SSE, or (long-)polling resolve this?
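For context, these are the Nginx proxy directives usually involved when a 504 comes back from a proxied upstream (the location and the values here are illustrative placeholders, not a recommendation):

```nginx
location /reports/ {
    proxy_connect_timeout 5s;   # establishing the TCP connection to the upstream
    proxy_send_timeout    60s;  # max gap while writing the request to the upstream
    proxy_read_timeout    60s;  # max gap between two successive reads of the response
}
```

Note that proxy_read_timeout bounds the silence between successive reads, not the total response time — which is one reason streaming designs (SSE, chunked progress output) sidestep the problem: every chunk the upstream emits resets the clock, so only a genuinely stalled backend gets cut off.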

Timeout at verification

Hi @Sven, I see that for manual verification of URLs there is some choice over the number of threads and the number of attempts, but not the timeout?

I have now relaunched verification five times on certain links that load fine in my browser but refuse to verify in SER. I could sort them manually, but that would take a long time, and I imagine many other "useful" links are being deleted unnecessarily.

PCI Idle Session Timeout general question

Can someone help me understand how the PCI timeout rules change for an application like the Starbucks App? A user is able to keep their card open, ready for scan, for longer than 15 minutes if needed, but PCI A11y AA also requires displaying a message giving the user a chance to react and keep the session alive.

I understand and have implemented it from an e-commerce approach but am a bit confused on the e-wallet approach.

What attacks are prevented using Session Timeout or Expiry?

OWASP recommends setting session timeouts to minimal value possible, to minimize the time an attacker has to hijack the session:

Session timeout defines the action window time for a user; this window also represents the delay in which an attacker can try to steal and use an existing user session…

For this, it is best practice to:

  • Set session timeout to the minimal value possible depending on the context of the application.
  • Avoid “infinite” session timeout.
  • Prefer declarative definition of the session timeout in order to apply global timeout for all application sessions.
  • Trace session creation/destruction in order to analyse the creation trend and try to detect abnormal session-number creation (the application-profiling phase of an attack).


The most popular methods of session hijacking are session fixation, packet sniffing, XSS, and compromise via malware, but these are all real-time attacks on the current session.

Once hijacked, the attacker will be able to prevent an idle timeout (via activity), and I would consider any successful session hijack a security breach anyway (unless you want to argue how much larger than zero seconds of access an attacker can have before it actually counts as an actual breach).

If the original method of getting the session token can be repeated, this seems to further limit the usefulness of a timeout — a 5-minute window that can be repeated indefinitely is effectively not limited.

What real-world attack exists (even theoretically) where a session timeout would be an effective mitigation? Is session expiry really just a form of security-theater?
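To make the question concrete: an idle timeout bites precisely against tokens that are captured but not continuously replayed — a laptop stolen mid-session, a token recovered from an old log or packet capture. A minimal server-side sketch of that enforcement (the names and the 15-minute limit are assumptions for illustration):

```python
import time

SESSION_IDLE_LIMIT = 15 * 60  # seconds; assumed policy value

sessions = {}  # token -> last-seen timestamp (in-memory sketch)

def touch(token, now=None):
    """Record activity for a session."""
    sessions[token] = time.time() if now is None else now

def is_valid(token, now=None):
    """A session is valid only if it was seen within the idle limit."""
    now = time.time() if now is None else now
    last = sessions.get(token)
    if last is None or now - last > SESSION_IDLE_LIMIT:
        sessions.pop(token, None)  # expire: a captured-but-idle token dies here
        return False
    return True
```

An attacker who is actively replaying the token defeats this (each request counts as activity), which is consistent with the point above — the timeout's value is against *stale* tokens, and an absolute expiry rather than an idle one is what limits an active hijacker.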

How might we help customers get back on track from a connection timeout message

I’m designing ‘sad path’ scenarios for checkout, and I’m trying to help customers when a connection timeout occurs because the checkout hangs while connecting to our third-party credit card payment form.

When this happens, the credit card payment form cannot be loaded in our checkout environment.

A simple solution is to reload the page.

The UX/UI solution I’m putting forward is an alert message that appears on the page and asks the customer to reload the page.

This is my attempt at making the error message more ‘user-friendly’:

A connection error occurred

An error occurred when we were trying to connect to the system.

Please reload the page to try connecting again.

[ Reload page ] <— button

How do people feel about the above message? Any other solutions you can think of?