WordPress PHP custom function is causing 500 Internal Server Error Connection Timeout

I wrote a custom script to insert a post into WordPress and upload 3 images to the WP uploads directory.

To write the post I use the WP function wp_insert_post( $wp_post_array, true );. Inside the script at various stages I also use wp_get_attachment_image_src( $image_id, $size )[0];, wp_get_attachment_metadata( $image_id ); and wp_get_attachment_image( $image_id, 'large', false, $image_attr );, but to upload the images and create their metadata I wrote the custom function below…

I must have messed up somewhere, because I get a 500 Connection Timeout error when I run this code (even though there are only 3 images, each less than 1 MB in size).

Can somebody spot what I am doing wrong? Thank you for your eyes and experience.

function insert_WP_Images_Data( $post_id, $image_url ) {
    global $writer_WP_id;
    $upload_dir = wp_upload_dir();
    if ( isset( $image_url ) && isset( $post_id ) ) {
        $filename = basename( $image_url );
        if ( wp_mkdir_p( $upload_dir['path'] ) )
            $file = $upload_dir['path'] . '/' . $filename;
        else
            $file = $upload_dir['basedir'] . '/' . $filename;

        $image_data = file_get_contents( $image_url );
        file_put_contents( $file, $image_data );

        $wp_filetype = wp_check_filetype( $filename, null );
        $attachment = array(
            'post_author'    => $writer_WP_id,
            'post_content'   => '',
            'post_title'     => $_SESSION['artist'],
            'post_status'    => 'inherit',
            'post_name'      => pathinfo( $image_url )['filename'],
            'post_mime_type' => $wp_filetype['type'],
            'post_parent'    => $post_id,
            'guid'           => $upload_dir['url'] . '/' . $filename
        );
        // 'post_title' => sanitize_file_name( $filename ),

        $image_id = wp_insert_attachment( $attachment, $file, $post_id );

        require_once( ABSPATH . 'wp-admin/includes/image.php' );
        $attach_data = wp_generate_attachment_metadata( $image_id, $file );
        $res1 = wp_update_attachment_metadata( $image_id, $attach_data );
        $res2 = set_post_thumbnail( $post_id, $image_id );

        return $image_id;
    } else {
        echo '<span class="error">No post or image selected</span>';
    }
}

I have already tried increasing my server execution time in cPanel (200, 600) and via .htaccess (300), but nothing works…
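One thing worth checking before raising execution limits any further: the bare file_get_contents( $image_url ) call above has no timeout, so if the remote host serving the images stalls, the script hangs until the server kills it. A minimal sketch of a download with an explicit timeout, assuming $image_url is a full HTTP(S) URL as in the question (fetch_image_with_timeout is a hypothetical helper name, not a WP function):

```php
<?php
// Sketch: fetch a remote image with an explicit timeout instead of a
// bare file_get_contents() call, which can block indefinitely if the
// remote host stalls. Returns null on failure so the caller can log
// the URL and skip that image instead of timing out the whole request.
function fetch_image_with_timeout( $image_url, $timeout = 10 ) {
    $context = stream_context_create( array(
        'http' => array( 'timeout' => $timeout ), // seconds
    ) );
    $data = @file_get_contents( $image_url, false, $context );
    return ( $data === false ) ? null : $data;
}
```

WordPress also ships its own download_url() helper (in wp-admin/includes/file.php) that downloads to a temp file with a timeout and returns a WP_Error on failure, which may be a better fit inside WP code.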

Why do we want a timeout on a server?

This question is quite general as I want to understand the (security) benefits of a ‘timeout’ in general.

For our Nginx proxy server, we have been using HTTP/S timeouts, and we ran into the issue where the Nginx server returned a timeout. We have solved this by simply increasing the Nginx timeout. But by continually raising the timeout, sometimes for a specific endpoint, it feels like we keep pushing an underlying problem ahead of us. We see this problem again and again, which led me to ask: why do we even want timeouts?

Thinking about malicious attempts, such as sending a large load to the server: even if Nginx (or any 'timeout manager') returns a timeout to the client, the upstream server would still be processing the data.

So, why would we use server timeouts, and what would be a better way to solve the issue of hitting the timeout cap every time? Would a paradigm like WebSockets, SSE or (long) polling resolve this?
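For concreteness, the kind of per-endpoint tuning described above looks roughly like this in Nginx. The directives (proxy_connect_timeout, proxy_read_timeout from ngx_http_proxy_module) are real, but the location names and values are illustrative assumptions, not recommendations:

```nginx
# Give a known-slow endpoint a larger, explicit budget instead of
# raising the global timeout for everything.
location /reports/ {
    proxy_pass            http://backend;
    proxy_connect_timeout 5s;    # fail fast if the upstream is down
    proxy_read_timeout    120s;  # deliberate, documented exception
}

location / {
    proxy_pass            http://backend;
    proxy_read_timeout    30s;   # everything else keeps a tight default
}
```

The point of a tight default is exactly the concern raised above: the timeout bounds how long a slow or malicious client/upstream can hold a connection and its worker resources, so exceptions should be scoped, not global.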

Timeout at verification

Hi @Sven, I see that for manual verification of URLs there are options for the number of threads and the number of retries, but not for the timeout?

I have relaunched the verification five times on certain links that load fine in my browser but will not verify in SER. I could do it manually, but the sorting would take a long time, and I imagine many other “useful” links are being deleted unnecessarily.

PCI Idle Session Timeout general question

Can someone help me understand how the PCI timeout rules change for an application like the Starbucks App? A user is able to keep their card open and ready to scan for longer than 15 minutes if needed, but PCI A11y AA also requires displaying a message giving the user a chance to react and keep the session alive.

I understand and have implemented it from an e-commerce approach but am a bit confused on the e-wallet approach.

What attacks are prevented using Session Timeout or Expiry?

OWASP recommends setting session timeouts to the minimal value possible, to minimize the time an attacker has to hijack the session:

Session timeout defines the action window time for a user; this window also represents the delay in which an attacker can try to steal and use an existing user session…

For this, it is best practice to:

  • Set the session timeout to the minimal value possible depending on the context of the application.
  • Avoid “infinite” session timeouts.
  • Prefer a declarative definition of the session timeout in order to apply a global timeout for all application sessions.
  • Trace session creation/destruction in order to analyse creation trends and try to detect abnormal session creation counts (application profiling phase of an attack).

(Source)
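As a concrete illustration of the “declarative definition” bullet, a global session timeout can be set once in PHP configuration rather than per page. This is a minimal sketch; the 15-minute value is an illustrative assumption, and real applications would also enforce the idle limit server-side by checking a last-activity timestamp:

```php
<?php
// Declarative, application-wide session limits (sketch).
ini_set( 'session.gc_maxlifetime', 900 ); // server-side idle limit: 15 min
ini_set( 'session.cookie_lifetime', 0 );  // cookie dies with the browser
session_start();
```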

The most popular methods of session hijacking are session fixation, packet sniffing, XSS, and compromise via malware, but these are all real-time attacks on the current session.

Once hijacked, the attacker will be able to prevent an idle timeout (via activity), and I would consider any successful session hijack a security breach anyway (unless you want to argue how much larger than zero seconds of access an attacker can have before it actually counts as an actual breach).

If the original method of getting the session token can be repeated, this seems to further limit the usefulness of a timeout — a 5-minute window that can be repeated indefinitely is effectively not limited.

What real-world attack exists (even theoretically) where a session timeout would be an effective mitigation? Is session expiry really just a form of security-theater?

How might we help customers get back on track from a connection timeout message

I’m designing ‘sad path’ scenarios for checkout, and I’m trying to design a way to help customers when a connection timeout occurs because the checkout hangs while trying to connect to our third-party credit card payment form.

When this happens, the credit card payment form cannot be loaded in our checkout environment.

A simple solution is to reload the page.

The UX/UI solution I’m putting forward is an alert message that appears on the page and asks the customer to reload the page.

This is my attempt at making the error message more ‘user-friendly’:


A connection error occurred

An error occurred when we were trying to connect to the system.

Please reload the page to try connecting again.

[ Reload page ] <— button


How do people feel about the above message? Any other solutions you can think of?

Thanks.

cURL timeout error 28 in Site Health and Sucuri SiteCheck

I run a server hosting multiple WordPress installations with the iThemes Security Pro plugin installed. One of the things that this plugin does is it uses Sucuri SiteCheck to scan the site for vulnerabilities: https://sitecheck.sucuri.net/

Recently, SiteCheck has been failing on all of my sites, reporting the following error:

Unable to properly scan your site. Timeout reached 

Coincidentally, the new Site Health WordPress Tool has also been reporting the following error on all my sites:

The REST API is one way WordPress, and other applications, communicate with the server. One example is the block editor screen, which relies on this to display, and save, your posts and pages.

The REST API request failed due to an error.
Error: [] cURL error 28: Connection timed out after 10000 milliseconds

I suspect that the issues are related, but I don’t know where to start fixing this. I have Fail2Ban and ModSecurity enabled on my server and in Apache, respectively, but the problem persists even when I turn those services off.

I would appreciate it if someone could help pinpoint possible issues. SiteCheck has always worked on my server without a hitch.
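One way to narrow this down is to reproduce the Site Health loopback request from the server itself, outside WordPress. This is a sketch only: check_loopback is a hypothetical helper (it assumes the php-curl extension), and the URL must be replaced with your own site's REST endpoint. If this still times out with Fail2Ban and ModSecurity disabled, the culprit is more likely host-level networking or DNS (the server failing to reach its own public hostname) than either plugin:

```php
<?php
// Sketch: issue the same kind of loopback request Site Health makes,
// with an explicit timeout, and surface the raw cURL error.
function check_loopback( $url, $timeout = 10 ) {
    $ch = curl_init( $url );
    curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
    curl_setopt( $ch, CURLOPT_TIMEOUT, $timeout );
    $body  = curl_exec( $ch );
    $error = curl_error( $ch );
    curl_close( $ch );
    return array( $body !== false, $error ); // [success flag, error text]
}

// Usage (replace with your site):
// list( $ok, $err ) = check_loopback( 'https://example.com/wp-json/' );
```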

Ubuntu 18 gets timeouts on all my SMB mounts, but not Ubuntu 16.04 LTS, Fedora 25, or Windows 7 and 10

I have a couple of SMB mounts on several different Linux machines. Most of them are hosted on ClearOS 6 or 7 machines, and I have never had problems mounting these SMB shares on any earlier Ubuntu version, Fedora, or Windows before, but Ubuntu 18.04 always gets a timeout when transferring large amounts of data.

Ubuntu 16.04 worked like a charm for years. I do have one Ubuntu 18.04 machine that works great with SMB, but that one was upgraded from 16.04; the clean-installed Ubuntu 18.04 machines do time out.

Just checked: the Ubuntu 18 machine that has no problems with my SMB servers runs an older kernel than the clean-installed ones. Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-58-generic x86_64)
vs. Ubuntu 18.04.3 LTS (GNU/Linux 5.0.0-29-generic x86_64)
Ubuntu 18.04.3 LTS (GNU/Linux 5.0.0-25-generic x86_64)

I know they aren’t fully updated, but my SMB issues have been present the whole time on a clean-installed Ubuntu 18.04.

Does anyone have any idea why my clean-installed Ubuntu 18.04 machines time out, compared to Ubuntu 16.04, Ubuntu 18.04 upgraded from 16.04, Fedora, or Windows?

Different tweaks I have tried on the Samba servers under [global], to no avail:

socket options = IPTOS_LOWDELAY TCP_NODELAY
socket options = IPTOS_LOWDELAY TCP_NODELAY SO_KEEPALIVE
socket options = IPTOS_LOWDELAY TCP_NODELAY IPTOS_THROUGHPUT
socket options = IPTOS_LOWDELAY TCP_NODELAY IPTOS_THROUGHPUT SO_KEEPALIVE
socket options = IPTOS_LOWDELAY TCP_NODELAY SO_RCVBUF=16384 SO_SNDBUF=16384
socket options = IPTOS_LOWDELAY TCP_NODELAY SO_KEEPALIVE SO_RCVBUF=16384 SO_SNDBUF=16384
socket options = IPTOS_LOWDELAY TCP_NODELAY IPTOS_THROUGHPUT SO_RCVBUF=16384 SO_SNDBUF=16384
socket options = IPTOS_LOWDELAY TCP_NODELAY IPTOS_THROUGHPUT SO_KEEPALIVE SO_RCVBUF=16384 SO_SNDBUF=16384
socket options = IPTOS_LOWDELAY TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
socket options = IPTOS_LOWDELAY TCP_NODELAY SO_KEEPALIVE SO_RCVBUF=65536 SO_SNDBUF=65536
socket options = IPTOS_LOWDELAY TCP_NODELAY IPTOS_THROUGHPUT SO_RCVBUF=65536 SO_SNDBUF=65536
socket options = IPTOS_LOWDELAY TCP_NODELAY IPTOS_THROUGHPUT SO_KEEPALIVE SO_RCVBUF=65536 SO_SNDBUF=65536

Does anyone have any ideas why the timeout only happens on these
Ubuntu 18.04.3 LTS (GNU/Linux 5.0.0-29-generic x86_64)
Ubuntu 18.04.3 LTS (GNU/Linux 5.0.0-25-generic x86_64)
machines?

Edit:
From the log

[2019/09/23 15:22:54.310475,  1] smbd/process.c:457(receive_smb_talloc)
  receive_smb_raw_talloc failed for client 192.168.10.70 read error = NT_STATUS_CONNECTION_RESET.
[2019/09/23 15:22:54.370419,  1] smbd/service.c:1378(close_cnum)
  buntu (192.168.10.70) closed connection to service

And I have this under [global] too:

client min protocol = SMB1
client max protocol = SMB3

I know SMB1 isn’t safe anymore, but this is on a local LAN only, used at the moment to support older software and phones.

Lock screen screen timeout

When I wake the computer from sleep or manually lock the screen with Super+L, the monitor turns off very quickly (~5–10 seconds). Especially after waking, this is annoying, as I need to start entering my password almost immediately to keep the screen from turning off. How can I set this “screen off while on the lock screen” timeout?

I’m on Ubuntu 18.04.2 LTS.