Major search engines redirect an HTTP link to a scammy pharmacy site

There is an HTTP website (not HTTPS) that works perfectly fine when its URL is typed directly into the address bar or when links to it are clicked from other websites and applications such as Reddit, Facebook, and Discord.

The exceptions are the major search engines: Google, Yahoo, and Bing. When the link is clicked from one of these, the visitor is redirected to a scammy pharmacy site under one of many different domain names. This occurs in Chrome, Firefox, and Edge, and also on Android smartphones. (Bing only does this in Chrome; it works fine in Firefox and Edge.) The issue affects multiple people on many different devices.

Interestingly, other search engines (Dogpile, Baidu, Ask, DuckDuckGo, Yandex, etc.) seem to work fine.

What could be the cause of this behavior? Do the search engines or the website need to be fixed, and how? Would converting to HTTPS help, and why?

The website in question is bluefurok.com. I am not the webmaster, but as a programmer and web developer I am curious about this issue.
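For context, behavior like this is a classic sign that the site itself has been compromised with referrer-based cloaking: malicious code (often injected into .htaccess or a PHP file) inspects the Referer header and redirects only visitors arriving from major search engines, so the owner browsing the site directly never sees it. A typical malicious .htaccess fragment looks something like this (hypothetical sketch; the redirect target is made up):

```apache
# Hypothetical referrer-based cloaking fragment of the kind malware injects
RewriteEngine On
RewriteCond %{HTTP_REFERER} (google|bing|yahoo)\. [NC]
RewriteRule ^ http://scammy-pharmacy.example/ [R=302,L]
```

If something like this is present, it is the website, not the search engines, that needs fixing; HTTPS alone would not remove injected server-side code.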

Google Domains won’t redirect `http` to `https` when Bluehost handles the DNS and website

I have a website hosted on Bluehost but using a domain from Google Domains. I am trying to force it to always redirect http://example.io -> https://example.io from the Google Domains site.

However, when I add the redirect, I get the following error:

This synthetic record has an error and will not function correctly: We had a temporary issue creating your SSL certificate. We will automatically keep trying to resolve this issue. 


We have it set up to redirect permanently using SSL.

We are using Bluehost name servers. Are we missing something here?
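One likely explanation: Google Domains synthetic records (including the HTTPS redirect and its certificate) only take effect when the domain uses Google's name servers, so with the DNS delegated to Bluehost the record cannot work. An alternative, assuming the Bluehost account runs Apache with mod_rewrite (the usual setup), is to force HTTPS in the site's .htaccess instead:

```apache
# Force HTTPS at the web host rather than the registrar
# (sketch; assumes Apache with mod_rewrite enabled)
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```

This keeps the redirect on the same server that actually holds the SSL certificate.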

.ovpn files pulled from my host over HTTPS arrive containing my WordPress site's code instead of the VPN config

I have a VPN app that pulls .ovpn files from a folder on my website, /home/mywebsitename/public_html/remoteovpn. I add the .ovpn files to this folder and my app fetches them over an HTTPS request. If you open the file links in a browser, you can see and read the full config with no problem, and even download the files. But when my app pulls them, it gets the file names and creates the .ovpn files, yet fills them with my website's WordPress theme output instead of the .ovpn config data. I can't figure out why it is doing this. Any help would be appreciated; I've been trying for two days.

VPS with CentOS 7 and cPanel, running a WordPress theme.
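One thing worth ruling out: WordPress's catch-all rewrite rule sends any request it does not recognize as an existing file to index.php, which returns theme HTML. If the app requests the files with a slightly different URL than the browser does (different case, missing extension, extra slash), that would produce exactly this symptom. Assuming Apache with the standard WordPress .htaccess in public_html, an explicit bypass rule placed above the WordPress block makes sure that folder is always served directly:

```apache
# Serve anything under /remoteovpn/ as plain files,
# never through WordPress (place before the WordPress rules)
RewriteEngine On
RewriteRule ^remoteovpn/ - [L]
```

Comparing the server access log entries for a browser request versus an app request (exact path and response size) would confirm or refute this theory.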

Can I use scripts to block other scripts, or to block/rewrite HTTP requests?

I’m building a personal website using a premade enterprise-class CMS because it has both the blogging and wiki/docs parts in one package. It’s not as gorgeous as a WordPress site can get, but it has a ton of management tools and then some. Plus, I’ve already invested in it and got it looking pretty good for an enterprise CMS; honestly, it isn’t that bad even compared against the blogging CMSes.

There’s a big issue with it, though: analytics. They are disabled on the backend, but the code still loads on every page, and I found out that HTTP POST requests are made to a REST endpoint. Fortunately, they all stay within the domain (although this might be because my reverse proxy, HAProxy, injects Content-Security-Policy headers, so no requests outside my domains are allowed). In that same proxy those REST calls are blocked, so they never reach the server; and finally, the server itself is blocked from connecting to the Internet on its own, so it can never phone home to upload anything.

Only after doing all of this do I feel confident about visitor (and my own) privacy, and I would leave it at that, except that those REST calls have the word "analytics" right in the URL, so privacy tools like uBlock Origin flag them on a site with an otherwise perfect privacy score.

The CMS lets me put some code into certain sections of the page. I’m already using code placed at the end of the body section to hide the login section up in the header, which isn’t needed on a personal site. It’s something like:

<script> jQuery('#sectionid').hide(); </script> 

So I’m thinking about using something like that to either block loading the analytics script’s module (I guess that’s what it’s called), or to perform a function similar to a CSP and forbid the page from making HTTP requests to that address, so uBlock Origin won’t flag my site. I tried blocking the script from being requested altogether, but it’s in some form of multiplexed request with other scripts (as you may tell by now, I know nothing about code): they are loaded lumped together in batch.js files, and blocking those breaks the site. I found out about all of this (and the concept of minification) after a couple of hours viewing logs and analyzing the code with the developer tools in different browsers. I didn’t fix a thing, but at least I didn’t break things*, and I got an idea of how to proceed.

I also found this resource://gre thing:


…and I am hoping that "gre" doesn’t mean what it means in the networking world, y’know, a tunnel, because then I’d have to dump the CMS and start looking again. I’ll leave that for later, though.

Is there some code to block other code, or to block/rewrite requests? I have other servers from which I can easily serve the code if it can’t be put inline. Any suggestion is welcome.

BTW, those last sentences sound a little like dev talk, at least to me, but it’s only what I’ve learned from using a proxy; I really know no code.

*: actually I did break some stuff, but thankfully virtualization saved me: I snapshotted back in time.
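A sketch of the kind of code the question asks about, assuming the analytics endpoint URL contains "/analytics" (the real path will differ; adjust the pattern). It wraps fetch and XMLHttpRequest so that matching requests are dropped in the page before the browser ever sends them, which also means privacy tools that watch the network layer never see them. It could be pasted into the same end-of-body section as the jQuery snippet above:

```javascript
// Sketch: drop client-side requests whose URL matches an analytics pattern.
// "/analytics" is an assumption; replace it with the real endpoint path.
(function () {
  var BLOCKED = /\/analytics/;
  var g = typeof window !== 'undefined' ? window : globalThis;

  // Wrap fetch: answer blocked requests with an empty 204 instead of sending them.
  var realFetch = g.fetch && g.fetch.bind(g);
  if (realFetch) {
    g.fetch = function (resource, init) {
      var url = typeof resource === 'string' ? resource : resource.url;
      if (BLOCKED.test(url)) {
        return Promise.resolve(new Response(null, { status: 204 }));
      }
      return realFetch(resource, init);
    };
  }

  // Wrap XMLHttpRequest: remember blocked URLs at open(), drop them at send().
  if (g.XMLHttpRequest) {
    var realOpen = g.XMLHttpRequest.prototype.open;
    g.XMLHttpRequest.prototype.open = function (method, url) {
      this._blocked = BLOCKED.test(String(url));
      return realOpen.apply(this, arguments);
    };
    var realSend = g.XMLHttpRequest.prototype.send;
    g.XMLHttpRequest.prototype.send = function (body) {
      if (this._blocked) return; // silently drop the request
      return realSend.call(this, body);
    };
  }
})();
```

One caveat: if the page code awaits a real JSON response from the analytics call, a 204 stub may surface errors in the console, so it is worth testing after installing it.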

Redirect to subdomain not working when specifying http scheme

I’ve recently uploaded a website on a domain. Using the domain registrar (Namecheap), I’ve also applied a 301 redirect rule so that going to "@" (for example, example.com) redirects to www.example.com.

However, I noticed that if I specify the http scheme, like so: http://example.com, I get redirected to https://example.com and then get an ERR_CONNECTION_REFUSED error.

What’s the reason for that, and how can it be fixed? Am I doing something wrong?

I’ll mention that at first the 301 rule did not work when I set the target to https://www.example.com; it only worked after replacing the https with plain http (although when visiting the site, I can see in the URL bar that it is in fact served over https).

I am receiving a pluggable.php warning on my http:// page

I just recently shared a link to my site using http://, but instead of redirecting, it just displays this:

Warning: Cannot modify header information – headers already sent by (output started at /home/thecmltm/public_html/index.php:1) in /home/thecmltm/public_html/wp-includes/pluggable.php on line 1281

Warning: Cannot modify header information – headers already sent by (output started at /home/thecmltm/public_html/index.php:1) in /home/thecmltm/public_html/wp-includes/pluggable.php on line 1284

I have searched all over the web, but everything talks about functions.php or wp-config.php, and that is not my problem. I have tried editing index.php, but nothing seems wrong with it.

Please help me. Thanks in advance!
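Since both warnings say the output started at index.php line 1, a common culprit is something invisible before the opening `<?php` tag, typically a UTF-8 byte-order mark that an editor saved into the file; PHP sends it as output, so headers can no longer be modified. A quick check, sketched in Python (the path is an assumption; point it at the real index.php):

```python
# A UTF-8 byte-order mark (EF BB BF) before "<?php" counts as output,
# which triggers "headers already sent ... output started at index.php:1".

def has_bom(path):
    """Return True if the file starts with a UTF-8 byte-order mark."""
    with open(path, 'rb') as f:
        return f.read(3) == b'\xef\xbb\xbf'

# Usage sketch (hypothetical path; adjust to your install):
# if has_bom('/home/thecmltm/public_html/index.php'):
#     print('BOM found: re-save the file as UTF-8 without BOM')
```

If a BOM (or leading whitespace) turns up, re-saving the file as plain UTF-8 without BOM usually clears both warnings.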

Can a webpage differ in content if ‘http’ is changed to ‘https’ or if ‘www.’ is added after ‘http://’ (or ‘https://’)?

When I use the Python newspaper3k package and run the code

import newspaper

paper = newspaper.build('http://abcnews.com', memoize_articles=False)
for url in paper.article_urls():
    print(url)

I get a list of URLs for articles that I can download, in which both of these URLs appear:

  • http://abcnews.go.com/Health/coronavirus-transferred-animals-humans-scientists-answer/story?id=73055380
  • https://abcnews.go.com/Health/coronavirus-transferred-animals-humans-scientists-answer/story?id=73055380

As can be seen, the only difference between the two URLs is the s in https.

The question is, can the webpage content differ simply because an s is added to http? If I scrape a news source (in this case http://abcnews.com), do I need to download both articles to be sure I don’t miss any article, or are they guaranteed to have the same content so that I can download only one of them?

I have also noticed that some URLs also are duplicated by adding www. after the http:// (or https://). I have the same question here: Can this small change cause the webpage content to differ, and is this something I should take into account or can I simply ignore one of these two URLs?
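In principle a server can return different content for http vs. https or www vs. non-www (they are distinct URLs), but in practice most news sites serve the same page and redirect between the variants. If, after spot-checking, one decides to treat such variants as duplicates, they can be collapsed by normalizing each URL to a single canonical form; a sketch (the URLs here are shortened examples based on the ones above):

```python
from urllib.parse import urlsplit, urlunsplit

def canonical(url):
    """Map http/https and www/non-www variants of a URL to one key.
    Assumes the site serves the same content for all variants, which
    is common but not guaranteed; spot-check before relying on it."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    netloc = netloc.lower()
    if netloc.startswith('www.'):
        netloc = netloc[4:]
    return urlunsplit(('https', netloc, path, query, ''))

urls = [
    'http://abcnews.go.com/Health/story?id=73055380',
    'https://abcnews.go.com/Health/story?id=73055380',
    'https://www.abcnews.go.com/Health/story?id=73055380',
]
unique = {canonical(u) for u in urls}  # collapses to a single entry
```

Deduplicating on the canonical form means each article is downloaded only once, whichever variant the crawler happened to discover.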

HTTP Request Smuggling Basics

I am currently trying to learn about the HTTP request smuggling vulnerability to further enhance my pen-testing skills. I have watched a couple of videos on YouTube and read articles online about it, but I still have a couple of questions in mind:

  • What are the attack vectors of HTTP request smuggling (where should I look)?
  • What is the main way to provide a PoC to companies with high traffic? I know that HTTP request smuggling could possibly steal people’s cookies; can this be used for the PoC, or is this illegal?
  • Can this be chained together with other vulnerabilities (e.g. self-XSS & CSRF)?

Thank you everyone!
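As a pointer for the first question: the classic attack vector is a front-end proxy and a back-end server disagreeing about where a request body ends, for example when the front end honors Content-Length while the back end honors Transfer-Encoding (the CL.TE variant). A minimal textbook request (hypothetical host) looks like:

```http
POST / HTTP/1.1
Host: vulnerable.example
Content-Length: 13
Transfer-Encoding: chunked

0

SMUGGLED
```

The front end forwards all 13 body bytes; the back end, parsing chunked encoding, stops at the terminating 0 chunk and treats "SMUGGLED" as the start of the next request on the reused connection. The TE.CL and TE.TE variants follow the same idea with the roles of the two headers swapped or obfuscated.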