We have three businesses, located in different areas, with different names. They have different domains and do not cross-reference each other.
The businesses serve the same function (medical) and are branded differently; however, to make things easy we triplicated the main website. Google now omits the 2nd and 3rd businesses (the least busy) from search results. I have checked and the pages are indexed by Google, yet it omits them from the results.
Is it sufficient to change the meta tags and some of the content for them to be seen as separate sites rather than duplicates?
We have gone through and set up separate Google My Business pages etc.
In multiple instances lately, I have received the plaintext passwords I entered (not ones generated for me) emailed to me after signing up. The sites in question have been legitimate small businesses, so I suspected it was a default setting. Is this something I, as a user, should be worried about? In other words, are they not only storing my password in plaintext but also sharing it with my mail provider? Here is a link to one example screenshot, too large to fit in the post.
What if a DDoS attacker hits public websites (Google, Amazon, etc.) with some requests but spoofs the source IP to the victim's IP? The responses will then be sent to the victim's IP.
The attacker can rotate between millions of public websites so that no single site will notice anything suspicious.
This seems like an easier way than running a malware botnet. The attacker is just using the websites as a botnet. Anyone could do it with a personal computer (or a faster, higher-bandwidth AWS/Azure VM) and just 10-20 lines of code.
Why aren't DDoS attackers doing this instead of buying botnets?
By harmful I mean that they might have ads, pop-ups, or other ways of transferring malware to my phone or exploiting vulnerabilities. And by visiting I mean interacting: clicking on items found on them, playing videos on them; adult sites are one example of such websites.
Is there a sandbox or a VM on such an Android phone that might help? Or am I secure if I have a basic antivirus, NoScript and an ad blocker? Are there any you would recommend?
Many websites ask for payment by entering information from a credit card, such as a Visa card or Mastercard. Until now, I believed I should never tell anybody these numbers. So why do these websites ask for credit card details, and if (suppose) any of them has malicious intent, what can they do with these numbers?
Basically my question is: how is it ensured that they will take only the specific amount of money I consented to, and that without my consent they will not take money? Will the bank send a verification code to my phone?
I have never used online transactions, and I am very confused about the steps, dos and don'ts, etc. I have searched Google and Quora but didn't find anything helpful.
I would be thankful if anyone could explain how this specific online payment method (entering a credit/debit card number) works and how a transaction without consent is prevented, preferably via a flow chart.
Many thanks in advance.
I am a bit at a loss here…
I have read all the “no engine matches” threads here, but what I experience is different and I have no idea why.
Below I will state what I did and what happened.
– I have a list of websites previously scraped with hrefer and wanted to use them in SER
– Went to SER > options > Advanced > Tools > Import URLS (Identify platform and sort in) and imported that list
– After waiting one day the list was successfully imported, and indeed I now had many files in “C:\Users\Administrator\AppData\Roaming\GSA Search Engine Ranker\site_list-identified”, all containing the correct websites
– Now I wanted to use those identified websites for my new project in SER.
– To make things easier, I decided to only post on forums, and therefore selected only “Forum Post” under “Type of backlinks to create” in the project’s options
– Read post https://forum.gsa-online.de/discussion/155/inofficial-gsa-search-engine-ranker-faq and made sure all settings were as stated under header “How can I use GSA as posting tool for my own list?”
– When I first started my new project, a message popped up saying that no search engines were selected and that I needed to import my list
– Went to the main screen of SER, right clicked the project, Import target urls and selected “from site lists”
– There I selected only “Identified”, and on the next screen all forum engines were pre-selected, so I just clicked OK
– The program said the list was imported successfully and “please note that current URL’s will have to be proceed to the end until you see something in the log”
– Started the project, and here the strange things happened…
– I got many, many messages stating “No engine matches” and “wrong engine detected”.
This is very strange to me: during import all sites were already matched to an engine, so how is it that during posting the program suddenly reports the previously matched websites as unmatched?
– Is the program re-identifying the previously identified websites for some reason? If so, why?
– Is it a problem with the new version? The identifying was done with version 6.42 and the posting with 6.43.
– Does it have something to do with “please note that current URL’s will have to be proceed to the end until you see something in the log”? I don’t really understand what is meant by this.
Thanks for your input.
After doing some research on websites that have been taken down, like Silk Road, I have become extremely curious about the procedure the FBI follows to take a website off the Internet.
How do they do it if the website is hosted in another country, for example? Do they just contact the hosting company and blackmail them? Does the FBI “hack” the websites it cannot seize?
My question applies to both Dark Web and Clear Web websites. I understand that it is much easier for a government entity to just use the law for its purposes, but that’s not always possible…
HTTP websites are not loading when I'm using MITMf. I'm using the command: python mitmf.py --spoof --arp --gateway 10.0.0.1 -i eth0 --log-level debug --target 10.0.0.182. When I go to my target, no HTTP website will load, even though I see the request packets going through MITMf?
Hi everyone,
I'm a web developer, and I've often found myself wasting too much time checking if a responsive website displays properly on all screen sizes.
I feel we have lots of powerful and advanced applications to design and code websites, but there are very few tools to test and debug the behavior of responsive web pages.
I guess many of you have found yourselves in my shoes before.
To overcome this problem and make our work easier, I've created slashB, an application that provides…
Responsive website testing
I'm wondering if there is a JS library, script or CSS library that lets you mobile-optimise any HTML5/CSS website instantly, or at least very quickly.
It can be very time-consuming going in and adding a hundred media queries.
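For context on the kind of boilerplate being avoided: modern CSS can often replace per-width media queries with intrinsically fluid rules. A minimal sketch (the class name is hypothetical, not from any particular library) using an auto-filling grid and clamp()-based fluid type, which adapts continuously across screen sizes without any breakpoints:

```css
/* Hypothetical example: a fluid layout with no per-width media queries. */
.card-list {
  /* Column count adapts to the viewport: each card gets at least 16rem. */
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(16rem, 1fr));
  gap: 1rem;
}

h1 {
  /* Heading scales smoothly between 1.5rem and 3rem with the viewport. */
  font-size: clamp(1.5rem, 1rem + 2.5vw, 3rem);
}

img {
  /* Images shrink with their container instead of overflowing on phones. */
  max-width: 100%;
  height: auto;
}
```

This doesn't retrofit an arbitrary existing site on its own, but it shows why a one-size-fits-all "instant mobile optimisation" library is hard: the fluid rules have to match the page's actual layout structure.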