No Client Hello w/ SNI when accessing website’s subdomain via link

I noticed this while testing SNI-based HTTPS filtering for fun.

mail.yahoo.com is blocked when accessed directly. It is not blocked if you log in to your Yahoo account and access mail.yahoo.com via the “Mail” link. I ran a packet capture and saw that there are no Client Hello messages with the mail.yahoo.com name in the SNI extension field when I click the “Mail” link.

My assumption is that the client is somehow re-using the same connection, since the *.yahoo.com certificate is valid for both domains. Is anybody with some deeper knowledge of TLS able to clarify what’s going on, or able to point me in the direction of some documentation?

Also, if that’s the case, then why does Chrome send Client Hello messages with Google subdomains in the SNI field when I attempt similar tests via accounts.google.com?

NOTE: I’m blocking UDP/80 and UDP/443, so QUIC should not be influencing my results. Also, I use deep inspection in my day-to-day, so please no responses telling me to stop using SNI-filtering.
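
For anyone who wants to poke at the certificate side of that assumption, here is a quick sketch (Python standard library only; the hostnames are just the ones from above, so treat it as illustrative). If the SANs on the certificate served for www.yahoo.com also cover mail.yahoo.com, an HTTP/2 client that already has a yahoo.com connection to the same address is allowed to coalesce mail.yahoo.com onto it (RFC 7540 §9.1.1), so it never sends a fresh Client Hello and never exposes that name in SNI:

    import socket
    import ssl

    # Sketch: fetch the certificate presented for www.yahoo.com, list its DNS
    # SANs, then check whether mail.yahoo.com is covered (directly or via a
    # *.yahoo.com wildcard). Hostnames are the ones from the question.
    def cert_dns_sans(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        return [name for kind, name in cert.get("subjectAltName", ()) if kind == "DNS"]

    sans = cert_dns_sans("www.yahoo.com")
    print(sans)
    print("mail.yahoo.com covered:", any(s in ("mail.yahoo.com", "*.yahoo.com") for s in sans))

If both names are covered and DNS points them at the same endpoint, the browser can keep riding the existing TLS session for mail.yahoo.com, which matches what the capture shows; Chrome behaving differently against accounts.google.com may simply mean Google’s certificate/IP layout doesn’t satisfy those coalescing conditions there.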

Build beautiful websites even if you don’t know design

Hello webmasters,

Today I have written a very informative post for web developers who struggle to build beautiful websites. I used to struggle too, but after I learned these concepts my designs improved a lot. Hope you find it useful …

http://www.farrisfahad.com/post/how-to-design-a-beautiful-website-as-a-web-developer

How to credit a website’s designers and developers in schema.org structured data

Our web dev agency is working with a design agency to build a website for a client. I want to make it clear to Google that our client owns the site, but that we and the design agency made it. So far, here is what I have:

<script type="application/ld+json">{     "@type":"Organization",     "name":"Our Client",     "@id":"/#Organization",     "details":"checked against google structured data testing tool",     "@context":"https://schema.org" }</script><script type="application/ld+json">{     "@type":"WebSite",     "@id":"/#WebSite",     "details":"checked against google structured data testing tool",     "sourceOrganization":{         "@id":"/#Organization"     },     "creator":[         {             "@type":"Organization",             "name":"Web Dev Agency",             "@id":"web-dev-agency.com/#Organization"         },         {             "@type":"Organization",             "name":"Design Studio",             "@id":"design-studio.com/#Organization"         }     ],     "@context":"https://schema.org" }</script> 

and then objects on the page are linked by isPartOf to a WebPage, which similarly links to the WebSite itself.

First off, does this make sense? I’m still figuring out structured data and haven’t been able to find examples of this particular use case, but the structured data testing tool is giving me the OK.

Is there a better way to show that our client owns the website and is responsible for its day-to-day running? I’ve also considered the producer and publisher properties, but nothing feels quite right for this relationship.

I’d like to credit individual designers and developers – would it be better to have the website creator objects as Persons, pointing to unique @ids, or have them as members of the creator organisations as they stand?
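
For comparison, here is a rough sketch of the Person-based variant (the names and @id URLs below are placeholders I made up, and I’ve generated the JSON-LD from Python just to keep it valid), where each Person creator gets a unique @id and points back to their agency via memberOf:

    import json

    # Placeholder people and @ids, purely to illustrate the shape: each Person
    # creator has a unique @id and is linked to their agency via memberOf.
    website = {
        "@context": "https://schema.org",
        "@type": "WebSite",
        "@id": "/#WebSite",
        "sourceOrganization": {"@id": "/#Organization"},
        "creator": [
            {
                "@type": "Person",
                "name": "Jane Developer",
                "@id": "https://web-dev-agency.com/#jane",
                "memberOf": {"@id": "https://web-dev-agency.com/#Organization"},
            },
            {
                "@type": "Person",
                "name": "John Designer",
                "@id": "https://design-studio.com/#john",
                "memberOf": {"@id": "https://design-studio.com/#Organization"},
            },
        ],
    }

    # Ready to drop into a <script type="application/ld+json"> tag.
    print(json.dumps(website, indent=2))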

Are there any apps / websites that let you share homebrew content with players? [closed]

As in the title, are there any apps / websites that let you share homebrew content with players during a live gaming session?

For example, I’ve heard that DnDBeyond lets you share WotC content with players. And I’ve used Roll20 which lets you share content in game.

But I was looking for something that is not Roll20 (i.e. aimed at live play) and not DnDBeyond (which is paid for and seems to be only WotC).

Any suggestions?

How are websites actually mitigating BREACH? (HTTPS + compression)

After reading some popular questions and answers on this website about BREACH, the only advice seems to be: don’t compress anything that might contain secrets (including CSRF tokens). However, that doesn’t sound like great advice. Most websites actually compress everything, so I wonder what exactly they are doing to prevent BREACH. I just checked the page with the form for changing your password here on StackExchange, and it’s compressed. It looks like everything is compressed on Google too, and on a lot of other important websites that are supposed to care about security. So what are they doing to prevent BREACH?

Here’s a list of possible solutions I’ve been able to gather:

  • Disable compression completely. This means wasting bandwidth, and no one seems to be doing this.
  • Only compress static resources like CSS and JS. Good idea; it’s the quickest solution to implement, and it’s what I plan to do on a few websites that I need to optimize.
  • Check referrers and avoid compression whenever the request comes from an unauthorized website. Interesting idea, but it almost sounds like a “dirty trick” and it’s far from perfect (some clients suppress referrers, all traffic coming from other websites and search engines will end up loading uncompressed pages, etc.).
  • Rate-limit requests. This is definitely implemented by Google, since if you click on too many links too fast you might see a CAPTCHA (it has happened to me a few times while checking a website’s position in the SERPs; I was literally behaving like a bot). But are websites really relying on this to mitigate BREACH? And is it even reliable? What is a sensible and effective limit to set, for example?
  • Use CSRF tokens in HTTP headers instead of the body of the page (a rough sketch of this follows the list). I haven’t noticed anything like this on StackExchange, but Google seems to have interesting HTTP headers that look like tokens. I guess this will really mitigate the issue, provided the tokens are always checked (even just to display information, not only to change it). I guess this is the perfect solution, but it’s the hardest to implement unless you do it from scratch (it would require rewriting several parts of your application).
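
For the last option, here is a minimal sketch of what I mean, using Flask purely as an example framework (the header name and the session-based check are my own assumptions, not something I’ve observed on StackExchange or Google): the token only ever travels in headers, so it never sits inside the compressed HTML body that BREACH probes.

    import secrets
    from flask import Flask, abort, request, session

    app = Flask(__name__)
    app.secret_key = "replace-with-a-real-secret"  # placeholder

    @app.before_request
    def require_csrf_header():
        # State-changing requests must echo the token in a header, so the
        # secret is never mixed into the compressed response body.
        if request.method in ("POST", "PUT", "PATCH", "DELETE"):
            sent = request.headers.get("X-CSRF-Token", "")
            expected = session.get("csrf_token", "")
            if not expected or not secrets.compare_digest(sent, expected):
                abort(403)

    @app.after_request
    def issue_csrf_header(response):
        # Hand the token to the client out-of-band (a response header), not in
        # the page body; client-side JavaScript sends it back on each request.
        if "csrf_token" not in session:
            session["csrf_token"] = secrets.token_urlsafe(32)
        response.headers["X-CSRF-Token"] = session["csrf_token"]
        return response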

So the questions are: are the above points valid? Are there any other options? And what are the websites that follow best practices actually doing?

It’s been a while! Looking to automate a specific list of websites, not platforms – any GSA software?

Hello All.

So it’s been 5 years since I was active on here!

I am no longer building a quarter of a million links a day, or breaking my brain writing SVM captcha breakers, but am still very much into SEO.

I am now looking for a tool that can be used to automate custom sites, primarily for building business citations.
I racked my brain for the tools I used back in the blackhat days, but most of them are dead (what happened to Sick Submitter?).

Then I remembered that Sven was working on a platform trainer back in 2013 – does that exist now?
If it does, is there a tutorial anywhere?

If not, is there any other GSA software, or a third-party tool, that people can recommend?

I did waste a bit of time trying to write something myself with headless Chrome, but I would rather just pay for something that works and get on with the job in hand.
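
For context, my headless attempt was roughly along these lines (shown here with Playwright driving headless Chromium; the directory URL and field selectors are invented, since every citation site needs its own mapping, which is exactly the tedium I would rather pay to avoid):

    from playwright.sync_api import sync_playwright

    # Made-up directory URL and selectors: every citation site needs its own
    # field mapping, which is the tedious per-site work.
    BUSINESS = {"name": "Example Ltd", "phone": "01234 567890", "city": "London"}

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example-directory.invalid/submit")
        page.fill("#business-name", BUSINESS["name"])
        page.fill("#phone", BUSINESS["phone"])
        page.fill("#city", BUSINESS["city"])
        page.click("button[type=submit]")
        browser.close()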

A quick note that I am not looking for a service like Whitespark or Marketeers Centre – I use them already, but I have a slightly blackhat idea brewing that will require a volume/cost ratio that only software can provide.

Cheers

Python websites Repl.it and Glot.io – any known malicious activity by these web services?

  1. Python websites Repl.it and Glot.io – are they both considered secure in the programming world? Are there any known security issues with either of them?

  2. And if you run Python code within those two web services, is there any technical way that your local operating system could be infected, or is everything, by design, isolated from your own system when you run code via those websites?

Just want to make sure they are totally safe to use.

Thanks

Effectiveness and ethics of flooding phishing websites with fake data as a countermeasure

Today, I got a phishing email impersonating a large bank. To my surprise, the link in the email pointed to a rather sophisticated phishing website that could potentially claim many victims. The bank has already been notified and will probably try to take down the website and raise awareness of this phish among its customers. This made me wonder what one could do for people who had already fallen for it.

The first thing I thought of was flooding the phishing website with potentially valid data (not random data). It is pretty easy to automate, and overnight the number of invalid entries in the database would come to dwarf the number of valid ones.

Of course, threat actors will try to automate their validation process as much as possible as well. However, I could offer the bank access to my fake dataset to cross-reference against invalid logins. Besides that, maybe the fact that lots of invalid login attempts have to be made to find the valid ones will be enough to trigger fraud detection at the bank.
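
To make “easy to automate” concrete, the kind of thing I have in mind is sketched below; the form URL and field names are invented, the pacing and value formats are guesses, and any real run would only happen in coordination with the bank, keeping a copy of every submitted value for later cross-referencing:

    import random
    import string
    import time

    import requests

    # Invented endpoint and field names: a real phishing kit would need its
    # own mapping, and the bank would get a copy of everything submitted.
    PHISHING_FORM_URL = "https://phishing-site.invalid/login"

    def plausible_credentials():
        # "Potentially valid" rather than random: values shaped like what a
        # real customer would type, so trivial format checks won't filter them.
        account = "".join(random.choices(string.digits, k=10))
        password = "".join(random.choices(string.ascii_letters + string.digits, k=12))
        return {"account_number": account, "password": password}

    submitted = []
    for _ in range(100):
        creds = plausible_credentials()
        submitted.append(creds)           # dataset to hand over to the bank
        requests.post(PHISHING_FORM_URL, data=creds, timeout=10)
        time.sleep(random.uniform(1, 5))  # pace roughly like a human victim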

At first glance, it seems as if this method could be effective, but I was not able to find any documentation regarding the use of such a method. This implies that there are likely some valid reasons against it.

This leads me to the following questions: What would be the drawbacks of using the described method to remediate existing phishing attacks in terms of ethics and effectiveness? Why would a company that is often impersonated in phishing attacks use it or why not?

Inefficiency of search algorithms for intranet or corporate websites caused by poor design and/or implementation

I noticed recently that some of the search features on corporate websites and intranets seem to have implemented some of the search algorithms that are commonly associated with Facebook Graph Search or Google’s SEO ranked search results.

This is commonly seen when a user enters a very specific keyword but the exact-match results are either not returned or not ranked highly in the search results, whereas a partially matching result is ranked highly.

My suspicion is that many organizations that create internal social networks and do extensive analytics on internal traffic tend to implement the types of search algorithms that place more weight on criteria such as recency and number of existing page views when returning search results. Unfortunately, this has the side effect that exact keyword matches (e.g. document names and other exact search phrases) do not appear at the top of the search results.
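
To illustrate the suspicion with made-up numbers: a blended score that weights popularity and recency heavily can push a document whose title matches the query exactly below a merely partial match. The weights, documents and relevance heuristic below are all invented, but the effect is the same one I keep seeing:

    # Toy example with invented documents and weights: heavy popularity and
    # recency weighting buries the exact title match below a partial match.
    docs = [
        {"title": "Q3 Expense Policy",      "views": 12,   "days_old": 400},  # exact match
        {"title": "Expenses FAQ (popular)", "views": 9500, "days_old": 7},    # partial match
    ]

    def score(doc, query):
        text_relevance = 1.0 if doc["title"].lower() == query.lower() else 0.4
        popularity = min(doc["views"] / 10_000, 1.0)
        recency = 1.0 / (1.0 + doc["days_old"] / 30)
        return 0.3 * text_relevance + 0.5 * popularity + 0.2 * recency

    query = "Q3 Expense Policy"
    for doc in sorted(docs, key=lambda d: score(d, query), reverse=True):
        print(round(score(doc, query), 3), doc["title"])
    # Prints the partial match first, even though the query is an exact title.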

This is despite the fact that many of these search features allow a user to filter results by things like document type and other metadata, which should allow more specific or targeted results to be returned.

Has anyone else experienced this during their research, and have you found the cause of it? Other research or examples from end users would also be helpful.