Was youtube-nocookie.com always serving cookies or did it start recently? Is it a scam?

I’ve been using YouTube embeds in enhanced privacy mode by

chang[ing] the domain for the embed URL in your HTML from https://www.youtube.com to https://www.youtube-nocookie.com

I remember checking via DevTools (Application/Storage tab) that no cookie was actually set.

A customer just notified me that they did find cookies set by the domain .youtube-nocookie.com, oddly enough something about "consent pending", which does not change when I click play, even though other sources state it should.
They have also alerted me to some shenanigans in Local Storage, namely an item with the key yt-remote-device-id, whose value is a UUID and whose expiration date lies 10 years in the future.

I have always suspected that Enhanced Privacy Mode is somewhat of an exaggeration, but this seems to defeat the purpose almost entirely. And it makes youtube-nocookie practically useless w.r.t. a less painful GDPR-compliant user experience.

Is this a recent change? Is there any documentation or changelog on that?

Serving “less trusted” content on the same domain

Let’s say we run a web app at "example.org". It uses cookies for user authentication.

Our website also has a blog at "example.org/blog", hosted by a third party. Our load balancer routes all requests to "/blog" (and subpaths) to our blog host’s servers. We don’t distrust them, but we’d prefer that security issues with the blog host can’t affect our primary web app.

Here are the security concerns I’m aware of, along with possible solutions.

  1. The requests to the blog host will contain our user’s cookies.
    • Solution: Have the load balancer strip cookies before forwarding requests to the blog host.
  2. An XSS on the blog allows the attacker to inject JS and read the cookie.
    • Solution: Use HttpOnly cookies.
  3. An XSS on the blog allows the attacker to inject JS and make an AJAX request to "example.org" with the user’s cookies. Because of the same origin policy, the browser allows the attacker’s JS to read the response.
    • Solution: Have the load balancer add some Content-Security-Policy to the blog responses? What’s the right policy to set? (A rough nginx sketch of this idea follows the list.)
    • Solution: Suborigins (link) looks nice, but we can’t depend on browser support yet.
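For points 1 and 3 together, this is a rough sketch of what I have in mind on the load balancer, assuming the load balancer is nginx; blog-backend.example is just a placeholder for the blog host, and I’m not at all sure the CSP value is right (that’s part of the question):

    # Hypothetical load-balancer config (nginx); blog-backend.example is a placeholder
    location /blog {
        # 1. Strip our users' cookies before the request reaches the blog host
        #    (an empty value makes nginx drop the header entirely)
        proxy_set_header Cookie "";

        # 3. Tentative policy for blog responses: forbid XHR/fetch from blog pages
        #    and only allow scripts, styles and images from our own origin.
        add_header Content-Security-Policy "default-src 'self'; connect-src 'none'; frame-ancestors 'self'" always;

        proxy_pass https://blog-backend.example;
    }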

Is there a way to safely host the blog on the same domain?

Trademark Registry Business Serving 10 Countries. Recently Launched and Doing Well Already

Welcome to the auction for Regmarker.com

Intro

Regmarker is an online international trademark law firm registering trademarks in 10 territories, namely the United States of America, the United Kingdom, Canada, China, Europe, Pakistan, Brazil, Australia, Russia and Mexico.

The company was launched only recently, in July, and has already made 9 high-ticket sales.

This business is 100% outsourced and you do not need to know any law in order…


How to make Google understand that these are not self-serving reviews?

I run a blog where I write reviews of restaurants and pubs, and users can also leave ratings (aggregateRating).

For years, my reviewRating was indexed fine.

Some months ago, in the SERP the reviewRating was replaced by the aggregateRating, I think because of this Google rule:

Ratings must be sourced directly from users. [*]

Now the aggregateRating has been removed as well, I suppose because of this:

Pages using LocalBusiness or any other type of Organization structured data are ineligible for star review feature if the entity being reviewed controls the reviews about itself. For example, a review about entity A is placed on the website of entity A, either directly in their structured data or through an embedded third-party widget. [*]

My blog has a lot of reviews of many different places. How can I make Google understand that these are not self-serving reviews?

This is an example of my page markup:

{  "@context": "http://schema.org/",  "@type": "LocalBusiness",  "name": "Resturant Name",  "description": "orem ipsum dolor sit amet, consectetur adipiscing elit. Nunc eu eros sed eros gravida fermentum non sed...",  "image": {    "@type": "ImageObject",    "url": "https://i.picsum.photos/id/310/700/525.jpg",    "width": 700,    "height": 525  },  "Review": {    "@type": "Review",    "name": "Resturant Name",    "reviewBody": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc eu eros sed eros gravida fermentum non sed ante. Maecenas malesuada orci sapien, vitae hendrerit mauris eleifend in. Integer facilisis dignissim scelerisque. Nam quis dictum metus. .",    "author": {      "@type": "Person",      "name": "Jhon Doe"    },    "datePublished": "2013-11-08T14:41:19+01:00",    "dateModified": "2020-06-02T21:24:19+02:00",    "reviewRating": {      "@type": "Rating",      "ratingValue": "4.3",      "bestRating": 5,      "worstRating": 1    }  },  "aggregateRating": {    "@type": "AggregateRating",    "ratingValue": 3.4,    "ratingCount": 32,    "bestRating": 5,    "worstRating": 1  },  "address": "Street Address",  "priceRange": "€€",  "telephone": "12346789" } 

When tested with the Structured Data Testing Tool I get no errors, and the preview does show the aggregateRating.

What if I also add the “publisher” property? Would it be helpful?
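For illustration, this is roughly what I mean, nested inside the "review" object of the markup above (the blog name and URL are made up):

    "publisher": {
      "@type": "Organization",
      "name": "My Review Blog",
      "url": "https://myreviewblog.example"
    }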

[*] from Google’s technical guidelines

re-CAPTCHAs.Net Online and Serving

I’d like to let you know that a new re-CAPTCHA bypass and auto-solver service is now online on the market. The website is https://re-captchas.net and it is serving 24/7, stable and strong.

It solves re-CAPTCHAs with a 100% success rate and an average solving time of 10 to 120 seconds. The only accepted payment is Bitcoin.

Visit https://re-captchas.net now!

Serving unique content to users with Geo Location

I hope this is in the right spot.

When it comes to changing website content based on visitor location, serving targeted ads and content…in theory, could this not be applied to single individuals online?

For instance, you normally use CNN. Say you wanted to serve a completely different reality to a handful of individuals without them recognizing that the stories they're seeing don't completely match the actual stories most people see…based on their exact location on the grid, down to the…


Web UX – Serving two versions of site based on selected customer type

I’m building a site for an accounting firm. They would like to serve two distinct home pages for “Individuals” and “Businesses.”

The site would look similar but have different content relating to the services they offer these two types of customers.

  • What is the best way to allow users to choose their customer type? (Currently thinking pop-up prompt on first visit)
  • How easy should it be for users to switch between types?
  • How obvious should I make it for the user to know which version of the site they are on?
  • Any differences between desktop and mobile?

Thank you!

SharePoint Online is not serving index.aspx file from document library on mobiles

I have created an Angular app and uploaded it to the SiteAssets library of my SharePoint Online site, renaming index.html to index.aspx. If I open the index.aspx file on my notebook, everything works as expected: the index.aspx file is served to the browser and the browser shows my Angular app.

But if I open the index.aspx file in any browser on my smartphone, the index.aspx file is not served to the browser; a strange-looking page is served instead.

There must be some configuration switch I am missing that tells SharePoint not to serve index.aspx on mobiles. Any help is much appreciated!

Best wishes Michael

Serving local HTTP over VPN [on hold]

My local server is serving several webpages to my local network via HTTP. If I’m not at home, I connect to that server via a VPN hosted on the web server (WireGuard).

I do not route all traffic through that VPN, only DNS requests.
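For what it’s worth, one way to check which path the traffic takes would be the following (sketch for a Linux client; the LAN address 192.168.1.10 and the interface name wg0 are just examples, not my real setup):

    # Which interface would packets to the web server leave on?
    ip route get 192.168.1.10
    # "dev wg0" in the output would mean the HTTP traffic goes through the encrypted tunnel;
    # the normal uplink interface would mean it travels as plain HTTP over the internet.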

When I visit my local HTTP webpages while I’m not at home, is the connection encrypted by the VPN, or should I switch to HTTPS?

nginx is running but not serving sites

Need a little help. I had nginx up and running for about 2 years. I am not sure what I did when trying to update a certificate, but now my sites are not accessible.

I went to my website:

https://ttrss.shiromar.com

I noticed the certificate expired so I went about renewing it and ran:

sudo certbot --nginx -d ttrss.shiromar.com

and I got an error about certbot not being able to access the website for verification. I checked networking and forwarding rules and everything seemed fine, so I decided to start the certificate process anew and ran:

sudo certbot delete --cert-name ttrss.shiromar.com

This was when my site became inaccessible: certbot can’t reach my site, so it can’t issue a certificate. I commented out the SSL lines in the server block and restarted nginx and PHP, and still couldn’t reach it.

here is the server block for ttrss:

    server {
        listen          80;
        listen          [::]:80;
        server_name     ttrss.shiromar.com www.ttrss.shiromar.com;
        return          301 https://$server_name$request_uri;
    }

    server {
        listen          443 ssl http2;
        listen          [::]:443 ssl http2;
        server_name     ttrss.shiromar.com www.ttrss.shiromar.com;

        root /var/www/ttrss;
        index index.php;

        access_log /var/log/nginx/ttrss_access.log;
        error_log  /var/log/nginx/ttrss_error.log info;

        location = /robots.txt {
            allow all;
            log_not_found off;
            access_log off;
        }

        location / {
            index           index.php;
        }

        #ssl_certificate         /etc/letsencrypt/live/ttrss.shiromar.com/fullchain.pem;
        #ssl_certificate_key     /etc/letsencrypt/live/ttrss.shiromar.com/privkey.pem;
        #ssl_trusted_certificate /etc/letsencrypt/live/ttrss.shiromar.com/chain.pem;

        location ~ \.php$ {
            try_files $uri =404; # Prevents autofixing of path which could be used for exploit
            fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
            fastcgi_index index.php;
            include /etc/nginx/fastcgi.conf;
        }
    }

here is a netstat showing ports open

    sudo netstat -tanpl|grep nginx
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      2052/nginx: master
    tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      2052/nginx: master
    tcp6       0      0 :::80                   :::*                    LISTEN      2052/nginx: master
    tcp6       0      0 :::443                  :::*                    LISTEN      2052/nginx: master
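A check that should narrow this down (sketch, assuming curl is installed on the VM) is to request the site over the loopback interface, which should show whether nginx answers locally or whether the problem is somewhere in the network path:

    # Talk to nginx directly on localhost, bypassing the router and port forwarding
    curl -v --resolve ttrss.shiromar.com:80:127.0.0.1 http://ttrss.shiromar.com/

    # Same for the HTTPS listener; -k because the certificate has been deleted
    curl -vk --resolve ttrss.shiromar.com:443:127.0.0.1 https://ttrss.shiromar.com/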

This server is running in a VM on Hyper-V. I did have a checkpoint from early last year; I tested it and it does work, but it’s a bit too old.

I have triple-checked IP addresses and port forwarding rules, and I keep coming back to an issue with nginx or a setting in Ubuntu that’s blocking all 443/80 traffic. Oh, and this is Ubuntu 18.04 and

nginx version: nginx/1.14.0 (Ubuntu)

Here is a status of ufw:

    sudo ufw status
    Status: active

    To                         Action      From
    --                         ------      ----
    OpenSSH                    ALLOW       Anywhere
    8181/tcp                   ALLOW       Anywhere
    Nginx Full                 ALLOW       Anywhere
    443/tcp                    ALLOW       Anywhere
    80/tcp                     ALLOW       Anywhere
    3000                       ALLOW       Anywhere
    OpenSSH (v6)               ALLOW       Anywhere (v6)
    8181/tcp (v6)              ALLOW       Anywhere (v6)
    Nginx Full (v6)            ALLOW       Anywhere (v6)
    443/tcp (v6)               ALLOW       Anywhere (v6)
    80/tcp (v6)                ALLOW       Anywhere (v6)
    3000 (v6)                  ALLOW       Anywhere (v6)

This is self-hosted; the server is right in front of me. I too wondered about the ISP. Like I wrote, this all happened very suddenly: one moment everything was working and the next it wasn’t.

We can rule out the ISP because, as I wrote, the backup checkpoint works fine. I actually have the backup VM running right now, and if I want the backup to start serving I just need to change the port forwarding rules, but that backup is way too old. I tested this yesterday: I changed the port forwarding and everything was fine.