Do search engines consider the length of longer percent-encoded URLs or shorter non-English character URLs for SEO?

Do search engines consider the length of the non-English characters, or do they consider the length of the percent-encoded characters, when scoring a URL for SEO?

For example:

http://example.com/%d8%b1%d9%88%d8%b3%d9%8a%d8%a7-%d8%a7%d9%84%d8%b9%d9%85%d9%84-%d8%b9%d9%84%d9%89-%d8%a7%d8%a8%d8%aa 
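
For a sense of the difference, here is a rough Python sketch (using the example slug above) showing how much longer the percent-encoded form is than the decoded non-English form:

    from urllib.parse import unquote

    encoded = ("%d8%b1%d9%88%d8%b3%d9%8a%d8%a7-%d8%a7%d9%84%d8%b9%d9%85%d9%84-"
               "%d8%b9%d9%84%d9%89-%d8%a7%d8%a8%d8%aa")
    decoded = unquote(encoded)   # the same slug as readable Arabic characters

    print(len(encoded))          # 99 characters in percent-encoded form
    print(len(decoded))          # 19 characters once decoded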

I searched here and found that the related posts are 3 to 7 years old. Is there any update on how popular search engines index URLs with non-English characters?

In a Content-Security-Policy header, should the URLs be quoted or not, and is there any security implication to this decision?

So in a CSP like the one below:

content-security-policy: upgrade-insecure-requests; frame-ancestors 'self' https://stackexchange.com

Should the URL part be quoted like this (example from Mozilla's web security guidelines, even though this example mixes both styles):

# Disable unsafe inline/eval and plugins, only load scripts and stylesheets from same origin,
# fonts from google, and images from same origin and imgur. Sites should aim for policies like this.
Content-Security-Policy: default-src 'none'; font-src 'https://fonts.googleapis.com'; img-src 'self' https://i.imgur.com; object-src 'none'; script-src 'self'; style-src 'self'

Or unquoted like this:

# Disable unsafe inline/eval, only load resources from same origin except also allow images from imgur
# Also disables the execution of plugins
Content-Security-Policy: default-src 'self'; img-src 'self' https://i.imgur.com; object-src 'none'

[1] Examples from here: https://infosec.mozilla.org/guidelines/web_security#content-security-policy
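
For context, here is a minimal Python sketch (assuming a hypothetical Flask app) of how a response could carry a policy in the second, unquoted style, keeping single quotes only for CSP keywords such as 'self' and 'none':

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/")
    def index():
        resp = make_response("hello")
        # Keywords like 'self' and 'none' stay quoted; the imgur host source does not.
        resp.headers["Content-Security-Policy"] = (
            "default-src 'self'; "
            "img-src 'self' https://i.imgur.com; "
            "object-src 'none'"
        )
        return resp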

Emailed HTML Rich Text Column Relative URLs

Does someone know another “fix” for this? I am looking for a simple way. And how does this behave in SharePoint 2010?

from http://rrfreeman.blogspot.com.br/2010/12/emailed-relative-html-rich-text-column.html

Scenario

You have a list with a multi-line text column set to Enhanced Rich Text. The item contains links to the current server. You are emailing the contents of that column via SharePoint Designer 2007 workflows.

Issue

The hyperlinks in the email do not work. The server name was automatically stripped from the URLs, leaving relative paths.

Resolution

Workaround: Add the following note to the field description, notifying users of the problem and the workaround: “Use http://TinyUrl.com for hyperlinks and images on this site. This will prevent emailed relative links.” This works because SharePoint only strips URLs it recognizes as local URLs; these are usually defined in the Alternate Access Mappings. You can use any URL redirector.

Fixes:

• Use Visual Studio workflows and replace the relative URLs with absolute URLs (see the sketch after this list);

• Intercept the outbound emails (on the SharePoint or email servers) and replace the URLs there;

• Use a custom workflow action that prepends the server name to the URLs;

• Use a different field type;

• Try different custom rich text editor columns.
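
As a rough illustration of the first fix, here is a Python sketch (with a hypothetical server name; the real fix would live in a Visual Studio workflow or custom action) that prepends the server name to relative href/src values:

    import re

    SERVER = "http://sharepoint.example.com"   # hypothetical server name

    def absolutize(html):
        # Prepend the server name to href/src values that start with "/",
        # leaving already-absolute URLs untouched.
        return re.sub(r'(href|src)="(/[^"]*)"',
                      lambda m: '{}="{}{}"'.format(m.group(1), SERVER, m.group(2)),
                      html)

    print(absolutize('<a href="/sites/docs/item.aspx">item</a>'))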

Problems with Bigger Sitemaps with 50K URLs

My sitemap contains 50K URLs (7.8 MB) and uses the following URL syntax:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">
  <url>
    <loc>https://www.ninjogos.com.br/resultados?pesquisa=vestido, maquiagem,</loc>
    <lastmod>2019-10-03T17:12:01-03:00</lastmod>
    <priority>1.00</priority>
  </url>
</urlset>
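
As an aside, the <loc> above contains raw spaces and commas in the query string; a small Python sketch of writing the same entry with the query value percent-encoded and the URL XML-escaped could look like this (whether that is related to the fetch problem is only a guess):

    from urllib.parse import quote
    from xml.sax.saxutils import escape

    base = "https://www.ninjogos.com.br/resultados?pesquisa="
    terms = "vestido, maquiagem"

    # Percent-encode the query value so spaces and commas are not written raw,
    # then XML-escape the full URL before placing it inside <loc>.
    loc = escape(base + quote(terms))
    print("  <url>")
    print("    <loc>{}</loc>".format(loc))
    print("    <priority>1.00</priority>")
    print("  </url>")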

The problems are:

• Search Console says “Sitemap could not be read”;

• The Sitemap takes 1 hour to load and Chrome stops working;


• In Firefox the sitemap downloaded in 1483 ms and fully loaded after 5 minutes;

Things I’ve done without success:

• Disable GZip compression;

• Delete my .htaccess file;

• Created a test sitemap with 1K URLs and the same syntax and sent it to Search Console; it worked, but the 50K URL sitemap still shows “unable to fetch Sitemap”;


• Tried to inspect the URL directly, but it gave an error and asked to try again later, while the 1K URL sitemap worked;

• Tried to validate the sitemap on five different sites (Yandex, etc.) and all passed with no errors or warnings.

Can anyone shed some light on this?

How to Filter Out Scraped URLs That Already Have Your Website’s URL?

Here is the situation. I was running a Comment Poster on blogging platforms and got to 30k successful posts before Windows restarted on me and killed the campaign. It was only 20% done, so my question is: how do I avoid reposting to these same 30k URLs if I do another campaign? I did not get a chance to export the successfully posted URLs. Is there a feature that can scan the newly scraped URLs and remove the ones that already have my website's link? I want to avoid duplicate posting.
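
In case the tool has no built-in filter for this, a rough Python sketch of the manual route (with a hypothetical domain) would be to fetch each scraped URL and drop the ones whose pages already contain your link:

    import urllib.request

    MY_DOMAIN = "example.com"               # hypothetical: your website's domain

    def already_linked(url):
        # Fetch the page and check whether your domain already appears in it.
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="ignore")
            return MY_DOMAIN in html
        except Exception:
            return False                    # keep unreachable pages for manual review

    scraped = ["http://blog1.example/post", "http://blog2.example/post"]
    fresh = [u for u in scraped if not already_linked(u)]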

Are AWS signed URLs crawled by Google?

I have used an Amazon pre-signed URL to share content.

https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-presigned-urls.html
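
For reference, a minimal boto3 sketch (with a hypothetical bucket and key) of how such a URL is generated; the resulting link carries a signature and an expiry in its query string rather than relying on secrecy alone:

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket and key; the returned URL includes query parameters
    # such as the signature and an expiry time.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-bucket", "Key": "report.pdf"},
        ExpiresIn=3600,          # the link stops working after one hour
    )
    print(url)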

Is Google able to crawl this URL? I’m sharing this URL with just one client. What about other services? There are some that let you share content with someone by creating a seemingly random URL (or even one based on hashes), like www.somedomain.com/something/15b8b348ea1d895d753d1acb57683bd9. Is that URL crawled by Google or other search engines?

Thanks

What Can I Do With The “Unknown” List In GSA-SER>>Options>>Advanced>>Tools>>Search Online For URLs?

I am experimenting with this feature again.
The Identified list is very long, but there are a few thousand Unknown URLs.
I see I can save this list, but I am unsure what these URLs are.
What makes them unknown? How are they different from URLs that just aren’t an engine match at all?
How can I use these?  Thanks…