What are some URLs for the Amazon sitemap? I have some… need more [closed]

I have some below. If you have any, please include them in your answer, or if you can find any logic for getting more sitemap URLs from the following links, I would greatly appreciate it.

I got these URLs here:

Sitemap: http://www.amazon.com/sitemaps.f3053414d236e84.SitemapIndex_0.xml.gz
Sitemap: http://www.amazon.com/sitemaps.1946f6b8171de60.SitemapIndex_0.xml.gz
Sitemap: http://www.amazon.com/sitemaps.bbb7d657c7e29fa.SitemapIndex_0.xml.gz
Sitemap: http://www.amazon.com/sitemaps.11aafed315ee654.SitemapIndex_0.xml.gz
Sitemap: http://www.amazon.com/sitemaps.c21f969b5f03d33.SitemapIndex_0.xml.gz
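If these came from https://www.amazon.com/robots.txt (the standard place a site advertises its sitemaps), you can pull the current list programmatically instead of copying it by hand. A minimal TypeScript sketch, assuming Node 18+ for the built-in fetch; note that Amazon may refuse requests that lack a browser-like User-Agent:

// Extract the "Sitemap:" entries from a site's robots.txt.
async function sitemapsFromRobots(origin: string): Promise<string[]> {
  const text = await (await fetch(`${origin}/robots.txt`)).text();
  return text
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.toLowerCase().startsWith("sitemap:"))
    .map((line) => line.slice("sitemap:".length).trim());
}

sitemapsFromRobots("https://www.amazon.com").then((urls) => {
  for (const url of urls) console.log(url);
});

Each of the .xml.gz files above is itself a sitemap index: gunzip it and it lists further sitemap files, which is the kind of "logic to get more sitemap URLs" you are after.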

My website has fewer than 10 pages: why does my sitemap have 448 discovered URLs?

I’ve attempted to improve the SEO of my website by submitting a sitemap to Google Search Console.

The status shows success, but Search Console reports 448 discovered URLs even though my website has fewer than 10 pages: clientsforcounsellors.com/sitemap.xml

Also, when I type my domain name into the address bar followed by any slug, e.g. clientsforcounsellors.com/sdlkgr, I'm redirected to my homepage instead of seeing a 404 page.

What's the problem here? Do soft 404s have anything to do with this?
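Two quick checks can separate the issues: count what the sitemap actually lists, and see what status a made-up slug returns. A sketch in TypeScript, assuming Node 18+ (fetch is built in); a real 404 is what you want for the nonsense slug, while a 200 or a redirect to the homepage is exactly what Google flags as a soft 404:

// 1. Count the <loc> entries the sitemap actually lists.
async function countSitemapUrls(sitemapUrl: string): Promise<number> {
  const xml = await (await fetch(sitemapUrl)).text();
  return (xml.match(/<loc>/g) ?? []).length;
}

// 2. Check what a made-up slug returns. redirect: "manual" keeps the raw
// 3xx instead of following it, so a homepage redirect shows as 301/302.
async function checkSlug(url: string): Promise<number> {
  const res = await fetch(url, { redirect: "manual" });
  return res.status;
}

async function main() {
  console.log("sitemap lists", await countSitemapUrls("https://clientsforcounsellors.com/sitemap.xml"), "URLs");
  console.log("random slug returned", await checkSlug("https://clientsforcounsellors.com/sdlkgr"));
}
main();

If the first number is 448, the sitemap itself is listing those URLs (auto-generated sitemaps often include tags, feeds, or pagination); if the second check returns anything but 404 or 410, the catch-all redirect is the likely soft-404 source.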

Problem with in-page fragment links when using 'base href'

I've got a website which includes the following directive in the HEAD section:

<base href="https://foo.com" /> 

Now, I want to create internal links to sections within the page 'https://foo.com/product/name-of-the-product':

<a href="#photos">Photos</a> 

But, instead of linking to ‘https://foo.com/product/name-of-the-product#photos’, it links to ‘https://foo.com/#photos’.

Any tips on how to fix this? Thank you.
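As far as I know this is the specified behavior: a fragment-only href is resolved against the document's base URL, so with that <base> tag, "#photos" becomes https://foo.com/#photos. The no-script fix is to spell out the path in each in-page link, e.g. <a href="/product/name-of-the-product#photos">. If the links must stay as bare fragments, a small client-side rebase works; a sketch in TypeScript:

// Rewrite every fragment-only link so it resolves against the current
// page's own URL instead of the <base> href.
const pageUrl = window.location.href.split("#")[0];
document.querySelectorAll<HTMLAnchorElement>('a[href^="#"]').forEach((a) => {
  // getAttribute returns the raw "#photos", not the base-resolved URL
  a.href = pageUrl + (a.getAttribute("href") ?? "");
});

Another option, if nothing on the site actually relies on the base, is simply to remove the <base> tag.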

Alternating between projects & "don't remove URLs"

Alternating between projects

When I looked before, I'm sure I saw a way to alternate between projects, i.e. turning projects on and off for a set period (so you have fewer projects running at once and don't overburden the system), but when I looked again I couldn't find it.

I thought it was under
main admin screen > options

but I've looked a few times and couldn't find it. Does anyone know where this is?

Don't remove URLs

Under
project > options

there is an option that says
“dont remove urls – check hint”.
What does that mean exactly? Does it mean it won't remove the URLs when it re-verifies them? I read the mouse-over text but wasn't sure what it meant.

Also, “remove after 1st verification try” is greyed out and can't be selected.

Are IFRAMEs inherently unable to display file:/// URLs?

Yes, the browser and the web server are on the same machine!

Whenever I try to embed an HTML file from outside the WWW root in an iframe, using the file:/// URL syntax, it is simply ignored: no error message is logged, and nothing happens at all.

If I change the path from file:///C:\blablabla.html to ./blablabla.html and put the file in the WWW root, the HTML page displays in the iframe, so it's not some fundamental issue with displaying iframes. I have tried both with and without URL-encoding the C:\... part.

Are IFRAMEs inherently unable to display file:/// URIs, even though the MozDev page mentions no restrictions whatsoever for the src attribute? https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe
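As far as I know this is deliberate browser security behavior rather than an iframe limitation: pages served over http(s) are not allowed to load file: URLs, and the block is silent. The usual workaround is to serve the out-of-root file through the web server itself. A minimal sketch in TypeScript on Node; the path and the /embedded route name are placeholder assumptions:

import { createServer } from "node:http";
import { readFile } from "node:fs/promises";

// Hypothetical file living outside the WWW root.
const OUTSIDE_WEBROOT = "C:\\blablabla.html";

createServer(async (req, res) => {
  if (req.url === "/embedded") {
    try {
      const html = await readFile(OUTSIDE_WEBROOT);
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end(html);
    } catch {
      res.writeHead(404).end("not found");
    }
  } else {
    res.writeHead(404).end();
  }
}).listen(8080);

The page then uses <iframe src="/embedded"> instead of the file:/// URL, which works because the iframe content now arrives over http like everything else.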

Best practices for using Identified URLs from GSA PI with SER

Hi 

I am scraping my own targets with ScrapeBox 24/7. After I scrape 5–10 GB of data, I trim to root, de-duplicate, and send the resulting list to GSA PI.

GSA PI then feeds the Identified list to my GSA SER instance, which verifies the lists so they can be used for link building.
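For reference, the trim-to-root / de-duplicate step described above can also be scripted outside ScrapeBox. A TypeScript sketch for Node; the file names are placeholder assumptions:

import { readFileSync, writeFileSync } from "node:fs";

// One scraped URL per line in; unique scheme+host roots out.
const lines = readFileSync("scraped-urls.txt", "utf8").split(/\r?\n/);
const roots = new Set<string>();

for (const line of lines) {
  try {
    roots.add(new URL(line.trim()).origin + "/");
  } catch {
    // skip blank lines and anything that is not a parseable URL
  }
}

writeFileSync("unique-roots.txt", [...roots].join("\n"));
console.log(`${lines.length} scraped lines -> ${roots.size} unique roots`);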

Now my questions are:

*Question 1*
Should I keep adding new targets to the existing Identified list, or should I occasionally wipe the Identified list and start over with a fresh one from GSA PI?

I ask because, over time, all the targets in the Identified list have already been tried by GSA SER, and there is nothing left in it to verify.

If wiping is the way to go, how often should I do it? Or tell me how big your Identified list gets before you wipe it.

I am running GSA SER on a dedicated server with 2,000 threads and getting 100+ LPM, so that should give you an idea of when I would need to wipe it.

*Question 2*
I am using the ScrapeBox Link Extractor to scrape the initial targets. I believe I am getting fewer unique targets than I should.

How can I increase that number?

Thanks in advance.