I have some below. If you have any, please include them in your answer, or if you know of any logic to get more sitemap URLs from the following links, I would greatly appreciate it.
Here are the URLs I got:
Sitemap: http://www.amazon.com/sitemaps.f3053414d236e84.SitemapIndex_0.xml.gz
Sitemap: http://www.amazon.com/sitemaps.1946f6b8171de60.SitemapIndex_0.xml.gz
Sitemap: http://www.amazon.com/sitemaps.bbb7d657c7e29fa.SitemapIndex_0.xml.gz
Sitemap: http://www.amazon.com/sitemaps.11aafed315ee654.SitemapIndex_0.xml.gz
Sitemap: http://www.amazon.com/sitemaps.c21f969b5f03d33.SitemapIndex_0.xml.gz
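If you want to expand the sitemap-index URLs above into the page-level sitemaps they contain, the following is a minimal standard-library sketch. The function names `extract_sitemap_urls` and `fetch_sitemap_index` are illustrative, and note that large sites may serve these files gzipped or block automated fetches entirely.

```python
import gzip
import re
import urllib.request

def extract_sitemap_urls(xml_text):
    """Pull every <loc> entry out of a sitemap or sitemap-index XML document."""
    return re.findall(r"<loc>\s*(.*?)\s*</loc>", xml_text)

def fetch_sitemap_index(url):
    """Download a (possibly gzipped) sitemap index and return its child URLs."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    # .gz extension or gzip magic bytes -> decompress before parsing
    if url.endswith(".gz") or data[:2] == b"\x1f\x8b":
        data = gzip.decompress(data)
    return extract_sitemap_urls(data.decode("utf-8", errors="replace"))
```

Each URL returned from an index file can be fed back through the same function, since child sitemaps use the same `<loc>` structure.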
Please tell me how to disable this function
How do I edit a deleted URL across all verified URLs?
Hi, I just realized that a Web 2.0 site used in a running GSA SER campaign has been removed. I would like to edit all the verified URLs that point to the removed Web 2.0 site.
I want to change the removed (Web 2.0) URL on all verified links.
Is there any way to do that, manually or automatically?
I’ve attempted to improve the SEO of my website by submitting a sitemap to Google Search Console.
The status shows success, but 448 URLs were discovered even though my website has fewer than 10 pages: clientsforcounsellors.com/sitemap.xml
Also, when I type in my domain name in the address bar, followed by any slug, e.g. clientsforcounsellors.com/sdlkgr, I’m redirected to my homepage instead of having a 404 page displayed.
What’s the problem here? Do soft 404s have anything to do with this?
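Redirecting every nonsense path to the homepage is indeed the classic soft-404 pattern, and it is one plausible reason for the odd "discovered" count. A quick way to reason about what the server is doing is to request a made-up path with a client that follows redirects (e.g. `requests.get(url, allow_redirects=True)`) and classify the result; `classify` below is a hypothetical helper, not a Search Console feature.

```python
def classify(requested_url, final_url, status_code):
    """Rough classification of how a server answered a request for a
    made-up path like /sdlkgr. A hard 404 is what Google prefers; a
    redirect to another page, or a 200 on a nonsense path, looks like
    a soft 404 to crawlers."""
    if status_code == 404:
        return "hard 404 (good)"
    if status_code == 200 and final_url.rstrip("/") != requested_url.rstrip("/"):
        return "soft 404: redirected to another page"
    if status_code == 200:
        return "soft 404: nonsense path returns 200"
    return f"status {status_code}"
```

With `requests`, you would call it as `classify(url, r.url, r.status_code)` after `r = requests.get(url)`; a redirect to the homepage then lands in the "soft 404" branch.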
I’ve got a website that includes the following directive in the HEAD section:
<base href="https://foo.com" />
Now, I want to create internal links to sections within the page ‘https://foo.com/product/name-of-the-product’
But, instead of linking to ‘https://foo.com/product/name-of-the-product#photos’, it links to ‘https://foo.com/#photos’.
Any tip to fix this? Thank you.
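That behaviour is standard URL resolution: a fragment-only link like `href="#photos"` is resolved against the document's base URL, and the `<base>` tag overrides that base for the whole page. The same RFC 3986 resolution browsers use can be reproduced with Python's `urljoin`, which makes the cause (and the fix of writing the full path into the anchor) visible:

```python
from urllib.parse import urljoin

# With <base href="https://foo.com/"> in effect, every "#photos" link
# resolves against the site root, not the current page:
assert urljoin("https://foo.com/", "#photos") == "https://foo.com/#photos"

# Resolved against the actual page URL (no <base> override), the same
# fragment stays on the product page:
assert urljoin(
    "https://foo.com/product/name-of-the-product", "#photos"
) == "https://foo.com/product/name-of-the-product#photos"
```

In practice that means either removing/adjusting the `<base>` tag, or writing the anchors with the full path (`href="/product/name-of-the-product#photos"`) so the fragment no longer resolves against the base.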
Alternating between projects
When I looked before, I'm sure I saw a way to alternate between projects, i.e. turning projects on and off for a set period (so fewer projects run at once and the system isn't overburdened), but when I looked again I couldn't find it.
I thought it was under
main admin screen > options
but I've looked a few times and can't find it. Does anyone know where this is?
Don't remove URLs
Where it says
"dont remove urls – check hint"
what does that mean exactly? Does it mean it won't remove the URLs when it re-verifies them?
I read the mouse-over text but wasn't sure what it meant.
Also, "remove after 1st verification try" is greyed out and can't be selected.
Yes, the browser and webserver are on the same machine!
Whenever I try to embed an HTML file from outside the WWW root in an iframe, thus using the file:/// URL syntax, it is ignored. No error message is logged or anything; nothing happens whatsoever.
If I change the path to ./blablabla.html and put the file in the WWW root, it displays the HTML page in the iframe. So it's not some kind of fundamental issue with displaying iframes or anything. I have tried both with and without URL-encoding the path.
Are IFRAMEs inherently unable to display file:/// URIs, even though the MozDev page mentions no restrictions whatsoever for the src attribute? https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe
I am scraping my own targets with ScrapeBox 24/7.
After I scrape 5-10 GB of data, I trim to root, de-duplicate, and send the resulting list to GSA PI.
GSA PI feeds the identified list to my GSA SER instance, which verifies the list so it can be used for link building.
Now, my questions:
Should I keep adding new targets to the existing identified list, or should I occasionally wipe the identified list and start over with a fresh GSA PI identified list?
Because over time, all the targets have already been tried by GSA SER and there is nothing left in the identified list to verify.
If so, how often? Or you can tell me how big your identified list gets before you wipe it.
I am running GSA SER on a dedicated server with 2000 threads and getting 100+ LPM, so you can get an idea of when I would need to wipe it.
I am using the SB Link Extractor to scrape the initial targets, and I believe I am getting fewer unique targets than I should.
How can I increase that?
Thanks in advance
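For reference, the trim-to-root plus de-duplicate step described above can also be done outside ScrapeBox; this is a minimal sketch, and `trim_to_root` is an illustrative name rather than any ScrapeBox or GSA function:

```python
from urllib.parse import urlparse

def trim_to_root(urls):
    """Reduce each URL to scheme://host/ and drop duplicates,
    preserving first-seen order."""
    seen = set()
    roots = []
    for url in urls:
        p = urlparse(url)
        if not p.netloc:
            continue  # skip malformed entries with no host
        root = f"{p.scheme}://{p.netloc}/"
        if root not in seen:
            seen.add(root)
            roots.append(root)
    return roots
```

Running it over a scraped list before handing the result to GSA PI keeps the identified list from filling up with many pages of the same site.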
I have a client who wants to track users and see how many times they view certain pages, without a user login portal, as they think a login will deter users from viewing the pages. Am I missing something, or is there a way to do this, maybe with unique URLs or cookies?
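Cookies do make this possible without a login: the server sets an anonymous first-party cookie on the first visit and counts views per cookie id. Most analytics tools already work this way, but the core idea can be sketched as below; `track_view` and `counters` are hypothetical names, and the id here stands in for a value carried in a `Set-Cookie` header.

```python
import uuid

def track_view(counters, page, visitor_id=None):
    """Count a page view per anonymous visitor.

    `visitor_id` would come from a first-party cookie; when the browser
    sends none, mint a new id (the real app would then return it in a
    Set-Cookie header so later requests carry it)."""
    if visitor_id is None:
        visitor_id = uuid.uuid4().hex  # new anonymous visitor
    key = (visitor_id, page)
    counters[key] = counters.get(key, 0) + 1
    return visitor_id, counters[key]
```

In a real deployment the cookie needs an expiry, `SameSite` settings, and consent handling where privacy law requires it; unique per-user URLs work too, but break as soon as links are shared.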
Hi @Sven, sorry, I can't seem to find the .txt file. I want to change the list of "authority" sites that get posted to; for example, it currently links to https://www.telegraph.co.uk/search?=KEYWORD
Is it possible to either add my own or edit the existing ones? Thanks