Google Search Console puts 404 URLs in the "Crawl anomaly" category and not the "Not found (404)" category

I’ve got a site where we return 404 Not Found for a few particular pages so that Google will move them to the Excluded "Not found (404)" category, but Google still reports those pages under the "Crawl anomaly" category. Please find the screenshots below.

[screenshots of the Search Console Index Coverage report]
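To rule out a soft 404, I also checked the HTTP status codes the pages actually return with a small script (the URLs below are placeholders for the real pages):

import requests  # third-party: pip install requests

# Placeholder URLs standing in for the real removed pages
urls = [
    "https://example.com/removed-page-1",
    "https://example.com/removed-page-2",
]

for url in urls:
    # Fetch with a Googlebot-like user agent and without following
    # redirects, so the raw status code for the URL itself is visible
    resp = requests.get(
        url,
        headers={"User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"},
        allow_redirects=False,
        timeout=10,
    )
    print(url, resp.status_code)

Every page prints 404, so the pages really do serve a hard 404.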

Why does Googlebot attempt to crawl /admin/install.php?

On one site I own, I recently started seeing Googlebot requesting non-existent URIs:

66.249.76.89 - - [23/Feb/2020:10:18:48 +0100] "GET /robots.txt HTTP/1.1" 404 118 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "-"
66.249.76.87 - - [23/Feb/2020:10:18:49 +0100] "GET /admin/install.php HTTP/1.1" 404 181 "-" "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "-"

This would all be well and good, were it not for the fact that it has never done this before, this URI has never existed (I have owned the domain for 10+ years), and it looks suspiciously like casual scanning for possible security issues.

The reverse DNS lookup (89.76.249.66.in-addr.arpa domain name pointer crawl-66-249-76-89.googlebot.com.) confirms it is indeed a Googlebot address.
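For what it's worth, this is roughly the check I ran, a small Python sketch of the reverse-plus-forward DNS verification, using the IP from the log line above:

import socket

ip = "66.249.76.89"  # address taken from the log line above

# Reverse lookup: the PTR record should end in googlebot.com or google.com
hostname, _, _ = socket.gethostbyaddr(ip)
looks_like_google = hostname.endswith((".googlebot.com", ".google.com"))

# Forward lookup: the name must resolve back to the same IP,
# otherwise the PTR record could simply be spoofed
forward_ips = socket.gethostbyname_ex(hostname)[2]

print(hostname, looks_like_google, ip in forward_ips)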

Can anyone shed more light on this?

Google Crawl rate & the log file mystery

According to Google Search Console for a website I am working on, Googlebot crawls ~5000 pages per day (min 2500, max 8500).

However, when looking at the Apache log files, GoogleBot only shows up ~10 times per day …

For example:

66.249.64.88 [22/Jan/2020:15:09:01 +0100] GET / HTTP/1.1 200 1358 Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

It is Googlebot, since a reverse DNS lookup does point to Google's servers:

$ host 66.249.64.88
88.64.249.66.in-addr.arpa domain name pointer crawl-66-249-64-88.googlebot.com.

But I am wondering: if Googlebot appears only ~10 times in the Apache log files while it crawls ~5000 pages per day, where are the remaining ~4990 crawls going?

How can I find out which resources Googlebot crawls when they do not appear in the log files?
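For reference, this is how I am counting the Googlebot entries in the access log (a small Python sketch; the log path and timestamp format match my setup and may need adjusting):

import re
from collections import Counter

LOG_PATH = "/var/log/apache2/access.log"  # adjust to the actual (v)host log file

per_day = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if "Googlebot" not in line:
            continue
        # Pull the date portion out of the timestamp, e.g. 22/Jan/2020
        match = re.search(r"\[(\d{2}/\w{3}/\d{4})", line)
        if match:
            per_day[match.group(1)] += 1

for day, hits in sorted(per_day.items()):
    print(day, hits)

It consistently reports around 10 Googlebot hits per day.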

Thanks!

Crawl distributed file system error: The object was not found

SharePoint 2013 Standard server.

We have a distributed file system (DFS) share which I want to crawl.

I use a UNC path like:

\\machine1234\Departments\HR

The account that runs the crawler can access it; I tested this by logging in with that account and pasting the above path into Explorer.

I run a full crawl and get the following error message:

The object was not found

I have run it with and without a proxy, with the same result. Any ideas?

How to stop a continuous crawl even though the content sources are all idle?

I have a six-server SharePoint 2016 search farm on Windows Server 2012 R2 (Server1 through Server6). Servers 5 & 6 host the crawler, admin component, and content processing; servers 3 & 4 host content processing and analytics; servers 1 & 2 host the index and query processing components.

We were getting a lot of errors on our continuous and incremental crawls, so I stopped them to troubleshoot. However, even though the content sources are all idle, the crawl log still shows the continuous crawl as running. Also, any crawls I start now just run without crawling anything: no errors, but no successes either. I have tried restarting the whole farm, stopping the OSearch, SPTimer, and SPAdmin services, and bouncing IIS as well. The crawl targets are up and serving content, and permissions there are correct. There is no resource contention anywhere.