Does Google Search Console assume there must be a retina image?

We noticed a considerable proportion of 404s in the Crawl Stats report, and upon inspection it turns out that pretty much all the images on our WP blog yield a 404 for their "retina" version, e.g. https://example.com/blog/image@2x.jpg. We do not supply a retina version in our markup currently, and I don’t think we ever did. Is there any reason you know of that Google would expect a retina version to exist and report a 404 for it?

An example of the image block markup follows:

<figure class="wp-block-image size-large">
  <noscript>
    <img width="1024" height="920"
         src="/wp-content/uploads/2021/01/motorcycle.jpg"
         alt="" class="wp-image-14392"
         srcset="/wp-content/uploads/2021/01/motorcycle.jpg 1024w,
                 /wp-content/uploads/2021/01/motorcycle-300x270.jpg 300w,
                 /wp-content/uploads/2021/01/motorcycle-768x690.jpg 768w,
                 /wp-content/uploads/2021/01/motorcycle-860x773.jpg 860w,
                 /wp-content/uploads/2021/01/motorcycle-680x611.jpg 680w,
                 /wp-content/uploads/2021/01/motorcycle-250x225.jpg 250w,
                 /wp-content/uploads/2021/01/motorcycle-50x45.jpg 50w"
         sizes="(max-width: 1024px) 100vw, 1024px" />
  </noscript>
  <img width="1024" height="920"
       src='data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%201024%20920%22%3E%3C/svg%3E'
       data-src="/wp-content/uploads/2021/01/motorcycle.jpg"
       alt="" class="lazyload wp-image-14392"
       data-srcset="/wp-content/uploads/2021/01/motorcycle.jpg 1024w,
                    /wp-content/uploads/2021/01/motorcycle-300x270.jpg 300w,
                    /wp-content/uploads/2021/01/motorcycle-76x68.jpg 76w,
                    /wp-content/uploads/2021/01/motorcycle-50x45.jpg 50w"
       data-sizes="(max-width: 1024px) 100vw, 1024px" />
  <figcaption>Motorcycle</figcaption>
</figure>

Is there a process for petitioning Google to factor accessibility into search results ranking?

As the question says, is there a process for petitioning Google to factor accessibility into search results ranking?

I know individual accounts can submit feature requests to specific Google products, but I was wondering if there’s a more formal process, even an enterprise-level one.

Is there any evidence that summary headings affect either SEO or user retention?

Many sites use summary or "Key Takeaways" headings that summarize a blog page’s content. You can see an example at Investopedia (see the "Key Takeaways" section). These sections contain short bullet points drawn from the article that summarize its content.

Is there any evidence that these summaries either help with SEO or help with user retention?

Add metabox if there is at least one post available

So I am building a WordPress Dashboard widget which will showcase all posts & pages where a Gutenberg Block is active.

The code below does its job and pulls in an array based on get_posts().

Here is what I’m attempting to do:

  1. Is there a way that I can invoke the add_action call and the pardot_dashboard_widget() function ONLY if get_posts() returns at least one post? If it’s an empty array, don’t even bother creating the metabox (see the sketch after the code below).

Here is the code:

/**
 * Pardot Widget for Dashboard
 */
function pardot_dashboard_widget() {
    add_meta_box(
        'pardot_dashboard_meta_box',
        esc_html__( 'Pardot Form Locations', 'wporg' ),
        'pardot_dashboard_stats',
        'dashboard',
        'side',
        'high'
    );
}
add_action( 'wp_dashboard_setup', 'pardot_dashboard_widget' );

function pardot_dashboard_stats() {
    $args = [
        's'         => '<!-- wp:acf/pardot-form ',
        'sentence'  => 1,
        'post_type' => [
            'post',
            'page',
        ],
    ];
    $pardot_form_query = get_posts( $args );
    if ( ! $pardot_form_query ) {
        echo 'There are no active pardot forms available.';
    }
}
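A minimal sketch of one way to do this, assuming the query stays as above: run get_posts() first inside the wp_dashboard_setup callback and only call add_meta_box() when at least one post comes back. The helper name pardot_get_form_posts() is hypothetical.

// Sketch only: skip registering the metabox when the query is empty.
// pardot_get_form_posts() is a hypothetical helper wrapping the query above.
function pardot_get_form_posts() {
    return get_posts( [
        's'         => '<!-- wp:acf/pardot-form ',
        'sentence'  => 1,
        'post_type' => [ 'post', 'page' ],
    ] );
}

function pardot_dashboard_widget() {
    // Bail out before add_meta_box() if no post or page contains the block.
    if ( empty( pardot_get_form_posts() ) ) {
        return;
    }
    add_meta_box(
        'pardot_dashboard_meta_box',
        esc_html__( 'Pardot Form Locations', 'wporg' ),
        'pardot_dashboard_stats',
        'dashboard',
        'side',
        'high'
    );
}
add_action( 'wp_dashboard_setup', 'pardot_dashboard_widget' );

Keeping the check inside the wp_dashboard_setup callback means the query only runs when the dashboard screen is being built, not on every admin page load.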

Is there a performance or storage space advantage to lowering time/timestamp precision in PostgreSQL?

PostgreSQL allows time/timestamp to specify a precision:

time, timestamp, and interval accept an optional precision value p which specifies the number of fractional digits retained in the seconds field. By default, there is no explicit bound on precision. The allowed range of p is from 0 to 6.

From PostgreSQL 13 Documentation

However, the documentation also states that storage space is a constant 8 bytes for timestamp and for time without time zone, and 12 bytes for time with time zone, regardless of p.

In the case that one doesn’t need the extra precision, say milliseconds (p = 3) or whole seconds (p = 0) would suffice, is there an advantage to explicitly lowering the precision?
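For illustration, a quick sketch of what the precision argument actually does (the table and column names, and the sample output values, are made up): the declared precision only controls how fractional seconds are rounded, while a plain timestamp column stays 8 bytes on disk in every case.

-- Hypothetical table: each column takes 8 bytes on disk regardless of p;
-- p only controls rounding of the fractional seconds.
CREATE TABLE events (
    created_default timestamp,      -- up to microseconds (p = 6 by default)
    created_ms      timestamp(3),   -- rounded to milliseconds
    created_s       timestamp(0)    -- rounded to whole seconds
);

INSERT INTO events VALUES (now(), now(), now());

SELECT * FROM events;
-- Illustrative output:
-- created_default            | created_ms              | created_s
-- 2021-05-04 12:34:56.789123 | 2021-05-04 12:34:56.789 | 2021-05-04 12:34:57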

Is there any way I can defend against automated file download bots?

My company offers a very specifically tailored Android application to supplement our other software. Since users must have an account in our other software in order to use the Android application, it is not useful to anyone who does not already subscribe to our other software. Because of this, we are in a position to know exactly how many Android app users we should have.

We host the application file (.apk) on our own website and direct our first-time users to download the application from there. After that, an automated update system built into the app notifies users when an update is available and will update the app for them if they choose to accept it.

We have very basic analytics in place on our website to monitor manual vs. automatic (update) downloads of our APK. We can see what file was downloaded, at what time, and by what IP address. After several months of manual and automated download counts that matched our user count, last month we suddenly logged several hundred more manual downloads than we have users.

The download pattern I observed when investigating is that the same IP address downloads the APK in bursts of 2-9 requests within the span of about a minute, and then a minute or two later another IP address does the same thing. This happened on and off for several days, and I suspect some kind of bot or automated software found our APK and is now downloading several copies of it for reasons I can’t currently comprehend.

I am hoping to find out whether there is some server configuration, third-party technology, or website programming technique we could put in place to protect our site from this behavior. I don’t have reason to believe this "bot" is causing us any monetary or intellectual harm at this point, but if it continues it will certainly render our download analytics useless.

Are there pitfalls to implementing a hybrid site with both responsive layouts and dynamic serving layouts?

I have a 20-year-old site that I want to upgrade to a responsive layout. Originally it was a desktop-only layout, but when a significant portion of my user base went mobile I implemented dynamic serving, where I sniff the user agent and serve different HTML depending on whether the user agent is for a mobile device.

I started to implement the responsive layouts, but I found that about 5% of my user base is on older browsers that don’t have all the CSS and JavaScript features I would like to use. For example, only 95.42% of users fully support CSS grid layouts (https://caniuse.com/css-grid). While I wouldn’t want to take the time to develop a site for just 5% of my users, I already have a site that works for those users and I don’t want to lose that much of my traffic when I move to responsive.

My current plan is to still do server side tests based on the user agent like this pseudo code:

use responsive-layout if (bot)
use desktop-layout    if (msie or firefox < 54 or chrome < 58 or edge < 16)
use mobile-layout     if (opera-mini or android-browser or samsung-internet < 5 or safari-mobile < 10.3)
use responsive-layout otherwise
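For what it’s worth, here is a rough PHP sketch of that branching, assuming the detection happens in PHP; the function name choose_layout() and the user-agent regexes are made up for illustration and only cover a few of the branches. Whatever detection you end up using, responses that differ by user agent should generally include a Vary: User-Agent header so caches and crawlers know the HTML varies.

<?php
// Rough sketch of the pseudocode above; the regexes are illustrative only
// and not a complete or reliable user-agent parser.
function choose_layout( string $ua ): string {
    if ( preg_match( '/bot|crawler|spider/i', $ua ) ) {
        return 'responsive';
    }
    if ( preg_match( '/MSIE|Trident/', $ua )
        || ( preg_match( '/Firefox\/(\d+)/', $ua, $m ) && (int) $m[1] < 54 )
        || ( preg_match( '/Edge\/(\d+)/', $ua, $m ) && (int) $m[1] < 16 ) ) {
        return 'desktop';
    }
    if ( preg_match( '/Opera Mini/i', $ua )
        || ( preg_match( '/SamsungBrowser\/(\d+)/', $ua, $m ) && (int) $m[1] < 5 ) ) {
        return 'mobile';
    }
    return 'responsive';
}

// The HTML varies by user agent, so announce that to caches and crawlers.
header( 'Vary: User-Agent' );
$layout = choose_layout( $_SERVER['HTTP_USER_AGENT'] ?? '' );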

Most of my users and search engine crawlers would get the responsive layout. Only specific non-capable and older browsers would get my existing static layouts.

I know that Google supports either responsive layouts or dynamic serving layouts but I haven’t been able to find any information about a hybrid approach like this. Are there any pitfalls (especially with regards to SEO) of mostly using responsive but falling back to dynamic serving for some browsers?