Is there a regex way to match generally all possible subdomains in robots.txt?


Given a website with the fictional domain example.com.
The owner of this website added a subdomain: x.example.com.

  • After one year, the owner changed x to y, ending up with y.example.com
  • After two years, the owner changed y to z, ending up with z.example.com

None of these changes was accompanied by an update to the rules referencing example.com in robots.txt, so the owner ended up with a serious long-term SEO problem: crawling software was directed to scan webpages that no longer existed (the x and y ones, respectively).
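For illustration, the kind of rule being asked about might look like the sketch below. Note that this is hypothetical: standard robots.txt directives such as `Disallow` match URL paths on the host that serves the file, not hostnames, so the host-wildcard line is not valid syntax, just a way of expressing the intent.

```
# Hypothetical sketch only -- robots.txt has no host-matching syntax.
# Intended meaning: "apply these rules whatever the subdomain is called".
# Desired (invalid) host wildcard: *.example.com

User-agent: *
Disallow: /old-section/
```

In practice each subdomain serves its own robots.txt at its own root, which is part of why the question arises.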

What regex prophylaxis could the owner have applied beforehand to prevent the SEO problem?
Is there a regex way to match generally all possible subdomains in robots.txt?