How to stop Google Search from offering links to private pages on a website, using “noindex” and “nofollow”

I have a family history website; call it "my_family.com". The primary file, index.php, has some introductory remarks of explanation and an HTML form into which one puts the website's password (there's a single password shared by all family members). If one visits "my_family.com", enters the correct password, and clicks the "Submit" button, the PHP code in the file takes you to the first of several HTML files, call it "first.html", which gives one links to further HTML files. All of these files contain family trees, copies of letters, photos, reminiscences, obituaries, etc., none of which should be available to non-family members.

I soon found that if one put the phrase "my_family.com" into the Google search window (whether on computer or smartphone), one got a list of options: not just a Login option but about eight to ten 3-4 word excerpts from HTML files on the website; and if one clicked on any of these latter options, one bypassed the password process and was taken directly to other files on the website, i.e., files that should never be publicly revealed.
What I've done to avoid such access is to create a cookie in the original index.php file. If the user enters the correct password, the cookie is set to "passwordCorrect". Each subsequent HTML file then checks whether the cookie has that value before the user is allowed to move on. Putting in the cookies has solved the problem of public access, but nevertheless a Google search still shows the 3-4 word excerpts.

I have tried to stop Google Search from doing this by putting into the header section of first.html: `<meta name="robots" content="noindex, nofollow">`. But that tag has been in the file for about three weeks and has proved useless. I tried using Google Search Console to get Google to make an early "crawl" of my_family.com, but am frustrated by the lack of examples of how to use it, and don't think I succeeded. Maybe I should be asking for a crawl of my_family.com/first.html, instead of the basic my_family.com website?

I'd appreciate any advice anyone has about this. For example: how do I determine when the last crawl was? When can I expect the next crawl? Is the meta tag in the correct file? Thanks.
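For what it's worth, the per-page cookie check described above can be sketched in client-side JavaScript along these lines. This is only a sketch under assumptions: the cookie name `auth` and the value `passwordCorrect` are stand-ins for whatever index.php actually sets, and the redirect target is assumed to be the login page.

```javascript
// Returns true if the cookie string contains auth=passwordCorrect.
// (Cookie name and value are assumptions; match them to what index.php sets.)
function hasAuthCookie(cookieString) {
  return cookieString
    .split(";")
    .map(function (part) { return part.trim(); })
    .some(function (part) { return part === "auth=passwordCorrect"; });
}

// At the top of each protected page, before any content is shown:
// if (!hasAuthCookie(document.cookie)) {
//   window.location.replace("/index.php"); // back to the login form
// }
```

Note that a purely client-side check like this only deters casual visitors: the server still sends the full HTML to anyone (including Googlebot), so a crawler or a user with JavaScript disabled can read the content anyway. A server-side check (e.g., serving the pages through PHP and verifying the cookie before emitting any output) is the more robust fix.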

I've indexed my pages in Search Console, but Google doesn't show them in search results [duplicate]

I have indexed all of my web pages in Google Search Console, but Google does not show them in search results.

my website is : https://voyage-actualite.com/

Can you help me, please? See the latest articles: they won't show in search results.

Thank you

Multi-level paging where the inner level page tables are split into pages with entries occupying half the page size

A processor uses $36$-bit physical addresses and $32$-bit virtual addresses, with a page frame size of $4$ KB. Each page table entry is of size $4$ bytes. A three-level page table is used for virtual-to-physical address translation, where the virtual address is used as follows:

  • Bits $30$–$31$ are used to index into the first-level page table.
  • Bits $21$–$29$ are used to index into the second-level page table.
  • Bits $12$–$20$ are used to index into the third-level page table.
  • Bits $0$–$11$ are used as the offset within the page.

The number of bits required for addressing the next-level page table (or page frame) in the page table entry of the first, second, and third level page tables are, respectively:

(a) $20, 20, 20$

(b) $24, 24, 24$

(c) $24, 24, 20$

(d) $25, 25, 24$

I got the answer as (b), since in each page table entry we are, after all, required to point to a frame number in main memory as the base address.

But the site linked here says that the answer is (d), and the logic they use, working in chunks of $2^{11}$ B, seems to me to conflict with the entire concept of paging. Why would the system suddenly start storing data in main memory in chunks of a granularity other than the one defined by the page size or frame size? I do not get it.
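For reference, here is the arithmetic behind the two competing answers (stated without endorsing either). A frame is $4$ KB $= 2^{12}$ B, so a frame number in a $36$-bit physical address space is $36 - 12 = 24$ bits, which gives (b). The linked site's reasoning for (d) is that the second- and third-level page tables each have $2^9$ entries (9 index bits) of $4$ B, i.e., they occupy only $2^{11}$ B, half a frame; if such half-frame tables may start at any $2^{11}$ B boundary, their base addresses need $36 - 11 = 25$ bits, while a third-level entry points to a full $2^{12}$ B frame and needs only $24$ bits:

$$
\underbrace{36 - 12 = 24}_{\text{bits to address a } 2^{12}\text{ B frame}}
\qquad
2^{9}\ \text{entries} \times 4\ \text{B} = 2^{11}\ \text{B}
\;\Rightarrow\;
\underbrace{36 - 11 = 25}_{\text{bits to address a } 2^{11}\text{ B page table}}
$$

So (d) hinges on the assumption that inner-level page tables are aligned to their own $2^{11}$ B size rather than to the $2^{12}$ B frame size, which is exactly the assumption being questioned above.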

Google Search Console not showing proper other pages results (they are indexed)

I have a website, and I also have a WordPress blog at the /blog route of that site. They are somewhat independent for now, since I haven't done any internal linking between them. I created a new post on my blog and submitted it for indexing; it got positive results and is also showing up on Google, but I can't see its results in Google Search Console. Does anyone know why?

Side note: I haven't submitted a sitemap for now. (And if this is because of the sitemap, is it possible to view results without one?)

Who’s the artist that drew the art on pages 148 and 220 of the PHB?

Does anyone know the name of the artist or artists who drew the art on pages 148 and 220 of the Player's Handbook? I know there's a list of artists in the credits, but looking through it, I haven't been able to find that art specifically in any of their online portfolios.

Also, does anyone know if there exist uncropped versions of said art, and where you might find them?

Google is caching my homepage almost properly but not other pages

My website is built on Node.js with React as the frontend and uses server-side rendering.

Problem 1: Home page is cached properly by Google but other pages are cached without their CSS.

Problem 2: Blog posts, which are loaded dynamically, are not shown in the cache at all.

Link : https://www.google.com/search?q=site%3Aamitkk.com&oq=site%3Aami&aqs=chrome.0.69i59j69i57j69i58.12895j0j7&sourceid=chrome&ie=UTF-8

Thanks in advance

Amit