Setting Up Private Employee Survey Area On Company Website

Long-time listener, first-time caller.

Our company has about 120 employees and growing. We are at a point where we need to collect self-evaluations and other survey data from our workforce, but only our administrators and managers have user accounts in our Google Apps (because those costs add up!). Since we can’t require anyone to have a personal Google account, we don’t have a reliable way to verify or authenticate the rest of our employees as they fill out surveys. Our solution so far is to hand out paper forms and do the data entry manually.

I’ve been charged with finding a solution. I was thinking it’d be possible to set up a member area of sorts on our website where employees could register and log in for surveys and such. I could get Google Sheets talking to the survey database and we’d be off to the races.
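
To give an idea of the direction I had in mind (just a sketch, not something I’ve built): once an employee is authenticated, the backend could push each survey response straight into a sheet via the Google Sheets API. The spreadsheet ID, sheet name, and fields below are all placeholders.

    # Sketch: append one survey response as a row in a Google Sheet.
    # Assumes a service account that has been granted access to the spreadsheet;
    # every ID and field name here is a placeholder.
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    creds = service_account.Credentials.from_service_account_file(
        "service-account.json",
        scopes=["https://www.googleapis.com/auth/spreadsheets"],
    )
    sheets = build("sheets", "v4", credentials=creds)

    response_row = ["emp-0042", "2024-05-01", "Self-evaluation", "Answer text..."]
    sheets.spreadsheets().values().append(
        spreadsheetId="PLACEHOLDER_SPREADSHEET_ID",
        range="Responses!A1",
        valueInputOption="USER_ENTERED",
        body={"values": [response_row]},
    ).execute()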

Our public-facing site is hosted on Squarespace, if that makes any difference.

Anyway, the world has changed many times over since I last had anything to do with the back end of a website (it’s true; I’m not a pro), and I’m completely unsure of where to start. I can probably build it once I get my bearings, so I’m here looking for suggestions on how to begin.

Help?

Yandex not crawling compressed sitemap index

I have submitted a sitemap index file (one that links to other sitemaps that contain the actual URLs search engines are instructed to crawl). It is GZip compressed.

The Yandex sitemap validation tool tells me it is valid, with 202 links and no errors.

However, in Yandex Webmaster it shows up with a small grey icon in the status column; when clicked, it says ‘Not indexed’.

Yandex is not indexing the URLs provided in the file (which are all new), even though it states it has consulted the sitemap.
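
For completeness, a quick way to sanity-check that the gzipped index itself is served and decompresses to well-formed XML would be something along these lines (the URL is a placeholder for mine); since the validation tool already reports it as valid, I assume this check would pass too.

    # Sketch: fetch the gzipped sitemap index and confirm it decompresses to XML
    # that starts with a <sitemapindex> element. The URL is a placeholder.
    import gzip
    import urllib.request

    with urllib.request.urlopen("https://example.com/sitemap-index.xml.gz") as resp:
        raw = resp.read()
        print("Content-Type:", resp.headers.get("Content-Type"))

    xml = gzip.decompress(raw).decode("utf-8")
    print(xml[:200])  # expect an XML declaration followed by <sitemapindex ...>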

Any ideas what may be wrong?

Pagination (archive posts) getting out of hand – what to do from an SEO POV?

WordPress, as many of us know, creates archive loops of old posts.

From an SEO POV, some of my categories are now at 100+ pages, which is a lot of bloat…

Sure, I can give each of these paginated pages a canonical link, but it still feels unnecessary to have all these indexed pages that just contain a title and an excerpt (which is basically duplicate content).

Is one approach simply to switch off archive loops, or does Google ignore these archived pages?

Thanks

Is there a regex way to match generally all possible subdomains in robots.txt?

Given a website with the fictional domain example.com.
The owner of this website added a subdomain: x.example.com.

  • After one year, the owner changed x to y, so as to have y.example.com
  • After two years, the owner changed y to z, so as to have z.example.com

None of these changes was accompanied by an update of the example.com structures in robots.txt, so the owner ended up with a serious long-term SEO problem: crawling software was still being asked to scan web pages that no longer exist (the x and y ones, respectively).

What regex precaution could the owner have used beforehand to prevent this SEO problem?
Is there a regex way to match generally all possible subdomains in robots.txt?

I’m really confused with my .htaccess config

My directory structure is

- Assets
- Dashboard
    - index.php
    - account.php
- index.php
- about.php
- verify.php

What I basically want is:

  1. Remove the .php extension; for example, http://example.org/about.php should become http://example.org/about/ (including the trailing slash)
  2. If .php is encountered in the URL, redirect it back to http://example.org/about/
  3. Instead of the URL being http://example.org/verify.php?key=123456 it should be http://example.org/verify/123456
  4. The same conditions should be met in subdirectories as well; for example, http://example.org/dashboard/ should be the URL instead of http://example.org/dashboard/index.php

My .htaccess file looks like this

    RewriteEngine On

    # Map extensionless URLs to the matching .php file
    # (only when the request is not an existing file or directory)
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^([^.]+)$ $1.php [L]

    # Map /verify/<key>.php to /verify.php?key=<key>
    RewriteRule ^verify/([0-9a-zA-Z]+)\.php /verify.php?key=$1 [NC,L]

    # Static assets: always revalidate
    <FilesMatch "\.(jpg|png|svg|css)$">
    Header set Cache-Control "proxy-revalidate, max-age=0"
    </FilesMatch>

    # Deny direct access to .htaccess
    <Files .htaccess>
    order allow,deny
    deny from all
    </Files>

    # Disable directory listings
    Options All -Indexes

    ErrorDocument 403 /404.html
    ErrorDocument 404 /404.html
    ErrorDocument 500 /404.html

I’m really sorry for not giving much clarity.

I’m running it on a local XAMPP web server.

  • Rules 3 and 4 work absolutely fine!
  • Rule 1 works fine, but when it encounters the trailing slash it gives me a 404.
  • Rule 2 doesn’t seem to work.

Email sent to two addresses sharing the same organization domain @123abc.com and one bounced back. Was it successfully delivered to the other address?

It is my first time asking a question, so my apologies if there are any mistakes. I sent an email to two addresses (two different departments in the same organization, sharing @123abc.com); one bounced back from mailer-daemon@googlemail.com due to ‘address not found’. I later found out that it was a generated email address. Could someone please tell me whether my email was successfully delivered to the other ‘good’ address (the other department)? Thank you very much for your help in advance.

Is there a standard for “virtual receipts”, and is it actually used anywhere?

I just got another e-mail from my food store after placing an order. It has no plaintext version, only an HTML one. Only with an extreme amount of effort could I parse out the products and their individual prices and quantities… at least until they change their e-mails the next time.

I currently "only" parse out the delivery date/time, the total price for the order, and the order ID, which is insanity.

Is there really no "digital receipt" standard? There seems to be no hidden JSON/CSV blob anywhere in their e-mail, nor is one manually downloadable from their website when logged in. How is one supposed to actually build a local database of what they buy and at what prices? Even just figuring out how to parse their e-mails for the total price was quite a bit of work, and I’m certain that almost nobody out there does this.
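
To show what I mean by "quite a bit of work", the scraping ends up being roughly of this shape: take the HTML part of the message and fish values out with selectors tied to their current template. The selector and number format below are invented for illustration; the real ones break whenever the template changes.

    # Sketch of the brittle scraping involved; the CSS selector and price format
    # are made up for illustration and must be updated when the template changes.
    import re
    from bs4 import BeautifulSoup

    def parse_total(html: str) -> float:
        soup = BeautifulSoup(html, "html.parser")
        # Hypothetical: today the total happens to sit in a cell with this class.
        cell = soup.select_one("td.order-total")
        if cell is None:
            raise ValueError("Template changed again; selector no longer matches")
        match = re.search(r"(\d+[.,]\d{2})", cell.get_text())
        if match is None:
            raise ValueError("No price found in the total cell")
        return float(match.group(1).replace(",", "."))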

How come this was apparently overlooked, in spite of being such an important and basic thing for "e-commerce"? Am I really expected to manually input all of this data or spend countless hours figuring out their broken HTML blob and keep updating it whenever they change their e-mails, and do this for every single store I ever buy anything from?

I strongly suspect that there is some standard, probably released as an RFC in 1997 or something, but nobody wants to implement it because it means "giving away control" in their eyes?

SEO Keyword Density Issue

I have a website with keyword densities of 8% and 4% for my keywords, but I only used each keyword once. The website doesn’t have a lot of actual text. Does this high keyword density hurt my site’s SEO even though I only used the keywords once? I checked the keyword density using the SEO Review Tools density checker.
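
(My guess at the arithmetic, assuming the checker simply divides keyword occurrences by total word count: on a page with very little text, a single use of a keyword can easily come out around 8%.)

    # Assumed formula: density = occurrences / total words.
    occurrences = 1
    total_words = 12            # a page with very little actual text
    density = occurrences / total_words
    print(f"{density:.0%}")     # -> 8%, from using the keyword just once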

Confusing domain name extensions

There are similar domain name extensions which are really confusing (at least to me), such as these:

.gift and .gifts

.game and .games

.football and .futbol

This suggests that, in the future, extensions like .coms, .nets, and .orgs may become available.

What are the criteria used to allow these new domain name extensions, and who regulates this?