How to update Google’s index for AJAX fragments?

We replaced our old site with a WordPress site around 6 months ago. Our old store plugin used links with AJAX fragments which were indexed in Google, like so: #!/item/info/etc

Fragments aren’t sent to the server, so as I understand it our only option was a JavaScript-based redirect, which I implemented many months ago. Based on my research, I believed this would prompt Google to update its index; however, the old and new results both still appear in Google next to each other.

Is there a way to hurry it along? I have confirmed that the new page each redirect points to is identified as the canonical page.
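Since everything after the `#` never reaches the server, a fragment redirect has to run on the client. A minimal sketch of the kind of redirect involved (the `#!/item/...` pattern and `/product/` target are assumptions for illustration, not the poster's actual URL scheme):

```javascript
// Map an old hashbang fragment to its new WordPress URL.
// Returns null when the fragment is not an old AJAX URL.
function newUrlForFragment(fragment) {
  const match = fragment.match(/^#!\/item\/(.+)$/);
  if (!match) return null;
  return "/product/" + match[1];   // hypothetical new permalink scheme
}

// On page load, redirect if the visitor arrived on an old fragment URL.
if (typeof window !== "undefined" && window.location.hash) {
  const target = newUrlForFragment(window.location.hash);
  if (target !== null) {
    // replace() avoids leaving the old URL in the browser history.
    window.location.replace(target);
  }
}
```

Googlebot does execute this kind of redirect when it renders the page, but only on its rendering crawl, which runs far less often than plain fetching, so deindexing of the old fragment URLs can take a long time.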

List index out of range when the size of the list changes

When processing a list (with data loaded from a CSV) in a loop, a list of size x is processed completely, but when I run the same code on a list of size y, with y > x, it raises the exception mentioned in the title.

This is the loop:


frecuencia_mcd = []
for i in range(len(tendencia_mcd) - 1):
    frecuencia_mcd.append(tendencia[i+1] - tendencia[i])
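One likely cause (an assumption, since the question doesn't show where `tendencia` comes from): the loop takes its length from `tendencia_mcd` but indexes into `tendencia`, so it raises `IndexError` as soon as `tendencia_mcd` grows longer than `tendencia`. Taking both the length and the elements from the same list removes the mismatch; a sketch with sample data:

```python
# Pairwise differences taken from a single list, so the range can
# never outrun the list actually being indexed.
tendencia_mcd = [3, 7, 12, 20]   # sample data standing in for the CSV values

frecuencia_mcd = [
    tendencia_mcd[i + 1] - tendencia_mcd[i]
    for i in range(len(tendencia_mcd) - 1)
]
print(frecuencia_mcd)  # consecutive differences: [4, 5, 8]
```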


Powerful DA-35 PA-47 High TF/CF Over-blog Dofollow Backlinks with Google Index Guarantee for $2

Getting a backlink from a website with STRONG Domain Authority and Page Authority will get you Google ranking & organic traffic success. This is the kind of backlink you want if you are having trouble getting your keyword rankings. I will provide you with an extremely powerful link from a real over-blog that will significantly help your Google rankings and domain strength in no time. Hand-written unique articles related to your main niche with a quality link – outstanding results! A DA35 PA47 real blog (feel free to contact me for samples and more data). 90K unique visitors every month. Boost your domain authority and its value.

Stats of the site:
- Global Rank: 1,825
- Country Rank: 158
- DA: 35
- PA: 47
- Trust Flow: 72
- Trust Metric: 72
- Global Rank: 1,308
- Alexa Reach Rank: 1,159
- External Backlinks: 234,847,153
- Referring Domains: 169,060
- EDU Backlinks: 103,181
- GOV Backlinks: 8,550
- PR Quality: Very Strong
- Domain Age: 15 Years

Why do you need high DA guest post backlinks? If you want to keep your ranks in the top positions or quickly increase keyword ranks on the search engine, our guest posting service is your best choice, as search engines love contextual backlinks from high authority sites like the high DA guest blogs here. We guarantee the quality, as we post articles on spam-free, high authority guest blog sites.

Features of my service:
- Permanent links
- Do-follow links
- 500–600 word article
- Live links REPORT
- Most advanced link building service of the year
- Most reasonable prices
- Google index guarantee
- 100% satisfaction
- Authority in search engines
- Brand awareness
- Page Rank boost (takes years though)
- Link juice
- Boost in rankings
- Fast ranking

This is a legit website with legit authors and real traffic. It’s not a PBN or a website which would write almost anything. We have a strict editorial policy and we monitor our content regularly to ensure a positive experience for all our readers.

Niches covered: Business – Entertainment – Finance – Health – Global Events – Daily News – Interviews – Science – Technology (Gadgets, Software, Cars, etc) – USA Breaking News – Press Release

by: 1serp786
Created: —
Category: Link Building
Viewed: 246

Why is a Postgres 11 hash index so large?

Postgres 11.4 on RDS and 11.5 at home.

I’m looking at hash indexes more closely today because I’m having problems with a citext index being ignored. And I find that I don’t understand why a hash index is so large. It’s taking about 50 bytes/row when I’d expect it to take 10 bytes + some overhead.

I’ve got a sample database with a table named record_changes_log_detail that has 7,733,552 records, so ~8M. Within that table is a citext field named old_value that’s the source for the hash index:

CREATE INDEX record_changes_log_detail_old_value_ix_hash
    ON record_changes_log_detail
    USING hash (old_value);

Here’s a check on the index size:

select 'record_changes_log_detail_old_value_ix_hash' as index_name,
       pg_relation_size('record_changes_log_detail_old_value_ix_hash') as bytes,
       pg_size_pretty(pg_relation_size('record_changes_log_detail_old_value_ix_hash')) as pretty;

That returns 379,322,368 bytes, or about 362MB. I’ve dug into the source a little, and into this fine piece a bit more.

It sounds like a hash index entry for a row is a TID paired with the hash key itself, plus some kind of index counter within the page. That’s two 4-byte integers and, I’m guessing, a 1- or 2-byte integer. As a naive calculation, 10 bytes × 7,733,552 = 77,335,520. The actual index is roughly 5x larger than that. Granted, you need space for the index structure itself, but that shouldn’t take the rough cost per row from ~10 bytes to ~50, should it?

Here are the details of the index, read using the pageinspect extension and then manually pivoted for legibility.

select * from hash_metapage_info(get_raw_page('record_changes_log_detail_old_value_ix_hash', 0));

magic       105121344
version     4
ntuples     7733552
ffactor     307
bsize       8152
bmsize      4096
bmshift     15
maxbucket   28671
highmask    32767
lowmask     16383
ovflpoint   32
firstfree   17631
nmaps       1
procid      17269
spares      {0,0,...,0,17631,0,...,0}   -- all zeros except 17631 at position 32
mapp        {28673,0,...,0}             -- all zeros except 28673 in the first slot

select * from hash_page_stats(get_raw_page('record_changes_log_detail_old_value_ix_hash', 1));

live_items       2
dead_items       0
page_size        8192
free_size        8108
hasho_prevblkno  28671
hasho_nextblkno  4294967295
hasho_bucket     0
hasho_flag       2
hasho_page_id    65408
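The metapage numbers above do account for the size, under the assumption that bucket-page count, overflow-page count, and the fixed meta/bitmap pages make up the whole relation. maxbucket 28671 means 28,672 bucket pages exist, firstfree 17631 implies roughly that many overflow pages, plus one metapage and one bitmap page (nmaps 1). Because hash buckets are allocated in power-of-two batches as the table splits, whether or not they fill, the space cost per row is driven by page bookkeeping and free space, not just the ~10-byte tuple. A back-of-the-envelope check:

```python
# Back-of-the-envelope check of the hash index size, using the
# metapage values reported above (standard 8 KB pages assumed).
page_size = 8192
ntuples = 7_733_552
bucket_pages = 28_671 + 1      # maxbucket is the highest bucket number
overflow_pages = 17_630        # ~firstfree from the metapage
meta_and_bitmap = 2            # 1 metapage + 1 bitmap page (nmaps = 1)

total_pages = bucket_pages + overflow_pages + meta_and_bitmap
total_bytes = total_pages * page_size
print(total_bytes)             # matches pg_relation_size: 379,322,368
print(total_bytes / ntuples)   # ~49 bytes/row of apparent overhead
```

So the ~50 bytes/row isn't per-tuple overhead at all: it's the ~46,000 partially filled 8 KB pages the bucket-splitting scheme has allocated.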

200,000 GSA SER backlinks to increase link juice and get faster indexing on Google for $7

200k GSA SER backlinks to increase link juice and get faster indexing on Google. A way to get massive exposure for your websites from various social media. This is great for higher search engine ranking, as well as the best way to do SEO for your website and high authority websites, usually known as high quality “link juice”. Wondering how to optimize your website for higher search engine ranking? Here is a short SEO guide for you about understanding the process of backlink building:

Level 3
- This level helps you increase your Level 2 websites’ link juice; the GSA SER backlinks will also help you optimize your website for Google and other major search engines.

Level 2
- Add “Level 1” links to various social media sites
- Craft more relevant articles and add “Level 1” links on buffer sites
- Add “Level 1” links to various websites like wikis, article directories, forums, etc.

Level 1
- Links from quality traffic websites
- Links from aged & established websites
- Links from relevant pages
- Links from unique C-class IPs
- Links from high DA, PA, TF pages

Accepted per order:
- 1 to 100 keywords (should be one topic)
- 1 to 100 URLs

Important: we will use all keywords randomly across all URLs (if you have more than 1 URL). The most recent statistics on the Search Engine Optimization process are mind-blowing. Thanks for your order.

by: Eyasin9
Created: —
Category: Link Building
Viewed: 253

Is it recommended to have tab index navigation for a complex web application?

I am doing requirements gathering for a complex in-house web application. My customer is adamant about having keyboard tab-index interaction for the application along with regular mouse interaction. My research says there are no physically challenged users who would require tab-index interaction.

What is the value added if we incorporate tab-index interaction across the application? What are the pros and cons of having tab index in a complex web application?


Undefined index error when using AJAX

Good day,

I have the following issue: I am trying to pass a value via AJAX to a PHP script, but it tells me I have an Undefined index error. The code that calls the PHP function is the following:

<td><input type="text" class="form-control" name="ID_Nova" id="ID_Nova"
    placeholder="Número de documento" required="true"
    value="<?php echo $ID_Novedad ?>" readonly></td>
<body onload="lista_DescuentoNovedad(ID_Nova.value);">

The AJAX function is the following:

function lista_DescuentoNovedad(ID_Novedad) {
    $(document).ready(function() {
        /*var ID_Novedad = $('#ID_Nova').val();*/
        var datastring = 'ID_Novedad=' + ID_Novedad;
        $.ajax({
            beforeSend: function() {
                $("#lista_DescuentoNovedad").html('<b>Actualizando lista de Descuentos en la novedad...</b>');
            },
            url: 'lista_DescuentoNovedad.php',
            data: datastring,
            type: 'POST',
            success: function(x) {
                $("#lista_DescuentoNovedad").html(x);
                /******** HAY QUE VALIDAR #lista_clientes ********/
                $("#lista_clientes").dataTable();
            },
            error: function(jqXHR, estado, error) {}
        });
    });
}

The PHP code that receives the information is the following:

include("funciones/conex.php");
$link = Conectarse();

$ID_Novedad = $_POST['ID_Novedad'];
$AnoActual = date("Y");
$con = mysql_query("SELECT * FROM tbl_dtldcto where ID_Novedad=$ID_Novedad AND Estado = 'PENDIENTE'", $link);

Apache won’t index a folder from another mount

I’m trying to enable directory listing for a folder outside the web root, on a different local ext4 mount, protected by Basic Authentication, but I’m getting an empty list and no logged errors. What’s strange is that if I put the known location of a file under this directory into my browser, it downloads the file just fine.


Here’s my example.conf file:

<VirtualHost *:80>
    ServerAdmin
    ServerName
    ServerAlias

    DirectoryIndex index.php
    DocumentRoot /var/www/

    <Directory />
        Options FollowSymLinks
        AllowOverride All
    </Directory>

    LogLevel warn
    ErrorLog  /var/apachelogs/error.log
    CustomLog /var/apachelogs/access.log combined

    Alias /blah2 "/blah1/blah2"

    <Location /blah2>
        Options +Indexes +MultiViews +FollowSymLinks
        IndexOptions +FancyIndexing
    </Location>
</VirtualHost>

And here’s my .htaccess

AuthType Basic
AuthName "Authentication Required"
AuthUserFile "/home/myusername/.htpasswd"
Require valid-user

Also, I’ve commented out IndexIgnore in /etc/apache2/mods-enabled/autoindex.conf:

#IndexIgnore .??* *~ *# RCS CVS *,v *,t 

Doing some testing: my configuration works fine if I move /blah1/blah2 under my home directory and run chmod -R 755 ~/blah1/blah2. Something about it being on another mount is messing up mod_autoindex, even though Apache can clearly read the files themselves. Removing authentication doesn’t help. With LogLevel warn I get no logged errors. After changing my LogLevel to trace4, here’s my error log.

Here’s the mount line from /etc/fstab:

UUID=[theuuid] /blah1 ext4 rw,nosuid,nodev,errors=remount-ro    0    0 
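One thing worth ruling out (an assumption, since the question doesn't show the mount point's ownership or modes): generating a listing requires read permission on the directory itself, while serving a file at a known path only needs execute (search) permission along the way. A freshly created ext4 mount point is often root-owned with restrictive modes, which produces exactly this "files download, listing is empty" split. A quick sketch reproducing the behaviour outside Apache:

```shell
# A directory that is traversable (x) but not readable (r) lets you
# fetch a file at a known path yet refuses to enumerate its contents --
# the same split behaviour as autoindex returning an empty list.
d=$(mktemp -d)
echo hello > "$d/file.txt"

chmod 311 "$d"                  # -wx--x--x: searchable, not listable
cat "$d/file.txt"               # known path still readable
ls "$d" 2>/dev/null || echo "listing denied"

chmod 755 "$d"                  # rwxr-xr-x: listing works again
ls "$d"
rm -rf "$d"
```

If the modes check out, comparing `namei -l /blah1/blah2` against the working copy under the home directory should show any difference in ownership or permissions along the two paths.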

Index Your Backlinks for $5

Picture this scenario: you’ve just created a guest post, bought a PBN post, or built foundation links. But… your backlinks are not indexed in Google. I’ve tried many indexing software and tools, and I’ve found Indexification the most effective way to index my backlinks. I know many other SEOs agree. Let me index your backlinks by running them through Indexification.

We have drip-feeding options. You can choose between having your backlinks indexed immediately, or I can drip feed your links for you. Drip feed options include:
- 1 day drip-feeding
- 2 day drip-feeding
- 3 day drip-feeding
- 4 day drip-feeding
- 5 day drip-feeding
- 6 day drip-feeding
- 7 day drip-feeding

Send me up to 5000 links to index.

Disclaimer: whilst Indexification is the most effective index tool I’ve used, neither I nor they can guarantee a 100% index rate. However, as I’ve mentioned, they do have one of the highest index rates out there.

Think about this: what’s the point of a backlink if it’s not indexed in Google? So what are you waiting for? ORDER NOW!

by: JLev
Created: —
Category: Link Building
Viewed: 226

Guest Post On DA65 Google, Bing, AOL News Approved Site With Index Backlink for $60

Price dropped from $80 to $25 for a limited time! Did you know that having a backlink from a Google News approved website is 1,000 times better than one from a normal website? I will give you a permanent DOFOLLOW backlink from my DA 65 tech news website via a sponsored/guest post. This is not a PBN site!

About our website:
- Domain age is 8 years
- Google, Bing and AOL News approved

What’s included in the gig?
- Article publishing with a permanent DOFOLLOW backlink.
- The post will be listed on the homepage of our site for a limited time, until new posts are published by our authors.
- Get the article indexed on Google News, Bing News, and AOL News.
- Articles that I publish will not have any sponsored labels and will look 100% natural.

If you are to provide us an article:
- It must be 100% unique
- It must have at least 300 words
- It can have a maximum of 3 links inside
- It must be technology-related

We do not give backlinks to adult websites. If you would like a backlink to a gambling site, the article must be technology-related, and keywords like gambling/casino in the title of the post will not be accepted. If you would like to know the domain URL, please inbox me.

Created: —
Category: Guest Posts
Viewed: 279