100% Automated Hotel, Flight & Rental Car Site – Hugely Profitable – Easy to Manage – Newbie Friendly!

Amazing travel booking website – get this profitable hotel search engine website in a fully automated, hugely profitable niche. It’s a ready-to-go online business that lets you make money, and no technical knowledge is required.

(Key Features)

  • Hugely Profitable Website.
  • Powerful Travel Booking Niche.
  • Fully Automated Website.
  • Easy to use & No Maintenance Required.
  • A Complete Online Money Making Business.
  • No Technical Knowledge or Experience Required.
  • It’s ready to…

Fully Automated Games Tips Website – Hugely Profitable Niche – Ads Ready – No Experience Needed!

This is a 100% automated games tips website in a hugely profitable niche – a ready-to-go online business that lets you make money. No technical knowledge is required.

(Top Features)

  • Professional Design, SEO Friendly Website, Based on WordPress.
  • Keyword Rich, Short, Catchy Domain.
  • Great Profitable Niche.
  • Easy to manage and operate.
  • Fully Automated Website.
  • A Complete Online Money Making Business.
  • No…

Amazing Career Tips Website – Hugely Profitable Niche – 100% Automated – No Experience Needed!

This is a 100% automated career tips website in a hugely profitable niche – a ready-to-go online business that lets you make money. No technical knowledge is required.

(Top Features)

  • Professional Design, SEO Friendly Website, Based on WordPress.
  • Keyword Rich, Short, Catchy Domain.
  • Great Profitable Niche.
  • Easy to manage and operate.
  • Fully Automated Website.
  • A Complete Online Money Making Business.
  • No…

Efficiently DROP a huge table with cascading replication in PostgreSQL

What I have:

Database: PostgreSQL 9.3

Table T:

  • Structure: 10 integers/bools and 1 text field
  • Size: Table 89 GB / Toast 1046 GB
  • Usage: about 10 inserts / minute
  • Other: reltuples 59913608 / relpages 11681783

Running cascading replication: Master -> Slave 1 -> Slave 2

  • Replication Master -> Slave 1 is quite fast, a good channel.
  • Replication Slave 1 -> Slave 2 is slow, cross-continent, about 10 Mbit/s.

This is a live production database holding about 1.5 TB of other data.


What needs to be done:

  • Drop all the data and start with a fresh setup (with regular cleanups from then on, so the table never grows this big again).

Question: What would be the most efficient way to achieve this:

  • without causing huge lags between Master and Slave 1
  • without Slave 2 falling so far behind that it can never catch up

As I see it:

  • Safe way – make a copy, swap the tables, then DELETE the old data in batches while constantly watching the lag (sketched below)
  • Other way – make a copy, swap the tables, then DROP the old table – but wouldn’t this push an enormous amount of data through replication at once and leave Slave 2 lost?
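
A minimal sketch of the safe way, assuming the table is simply named t (the t_old/t_new names and the 10000-row batch size are mine, purely illustrative):

-- Swap an empty copy into place so new inserts go to the fresh table
BEGIN;
CREATE TABLE t_new (LIKE t INCLUDING ALL);
ALTER TABLE t RENAME TO t_old;
ALTER TABLE t_new RENAME TO t;
COMMIT;

-- Delete old rows in small batches, pausing between batches;
-- repeat until t_old is empty
DELETE FROM t_old
WHERE ctid = ANY (ARRAY(SELECT ctid FROM t_old LIMIT 10000));

-- Watch replication lag after each batch (9.3 catalog names):
-- run on the Master to see Slave 1; on Slave 1, substitute
-- pg_last_xlog_replay_location() for pg_current_xlog_location()
-- to see Slave 2
SELECT client_addr,
       pg_xlog_location_diff(pg_current_xlog_location(),
                             replay_location) AS lag_bytes
FROM pg_stat_replication;

The DELETEs still leave dead tuples behind, so t_old would need a final VACUUM, or a DROP once both slaves have caught up.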

Automated Travel Website – Huge Buy It Now Bonuses

This is a complete, professional travel website that lets users search and book flights and accommodation for trips anywhere in the world. In numbers, it brings together over 5 million hotels in 50,000 cities with 1 million reviews, more than 600 airlines, and a variety of car rental companies on one website.

This is a perfect opportunity to own a strong online business for a passive income.

How Do You Make Money?

Revenue share of up to 80% on each sale. The…

PostgreSQL 13 – Improve huge table data aggregation

I have a huge database (currently ~900 GB, and new data is still coming in), partitioned by year_month and subpartitioned by currency. The problem is that when I fetch an aggregation over a whole partition, it is slow. This feeds a report, so it will be queried very often. The partition I want to aggregate currently holds 7,829,230 rows, and each subpartition will be of similar size. Table schema (anonymized):

-- auto-generated definition
CREATE TABLE aggregates_dates
(
    currency CHAR(3)                                    NOT NULL,
    id       uuid            DEFAULT uuid_generate_v4() NOT NULL,
    date     TIMESTAMP(0)                               NOT NULL,
    field01  INTEGER                                    NOT NULL,
    field02  INTEGER                                    NOT NULL,
    field03  INTEGER                                    NOT NULL,
    field04  INTEGER                                    NOT NULL,
    field05  INTEGER                                    NOT NULL,
    field06  CHAR(2)                                    NOT NULL,
    field07  INTEGER         DEFAULT 0                  NOT NULL,
    field08  INTEGER         DEFAULT 0                  NOT NULL,
    field09  INTEGER         DEFAULT 0                  NOT NULL,
    field10  INTEGER         DEFAULT 0                  NOT NULL,
    field11  INTEGER         DEFAULT 0                  NOT NULL,
    value01  INTEGER         DEFAULT 0                  NOT NULL,
    value02  INTEGER         DEFAULT 0                  NOT NULL,
    value03  INTEGER         DEFAULT 0                  NOT NULL,
    value04  NUMERIC(24, 12) DEFAULT '0'::NUMERIC       NOT NULL,
    value05  NUMERIC(24, 12) DEFAULT '0'::NUMERIC       NOT NULL,
    value06  INTEGER         DEFAULT 0                  NOT NULL,
    value07  NUMERIC(24, 12) DEFAULT '0'::NUMERIC       NOT NULL,
    value08  NUMERIC(24, 12) DEFAULT '0'::NUMERIC       NOT NULL,
    value09  INTEGER         DEFAULT 0                  NOT NULL,
    value10  NUMERIC(24, 12) DEFAULT '0'::NUMERIC       NOT NULL,
    value11  NUMERIC(24, 12) DEFAULT '0'::NUMERIC       NOT NULL,
    value12  INTEGER         DEFAULT 0                  NOT NULL,
    value13  NUMERIC(24, 12) DEFAULT '0'::NUMERIC       NOT NULL,
    value14  NUMERIC(24, 12) DEFAULT '0'::NUMERIC       NOT NULL,
    value15  INTEGER         DEFAULT 0                  NOT NULL,
    value16  NUMERIC(24, 12) DEFAULT '0'::NUMERIC       NOT NULL,
    value17  NUMERIC(24, 12) DEFAULT '0'::NUMERIC       NOT NULL,
    value18  NUMERIC(24, 12) DEFAULT '0'::NUMERIC       NOT NULL,
    value19  INTEGER         DEFAULT 0,
    value20  INTEGER         DEFAULT 0,
    CONSTRAINT aggregates_dates_pkey
        PRIMARY KEY (id, date, currency)
)
    PARTITION BY RANGE (date);

CREATE TABLE aggregates_dates_2020_01
    PARTITION OF aggregates_dates
        (
            CONSTRAINT aggregates_dates_2020_01_pkey
                PRIMARY KEY (id, date, currency)
        )
        FOR VALUES FROM ('2020-01-01 00:00:00') TO ('2020-01-31 23:59:59')
    PARTITION BY LIST (currency);

CREATE TABLE aggregates_dates_2020_01_eur
    PARTITION OF aggregates_dates_2020_01
        (
            CONSTRAINT aggregates_dates_2020_01_eur_pkey
                PRIMARY KEY (id, date, currency)
        )
        FOR VALUES IN ('EUR');

CREATE INDEX aggregates_dates_2020_01_eur_date_idx ON aggregates_dates_2020_01_eur (date);
CREATE INDEX aggregates_dates_2020_01_eur_field01_idx ON aggregates_dates_2020_01_eur (field01);
CREATE INDEX aggregates_dates_2020_01_eur_field02_idx ON aggregates_dates_2020_01_eur (field02);
CREATE INDEX aggregates_dates_2020_01_eur_field03_idx ON aggregates_dates_2020_01_eur (field03);
CREATE INDEX aggregates_dates_2020_01_eur_field04_idx ON aggregates_dates_2020_01_eur (field04);
CREATE INDEX aggregates_dates_2020_01_eur_field06_idx ON aggregates_dates_2020_01_eur (field06);
CREATE INDEX aggregates_dates_2020_01_eur_currency_idx ON aggregates_dates_2020_01_eur (currency);
CREATE INDEX aggregates_dates_2020_01_eur_field09_idx ON aggregates_dates_2020_01_eur (field09);
CREATE INDEX aggregates_dates_2020_01_eur_field10_idx ON aggregates_dates_2020_01_eur (field10);
CREATE INDEX aggregates_dates_2020_01_eur_field11_idx ON aggregates_dates_2020_01_eur (field11);
CREATE INDEX aggregates_dates_2020_01_eur_field05_idx ON aggregates_dates_2020_01_eur (field05);
CREATE INDEX aggregates_dates_2020_01_eur_field07_idx ON aggregates_dates_2020_01_eur (field07);
CREATE INDEX aggregates_dates_2020_01_eur_field08_idx ON aggregates_dates_2020_01_eur (field08);

Example query (not all fields are used) that aggregates a whole partition. The query can carry many more WHERE conditions, but this one is the worst case:

EXPLAIN (ANALYSE, BUFFERS, VERBOSE)
SELECT COALESCE(SUM(mainTable.value01), 0) AS "value01",
       COALESCE(SUM(mainTable.value02), 0) AS "value02",
       COALESCE(SUM(mainTable.value03), 0) AS "value03",
       COALESCE(SUM(mainTable.value06), 0) AS "value06",
       COALESCE(SUM(mainTable.value09), 0) AS "value09",
       COALESCE(SUM(mainTable.value12), 0) AS "value12",
       COALESCE(SUM(mainTable.value15), 0) AS "value15",
       COALESCE(SUM(mainTable.value03 + mainTable.value06 + mainTable.value09 +
                    mainTable.value12 + mainTable.value15), 0) AS "kpi01",
       COALESCE(SUM(mainTable.value05) * 1, 0) "value05",
       COALESCE(SUM(mainTable.value08) * 1, 0) "value08",
       COALESCE(SUM(mainTable.value11) * 1, 0) "value11",
       COALESCE(SUM(mainTable.value14) * 1, 0) "value14",
       COALESCE(SUM(mainTable.value17) * 1, 0) "value17",
       COALESCE(SUM(mainTable.value05 + mainTable.value08 + mainTable.value11 +
                    mainTable.value14 + mainTable.value17) * 1, 0) "kpi02",
       CASE
           WHEN SUM(mainTable.value02) > 0
               THEN (1.0 * SUM(mainTable.value05 + mainTable.value08 + mainTable.value11 +
                               mainTable.value14 + mainTable.value17)
                     / SUM(mainTable.value02) * 1000 * 1)
           ELSE 0 END "kpiEpm",
       CASE
           WHEN SUM(mainTable.value01) > 0
               THEN (1.0 * SUM(mainTable.value05 + mainTable.value08 + mainTable.value11 +
                               mainTable.value14)
                     / SUM(mainTable.value01) * 1)
           ELSE 0 END
FROM performance mainTable
WHERE (mainTable.date BETWEEN '2020-01-01 00:00:00' AND '2020-02-01 00:00:00')
  AND (mainTable.currency = 'EUR')
GROUP BY mainTable.field02;

EXPLAIN:

QUERY PLAN
HashAggregate  (cost=3748444.51..3748502.07 rows=794 width=324) (actual time=10339.771..10340.497 rows=438 loops=1)
  Group Key: maintable.field02
  Batches: 1  Memory Usage: 1065kB
  Buffers: shared hit=2445343
  ->  Append  (cost=0.00..2706608.65 rows=11575954 width=47) (actual time=212.934..4549.921 rows=7829230 loops=1)
        Buffers: shared hit=2445343
        ->  Seq Scan on performance_2020_01 maintable_1  (cost=0.00..2646928.38 rows=11570479 width=47) (actual time=212.933..4055.104 rows=7823923 loops=1)
              Filter: ((date >= '2020-01-01 00:00:00'::timestamp without time zone) AND (date <= '2020-02-01 00:00:00'::timestamp without time zone) AND (currency = 'EUR'::bpchar))
              Buffers: shared hit=2444445
        ->  Index Scan using performance_2020_02_date_idx on performance_2020_02 maintable_2  (cost=0.56..1800.50 rows=5475 width=47) (actual time=0.036..6.476 rows=5307 loops=1)
              Index Cond: ((date >= '2020-01-01 00:00:00'::timestamp without time zone) AND (date <= '2020-02-01 00:00:00'::timestamp without time zone))
              Filter: (currency = 'EUR'::bpchar)
              Rows Removed by Filter: 31842
              Buffers: shared hit=898
Planning Time: 0.740 ms
JIT:
  Functions: 15
  Options: Inlining true, Optimization true, Expressions true, Deforming true
  Timing: Generation 4.954 ms, Inlining 14.249 ms, Optimization 121.115 ms, Emission 77.181 ms, Total 217.498 ms
Execution Time: 10345.662 ms
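
One detail visible in the plan (a side observation, not a fix for the sequential scan itself): BETWEEN is inclusive on both ends, so the '2020-02-01 00:00:00' upper bound also drags the February partition into the plan for that single boundary instant. A half-open range would let the planner prune it; a minimal sketch using the same anonymized names:

-- Half-open date range: performance_2020_02 can be pruned entirely
SELECT COALESCE(SUM(mainTable.value01), 0) AS "value01"
FROM performance mainTable
WHERE mainTable.date >= '2020-01-01 00:00:00'
  AND mainTable.date <  '2020-02-01 00:00:00'
  AND mainTable.currency = 'EUR'
GROUP BY mainTable.field02;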

Server spec:

  • AMD, 64 threads
  • 315 GB RAM
  • 6x SSD in RAID 10

Postgres config:

postgresql_autovacuum_vacuum_scale_factor: 0.4
postgresql_checkpoint_completion_target: 0.9
postgresql_checkpoint_timeout: 10min
postgresql_effective_cache_size: 240GB
postgresql_maintenance_work_mem: 2GB
postgresql_random_page_cost: 1.0
postgresql_shared_buffers: 80GB
postgresql_synchronous_commit: local
postgresql_work_mem: 1GB

Google change of address tool from .co.uk to .com resulted in huge derank

In December 2020 I moved a site from a .co.uk domain to a .com. No other changes were made – just a simple 301 redirect and use of the Change of Address tool in Google Webmaster Tools. The site had existed since 2006 and was the de facto site in its vertical, meaning I had plenty of P1s on Google for medium-tail keywords.

Now, 2.5 months later, I am still hugely deranked for tens of thousands of keywords, and as a result I’ve dropped 40% of my traffic.

I was wondering whether anyone had any advice. Obviously this is a huge issue for me, as it’s my livelihood, built up over 15 years of hard work. I followed Google’s directions to the letter (my background is actually web dev and SEO) and it has still deranked me.

I’m not sure of the linking policy here, but the site is tyrereviews dot com (formerly tyrereviews dot co dot uk). For example, we used to rank 1 or 2 in Google UK for the search "michelin primacy 4", but we are now on page 2.

The new .com domain is also geotargeted to the UK in Webmaster Tools.

The same is true for many, many medium-tail keywords.

Given their huge variety, why is it so often concluded that the penalties needed to use a Weapon of Legacy are never worth it?

A common trend when discussing Weapons of Legacy is to compare their benefits to those of normal magic items, notice that they roughly match up, and then conclude that, because Weapons of Legacy carry extra penalties on top, they are clearly inferior to any comparable normal magic item. However, some of the benefits of some Weapons of Legacy are all but unique (for their level, at least), so how is it so frequently and easily concluded that the extra penalties outweigh these unique benefits?

The linked assessment does a fairly good job of judging the value and rarity of those benefits, but despite how often I’ve seen the claim that the penalties of Weapons of Legacy always outweigh their benefits, I’ve never seen any comparable analysis of the penalties themselves, which are quite varied. What property of these penalties is so severe that it lets Weapons of Legacy be dismissed without further analysis, despite that variety?

How much damage does an Enlarged Huge Gray Ooze deal with its pseudopod?

The stat block for the Huge Gray Ooze indicates that on a hit its pseudopod deals:

6d6 acid damage plus 2d6 acid damage, or 12d6 acid damage while the ooze is enlarged.

Looking at the smaller gray ooze’s stat block, it seems that the additional 2d6 damage is possibly an error. Furthermore, the source material for the creature seems to indicate that the additional 2d6 is not intended either.
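
A quick check of the averages (my arithmetic, assuming the Enlarge feature simply doubles the listed base dice):

E[6d6] = 6 × 3.5 = 21
E[8d6] = 8 × 3.5 = 28
E[12d6] = 12 × 3.5 = 42

The listed 12d6 enlarged damage is exactly double 6d6, not double 8d6, which would fit the reading that the extra 2d6 is an error.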

How much damage is the Huge Gray Ooze supposed to deal when it hits with its pseudopod? Is it effectively 8d6 acid or 6d6? And what about when it uses the Enlarge feature on itself, which doubles the pseudopod’s damage dice?