MySQL: return GROUP_CONCAT of IDs for groups of 3 or more records within a 1-minute interval

I currently have the dataset below. I need to return grouped IDs for records that fall within a 60-second range, keeping only groups with at least 3 records.

CREATE TABLE test (
  `id` bigint NOT NULL AUTO_INCREMENT,
  created_date TIMESTAMP(1) NOT NULL,
  origin_url VARCHAR(200) NOT NULL,
  client_session_id VARCHAR(50) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `UC_PRE_CHAT_TALKID_COL` (`id`)
);

INSERT INTO test VALUES
  (1, '2021-01-18 11:02:24.0', 'https://zendes.com/', 'znkjoc3gfth2c3m0t1klii'),
  (2, '2021-01-18 11:02:35.0', 'https://zendes.com/', 'znkjoc3gfth2c3m0t1klii'),
  (3, '2021-01-18 11:02:03.0', 'https://zendes.com/', 'znkjoc3gfth2c3m0t1klii'),
  (4, '2021-01-18 11:11:28.0', 'https://rarara.com/', 'znkjoc3gfth2c3m0t1klii'),
  (5, '2021-01-18 11:11:36.0', 'https://rarara.com/', 'znkjoc3gfth2c3m0t1klii'),
  (6, '2021-01-18 11:11:05.0', 'https://rarara.com/', 'znkjoc3gfth2c3m0t1klii');


The result should look something like this:

ids     origin_url              client_session_id
1,2,3   https://zendes.com/     znkjoc3gfth2c3m0t1klii
4,5,6   https://rarara.com/     znkjoc3gfth2c3m0t1klii
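Here is a minimal sketch for MySQL 8+. It assumes a "group" is one (origin_url, client_session_id) pair, as in the sample data, and keeps only groups of at least 3 rows whose timestamps all fall within a 60-second span:

SELECT GROUP_CONCAT(id ORDER BY id) AS ids,
       origin_url,
       client_session_id
FROM test
GROUP BY origin_url, client_session_id
HAVING COUNT(*) >= 3
   AND TIMESTAMPDIFF(SECOND, MIN(created_date), MAX(created_date)) <= 60;

Note that this checks the total span of each group rather than a sliding 60-second window; if a session can stretch past a minute, a self-join or window-function approach over created_date would be needed instead.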

Google Analytics wrongly records events when using Internet Explorer

I’m a web designer and came across an odd finding. On our website, we track our contact form submissions using Google Tag Manager & Google Analytics.

I noticed that every once in a while, the number of submissions recorded by Google Analytics is higher than the actual number of messages we received.

So, according to Analytics, we should have received 5 submissions last week – when in fact, we just received 2 messages. (From this page: https://avinton.com/services/avinton-data-platform/)

The tracking setup seems to be correct (and has been correctly tracking submissions for over a year). So, I did a lot of digging and finally found out that all those "ghost" submissions were visitors using Internet Explorer. In December, we even had 2 recorded form submissions in Analytics, originating from pages without ANY contact forms on them! (such as https://avinton.com/blog/2017/10/clustering/)

While Internet Explorer totally messes up the page’s CSS, I successfully received my own test message. So the form seems to work, at least in my IE version.

So, does that mean some version of Internet Explorer wrongly fires events? Or did visitors actually try to send us a message which didn’t get through? Any suggestions on what to look at next are greatly appreciated!

Why are my custom resource records in the Google DNS panel not working?

I bought a domain from domains.google.

Then I went into my panel, connected it to Google Sites, and then I went on to create custom DNS resource records for my subdomains.

Google Sites works perfectly. But my subdomains are not resolving.


[screenshot]


For example, I have added the records below. But when I query them with nslookup, or enter them in a browser, I get a non-existent domain error. I have waited for more than 72 hours now.

[screenshot]


What should I do?

Can differing A, CNAME, TXT, and NS records across multiple domain TLDs cause email deliverability issues?

I’m troubleshooting an issue where, after switching TLDs, internal and ESP-based emails are getting blocked when going to external customers. Could differing A, CNAME, TXT, and NS records cause email deliverability issues? Short of posting the actual differences, is there anything obvious to check before I look for other issues?

Sort Records in Original Order After UNION ALL

I have a source table including columns StartTime and EndTime, plus a couple of other columns, as you can see below.

My goal is to create a query that returns a dataset in which StartTime and EndTime are merged into a single Time column (so that each record appears twice, once with the StartTime as Time, once with the EndTime as Time).

I can easily achieve this with one subquery returning StartTime as the Time column, another subquery returning EndTime as the Time column, and then merging the results with UNION ALL. However, I can’t seem to figure out how to sort the combined dataset so that the order is

  • This record, StartTime as Time
  • This record, EndTime as Time
  • Next record, StartTime as Time
  • Next record, EndTime as Time

and so on.

Your help would be greatly appreciated!

Sample below – Thank you!

Source Table: (LogID is an auto-increment PK in no particular order, so it can’t be used for sorting)

LogID  Description         StartTime         EndTime
1      Travel to new site  2019-07-31 05:30  2019-07-31 06:30
2      Meeting             2019-07-31 06:30  2019-07-31 07:00
3      Presentation        2019-07-31 07:00  2019-07-31 07:30
4      Travel to new site  2019-07-31 07:30  2019-07-31 12:00
5      Setup               2019-07-31 12:00  2019-07-31 13:15

Desired Result:

LogID  Description         Time
1      Travel to new site  2019-07-31 05:30
1      Travel to new site  2019-07-31 06:30
2      Meeting             2019-07-31 06:30
2      Meeting             2019-07-31 07:00
3      Presentation        2019-07-31 07:00
3      Presentation        2019-07-31 07:30
4      Travel to new site  2019-07-31 07:30
4      Travel to new site  2019-07-31 12:00
5      Setup               2019-07-31 12:00
5      Setup               2019-07-31 13:15
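One way to get this order is to carry each record's original StartTime along as a sort key, plus a 0/1 flag so the StartTime copy sorts before the EndTime copy. A sketch, assuming the source table is named SourceTable (a hypothetical name, since the real one isn't given):

SELECT LogID, Description, Time
FROM (
    SELECT LogID, Description, StartTime AS SortKey, 0 AS Seq, StartTime AS Time
    FROM SourceTable
    UNION ALL
    SELECT LogID, Description, StartTime AS SortKey, 1 AS Seq, EndTime AS Time
    FROM SourceTable
) AS u
ORDER BY SortKey, Seq;

Because both copies of a record share its StartTime as SortKey, records keep their original start order, and the Seq flag breaks the tie so each StartTime row precedes its matching EndTime row.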

Filter records and find those with minimal date

I have the following tables in Microsoft SQL Server:

  1. Users: Includes uID (primary key), birthDate, name
  2. Conversations: Includes convID (primary key), hostID (foreign key for uID in Users)
  3. Participants: Includes uID (foreign key for uID in Users), convID (foreign key for convID in Conversations).

I need to write a query which finds the name, ID, and birth date of the oldest user who didn’t participate in any conversation and whose name contains the letter ‘b’. If more than one user qualifies, I need to return all of them.

I don’t know how to both filter the users and then find those with the minimal birth date (the oldest).
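A sketch for SQL Server, assuming "participated" means appearing in the Participants table: filter first, then use TOP 1 WITH TIES to return every user tied for the earliest birth date:

SELECT TOP 1 WITH TIES u.uID, u.name, u.birthDate
FROM Users AS u
WHERE u.name LIKE '%b%'
  AND NOT EXISTS (SELECT 1 FROM Participants AS p WHERE p.uID = u.uID)
ORDER BY u.birthDate;

If hosting a conversation should also count as participating, a similar NOT EXISTS check against Conversations.hostID would be needed.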

Reduce Count(*) time in Postgres for Many Records

Query:

EXPLAIN ANALYZE select count(*) from product; 

ROWS: 534965

EXPLANATION:

Finalize Aggregate  (cost=53840.85..53840.86 rows=1 width=8) (actual time=5014.774..5014.774 rows=1 loops=1)
  ->  Gather  (cost=53840.64..53840.85 rows=2 width=8) (actual time=5011.623..5015.480 rows=3 loops=1)
        Workers Planned: 2
        Workers Launched: 2
        ->  Partial Aggregate  (cost=52840.64..52840.65 rows=1 width=8) (actual time=4951.366..4951.367 rows=1 loops=3)
              ->  Parallel Seq Scan on product prod  (cost=0.00..52296.71 rows=217571 width=0) (actual time=0.511..4906.569 rows=178088 loops=3)
Planning Time: 34.814 ms
Execution Time: 5015.580 ms

How can we optimize the above query so the count comes back quickly?

This is a simple query; however, variations of it can include different WHERE conditions and joins with other tables.
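If an approximate count is acceptable for the unfiltered case, PostgreSQL already stores a row estimate in the catalog, kept reasonably fresh by autovacuum/ANALYZE. A sketch:

SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'product';

Exact counts, including the filtered and joined variants, still have to visit the qualifying rows; there the usual levers are a suitable index (enabling an index-only scan on a well-vacuumed table) and selective WHERE conditions, rather than the count itself.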

Evenly select Records on categorical column with Repeating Sequence and pagination in Postgres

Database: Postgres
I have a product(id, title, source, ...) table which contains almost 500K records. An example of data is:

| Id  | title      | source  |
|:----|-----------:|:-------:|
| 1   | product1   | source1 |
| 2   | product2   | source1 |
| 3   | product3   | source1 |
| 4   | product4   | source1 |
| .   | ........   | source1 |
| .   | ........   | source2 |
| x   | productx   | source2 |
| x+n | productX+n | sourceN |

There are 5 distinct source values, and rows with different source values appear in random order throughout the table.

I need paginated results in which the products are evenly distributed across sources, in a repeating sequence: if I select 20 products, the result set should contain 2 products from each source up to the last source, then the next 2 products from each source, and so on. For example:

| #  | title     | source  |
|:---|----------:|:-------:|
| 1  | product1  | source1 |
| 2  | product2  | source1 |
| 3  | product3  | source2 |
| 4  | product4  | source2 |
| 5  | product5  | source3 |
| 6  | product6  | source3 |
| 7  | product7  | source4 |
| 8  | product8  | source4 |
| 9  | product9  | source5 |
| 10 | product10 | source5 |
| 11 | ........  | source1 |
| 12 | ........  | source1 |
| 13 | ........  | source2 |
| 14 | ........  | source2 |
| .. | ........  | ....... |
| 20 | ........  | source5 |

What is an efficient PostgreSQL query for this scenario, keeping in mind LIMIT/OFFSET pagination and the fact that the number of sources can increase or decrease?
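One approach is to number the rows within each source with a window function, then sort by "block of two": every pair of rows per source forms a block, blocks rotate through the sources, and LIMIT/OFFSET paginate the resulting sequence. A sketch, assuming rows should come in id order within each source:

SELECT id, title, source
FROM (
    SELECT id, title, source,
           ROW_NUMBER() OVER (PARTITION BY source ORDER BY id) AS rn
    FROM product
) AS t
ORDER BY (rn - 1) / 2,   -- which block of 2 this row falls into within its source
         source,         -- rotate through the sources inside each block
         rn              -- keep the 2 rows of a block in order
LIMIT 20 OFFSET 0;

Since the ordering is derived from the data itself, the pattern adapts automatically when sources are added or removed; once a source runs out of rows, the remaining sources simply continue the rotation.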