Google Analytics wrongly records events when using Internet Explorer

I’m a web designer and came across an odd finding. On our website, we track our contact form submissions using Google Tag Manager & Google Analytics.

I noticed that every once in a while the number of submissions recorded by Google Analytics is higher than the actual number of messages we received.

So, according to Analytics, we should have received 5 submissions last week, when in fact we received just 2 messages. (From this page: https://avinton.com/services/avinton-data-platform/)

The tracking setup seems to be correct (and has been correctly tracking submissions for over a year). So, I did a lot of digging and finally found out that all those "ghost" submissions were visitors using Internet Explorer. In December, we even had 2 recorded form submissions in Analytics, originating from pages without ANY contact forms on them! (such as https://avinton.com/blog/2017/10/clustering/)

While Internet Explorer totally messes up the page’s CSS, I successfully received my own test message. So the form seems to work, at least in my IE version.

So, does that mean some version of Internet Explorer wrongly fires events? Or did visitors actually try to send us a message which didn’t get through? Any suggestions on what to look at next are greatly appreciated!

Why are custom resource records in the Google DNS panel not working?

I bought a domain from domains.google.

Then I went into my panel, connected it to Google Sites, and then I went on to create custom DNS resource records for my subdomains.

Google Sites works perfectly. But my subdomains are not resolving.


[screenshot: the custom resource records added in the Google Domains panel]


For example, I have added these records. But when I use nslookup to query them, or when I enter them in the browser, I get a non-existent domain error. I have waited for more than 72 hours now.

[screenshot: the nslookup results showing the non-existent domain error]


What should I do?

Can differing A, CNAME, TXT, and NS records across multiple TLDs cause email deliverability issues?

I’m troubleshooting an issue where, after switching TLDs, internal and ESP-based emails are getting blocked when going to external customers. Could different A, CNAME, TXT, and NS records cause email deliverability issues? Short of posting the actual differences, is there anything obvious to check before I look for other issues?

Sort Records in Original Order After UNION ALL

I have a source table including columns StartTime and EndTime, plus a couple of other columns, as you can see below.

My goal is to create a query that returns a dataset in which StartTime and EndTime are merged into a single Time column (so that each record appears twice, once with the StartTime as Time, once with the EndTime as Time).

I can easily achieve this with one subquery returning StartTime as the Time column, another subquery returning EndTime as the Time column, and then merging the results with UNION ALL. However, I can’t figure out how to sort the combined dataset so that the order is

  • This record, StartTime as Time
  • This record, EndTime as Time
  • Next record, StartTime as Time
  • Next record, EndTime as Time

and so on……

Your help would be greatly appreciated!

Sample below – Thank you!

Source Table: (LogID is an auto-increment PK in no particular order, so it can’t be used for sorting)

| LogID | Description        | StartTime        | EndTime          |
|-------|--------------------|------------------|------------------|
| 1     | Travel to new site | 2019-07-31 05:30 | 2019-07-31 06:30 |
| 2     | Meeting            | 2019-07-31 06:30 | 2019-07-31 07:00 |
| 3     | Presentation       | 2019-07-31 07:00 | 2019-07-31 07:30 |
| 4     | Travel to new site | 2019-07-31 07:30 | 2019-07-31 12:00 |
| 5     | Setup              | 2019-07-31 12:00 | 2019-07-31 13:15 |

Desired Result:

| LogID | Description        | Time             |
|-------|--------------------|------------------|
| 1     | Travel to new site | 2019-07-31 05:30 |
| 1     | Travel to new site | 2019-07-31 06:30 |
| 2     | Meeting            | 2019-07-31 06:30 |
| 2     | Meeting            | 2019-07-31 07:00 |
| 3     | Presentation       | 2019-07-31 07:00 |
| 3     | Presentation       | 2019-07-31 07:30 |
| 4     | Travel to new site | 2019-07-31 07:30 |
| 4     | Travel to new site | 2019-07-31 12:00 |
| 5     | Setup              | 2019-07-31 12:00 |
| 5     | Setup              | 2019-07-31 13:15 |

Filter records and find those with minimal date

I have the following tables in Microsoft SQL Server:

  1. Users: Includes uID (primary key), birthDate, name
  2. Conversations: Includes convID (primary key), hostID (foreign key for uID in Users)
  3. Participants: Includes uID (foreign key for uID in Users), convID (foreign key for convID in Conversations).

I need to write a query which finds the name, ID, and birth date of the oldest user who didn’t participate in any conversation and whose name contains the letter ‘b’. If there is more than one such user, I need to return all of them.

I don’t know how to both filter the users and then find those with the minimal birth date (the oldest).
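To make this concrete, here is a small stand-in setup (SQLite instead of SQL Server, with made-up sample data) showing the filtering half that I do know how to write; what I’m missing is going from `filtered` to only the rows with the minimal birthDate:

```python
import sqlite3

# SQLite stand-in for the SQL Server schema described above; data is made up.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Users (uID INTEGER PRIMARY KEY, birthDate TEXT, name TEXT);
    CREATE TABLE Conversations (convID INTEGER PRIMARY KEY,
                                hostID INTEGER REFERENCES Users(uID));
    CREATE TABLE Participants (uID INTEGER REFERENCES Users(uID),
                               convID INTEGER REFERENCES Conversations(convID));

    INSERT INTO Users VALUES (1, '1960-01-01', 'Bob'),
                             (2, '1955-06-15', 'Abe'),
                             (3, '1955-06-15', 'Barb'),
                             (4, '1990-03-02', 'Ben');
    INSERT INTO Conversations VALUES (10, 1);
    INSERT INTO Participants VALUES (4, 10);
""")

# The filtering part: users whose name contains 'b' and who are not
# in Participants. Ben is excluded because he participates.
filtered = con.execute("""
    SELECT uID, name, birthDate
    FROM Users u
    WHERE u.name LIKE '%b%'
      AND NOT EXISTS (SELECT 1 FROM Participants p WHERE p.uID = u.uID)
""").fetchall()
```

In this sample both Abe and Barb share the minimal birth date, which is exactly the more-than-one-user case I need to handle.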

Reduce Count(*) time in Postgres for Many Records

Query:

EXPLAIN ANALYZE select count(*) from product; 

ROWS: 534965

EXPLANATION:

Finalize Aggregate  (cost=53840.85..53840.86 rows=1 width=8) (actual time=5014.774..5014.774 rows=1 loops=1)
  ->  Gather  (cost=53840.64..53840.85 rows=2 width=8) (actual time=5011.623..5015.480 rows=3 loops=1)
        Workers Planned: 2
        Workers Launched: 2
        ->  Partial Aggregate  (cost=52840.64..52840.65 rows=1 width=8) (actual time=4951.366..4951.367 rows=1 loops=3)
              ->  Parallel Seq Scan on product prod  (cost=0.00..52296.71 rows=217571 width=0) (actual time=0.511..4906.569 rows=178088 loops=3)
Planning Time: 34.814 ms
Execution Time: 5015.580 ms

How can we optimize the above query so that the count comes back quickly?

This is a simple query; however, its variations can include different WHERE conditions and joins with other tables.
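To illustrate what I mean by variations, here is a toy reproduction (SQLite stand-in; the `category` column and `orders` table are made up for illustration, and the real table has ~535K rows):

```python
import sqlite3

# Tiny stand-in for the real Postgres product table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, category TEXT)")
con.executemany(
    "INSERT INTO product (category) VALUES (?)",
    [("a" if i % 2 else "b",) for i in range(1000)],
)
con.execute("""CREATE TABLE orders (id INTEGER PRIMARY KEY,
                                    product_id INTEGER REFERENCES product(id))""")
con.executemany("INSERT INTO orders (product_id) VALUES (?)",
                [(i,) for i in range(1, 101)])

# The basic count, plus the kinds of variations in question:
total = con.execute("SELECT count(*) FROM product").fetchone()[0]

# ...with a condition:
cat_a = con.execute(
    "SELECT count(*) FROM product WHERE category = 'a'"
).fetchone()[0]

# ...with a join to another table:
joined = con.execute("""
    SELECT count(*)
    FROM product p
    JOIN orders o ON o.product_id = p.id
""").fetchall()[0][0]
```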

Evenly select Records on categorical column with Repeating Sequence and pagination in Postgres

Database: Postgres
I have a product(id, title, source, ...) table which contains almost 500K records. An example of data is:

| Id  | title      | source  |
|:----|-----------:|:-------:|
| 1   | product1   | source1 |
| 2   | product2   | source1 |
| 3   | product3   | source1 |
| 4   | product4   | source1 |
| .   | ........   | source1 |
| .   | ........   | source2 |
| x   | productx   | source2 |
| x+n | productX+n | sourceN |

There are 5 distinct source values, and they are distributed randomly across the records.

I need paginated results in such a way that if I select 20 products, the result set contains products equally distributed across the sources, in a repeating sequence: 2 products from each source up to the last source, then the next 2 products from each source, and so on. For example:

| #  | title     | source  |
|:---|----------:|:-------:|
| 1  | product1  | source1 |
| 2  | product2  | source1 |
| 3  | product3  | source2 |
| 4  | product4  | source2 |
| 5  | product5  | source3 |
| 6  | product6  | source3 |
| 7  | product7  | source4 |
| 8  | product8  | source4 |
| 9  | product9  | source5 |
| 10 | product10 | source5 |
| 11 | ........  | source1 |
| 12 | ........  | source1 |
| 13 | ........  | source2 |
| 14 | ........  | source2 |
| .. | ........  | ....... |
| 20 | ........  | source5 |

What is an optimized PostgreSQL query to achieve the above, bearing in mind LIMIT/OFFSET pagination and that the number of sources can increase or decrease?
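To make the target ordering concrete, this small script reproduces the pattern above on toy data using `row_number()` per source, which is the direction I have been experimenting with; I’m not sure it is efficient for 500K rows with LIMIT/OFFSET (SQLite stand-in, needs SQLite 3.25+ for window functions):

```python
import sqlite3

# Toy stand-in: 30 products spread over 5 sources, inserted interleaved.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, title TEXT, source TEXT)")
con.executemany(
    "INSERT INTO product (title, source) VALUES (?, ?)",
    [(f"p{s}_{k}", f"source{s}") for k in range(1, 7) for s in range(1, 6)],
)

# Number rows within each source, then order by "round" of 2 and source:
# round 0 = rn 1-2 of every source, round 1 = rn 3-4, and so on.
rows = con.execute("""
    SELECT title, source
    FROM (
        SELECT title, source,
               row_number() OVER (PARTITION BY source ORDER BY id) AS rn
        FROM product
    ) AS t
    ORDER BY (rn - 1) / 2, source, rn
    LIMIT 20
""").fetchall()
```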

Can switching from a .com TLD to a non-.com TLD affect email even if MX records migrated correctly?

I have a high-level question about a customer that switched from a .com TLD to a .fun TLD. They didn’t switch hosting, only their TLD. They’re now having issues with their emails, personal and campaign-based, being blocked.

Before I dig in to the technical stuff, I wanted to know if anyone had a similar issue. Before I dig in to MX records and such, I didn’t know if there were known TLD issues with ones like .fun, etc. Sorry if this is a vague question. And I promise I’ve been googling and asking first!

How to delete all records which are not referenced from other tables

I have a table to which a bunch of other tables have an FK reference. Is there any way of deleting records from this table only if they are not being referenced?

I know that I can left join the referencing table and check for null, but I have about 10 tables (more will be added) with FKs referencing this table, so it would be cool to have a generic way of doing it.

There are often no more than a handful of records I need to remove. I suppose I could loop, try to remove each record individually, and protect each deletion with BEGIN/EXCEPTION, but that is an ugly concept.

Does this kind of functionality exist in Postgres? Kind of a soft delete, or delete-if-allowed.
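The check-for-no-references approach I mentioned looks like this with just two referencing tables (SQLite stand-in with made-up table names; I wrote it with NOT EXISTS, which is equivalent to the left-join-and-check-for-null form). Every new FK table means another clause, which is exactly what I’d like to avoid:

```python
import sqlite3

# Stand-in schema: one parent table, two of the ~10 referencing tables.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child_a (id INTEGER PRIMARY KEY,
                          parent_id INTEGER REFERENCES parent(id));
    CREATE TABLE child_b (id INTEGER PRIMARY KEY,
                          parent_id INTEGER REFERENCES parent(id));

    INSERT INTO parent (id) VALUES (1), (2), (3);
    INSERT INTO child_a (parent_id) VALUES (1);
    INSERT INTO child_b (parent_id) VALUES (2);
""")

# Delete only parents that no child table references; parent 3 is the
# only unreferenced row, so only it is removed.
con.execute("""
    DELETE FROM parent
    WHERE NOT EXISTS (SELECT 1 FROM child_a WHERE child_a.parent_id = parent.id)
      AND NOT EXISTS (SELECT 1 FROM child_b WHERE child_b.parent_id = parent.id)
""")
remaining = [r[0] for r in con.execute("SELECT id FROM parent ORDER BY id")]
```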

Is there a better way of displaying ‘Count’ of records beside other columns in a select query?

I have a table with below structure :

Test1(c_num number, c_first_name varchar2(50), c_last_name varchar2(50))

1) There is a normal index on the c_num column.

2) The table has nearly 5 million records.

I have a procedure, shown below, in which I display Count(*) along with the other columns. I want to know if there are better ways of doing this so that we get better performance.

create or replace procedure get_members(o_result out sys_refcursor)
is
begin
  open o_result for
    select c_num,
           c_first_name,
           c_last_name,
           (select count(*) from test1) members_cnt  --> Is there a better way instead of doing this?
      from test1;
end;
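For clarity, the scalar subquery simply repeats the full-table count on every row of the cursor, as this tiny stand-in demonstrates (SQLite instead of Oracle, made-up data):

```python
import sqlite3

# Tiny stand-in for the 5-million-row Oracle table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test1 (c_num INTEGER, c_first_name TEXT, c_last_name TEXT)")
con.executemany("INSERT INTO test1 VALUES (?, ?, ?)",
                [(1, "Ann", "Lee"), (2, "Bo", "Kim"), (3, "Cy", "Doe")])

# Same shape as the ref cursor result: every row carries the total count.
rows = con.execute("""
    SELECT c_num, c_first_name, c_last_name,
           (SELECT count(*) FROM test1) AS members_cnt
    FROM test1
""").fetchall()
```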

Thanks in advance