Generic update of “normal” columns with data from JSON

I am trying to do a "generic" insert/update of table columns using JSON. There are tons of info how to update a JSON column, but I have found none that updates several normal columns in a table generic.

Inserting columns with JSON data works

This works as I understand it in a generic way:

INSERT INTO inputtable
SELECT *
FROM json_populate_record(
    NULL::inputtable,
    '{
      "id": "0",
      "name": "orkb type foo examples tutorials orkb",
      "sum": 5743,
      "float_col": 94.55681687716474
    }'
);

Fiddle: https://www.db-fiddle.com/f/eaQG8H4yqY9hnQBZjzJgz/29

Updating columns with JSON data?

There are examples of "non-generic" updates using JSON, but I am searching for a "generic" solution. In my dreams it would work like this pseudo code:

with data as (
    select *
    from json_each_text(
        '{
          "id": "0",
          "name": "orkb type foo examples tutorials orkb",
          "sum": 5743,
          "float_col": 95
        }')
)
update inputtable
set = (select * from data)
where id = 0
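For reference, a non-generic variant along the lines of the examples I have found would look roughly like this. This is only a sketch against the table from the fiddle above, and the columns still have to be listed by hand, which is exactly what I want to avoid:

-- Non-generic sketch: every target column must be spelled out.
UPDATE inputtable t
SET name      = j.name,
    sum       = j.sum,
    float_col = j.float_col
FROM json_populate_record(
    NULL::inputtable,
    '{
      "id": "0",
      "name": "orkb type foo examples tutorials orkb",
      "sum": 5743,
      "float_col": 95
    }'
) AS j
WHERE t.id = j.id;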

Is this possible to do? If it is, how do I do this?

Non-clustered indexing: combined index or single indexes for multiple search columns

I am new to DB design. I have read up on non-clustered indexing and know about combined (composite) non-clustered indexes, but in my scenario I have a user table that is searched on multiple columns.

The table columns are as follows: userName, fatherName, empId, cardNumber, and so on. (None of these 4 columns is the primary key.)

I have around 50 million records, so searches take a huge amount of time, and because of that I am thinking of creating indexes on the table. All 4 search criteria are optional: a user may fill in all four, or maybe only one or two of them. I am confused about what to do. If I create all four as separate indexes plus one combined index over all 4, it might hurt performance while entering/inserting data. I did try a single combined index over all 4 columns, but when I search by card number alone it still takes a huge amount of time. Should I create all four separate indexes, and would those also work for a combined search on empId and cardNumber?

If I create separate ones, what about combined searches? If I create every combination, that is 15 indexes, which is obviously not good for insertion… I have a lot of confusion here; can anyone help? Thanks in advance.
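To make the options concrete, here is roughly what I am weighing; the table and index names below are just placeholders:

-- Option A: one separate non-clustered index per optional search column.
CREATE INDEX IX_Users_UserName   ON Users (userName);
CREATE INDEX IX_Users_FatherName ON Users (fatherName);
CREATE INDEX IX_Users_EmpId      ON Users (empId);
CREATE INDEX IX_Users_CardNumber ON Users (cardNumber);

-- Option B: one combined index over all four columns.
-- A combined index only helps when the search includes its leading column(s),
-- which would explain why a search on cardNumber alone was slow.
CREATE INDEX IX_Users_All ON Users (userName, fatherName, empId, cardNumber);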

Indexed columns in SQL Server do not appear to work for basic queries according to execution plan

Disclaimer: I’m not a DBA. I have picked up a few things from this board in the past that I’m building from.

I have a table of Google Analytics session start times, with an index on each column. I want to filter for all sessions that were started between two dates. The screenshot below shows the query and the index.

[Screenshot: query text and index properties]

The query runs quickly, but I do not believe it is using the index, based on the execution plan, which both reports a missing index and shows a table scan rather than an index scan:

[Screenshot: execution plan]

Why?

Is it because of something about the way I’m searching through the datetime? If instead of looking between dates, I set it equal to a date, the execution plan shows it using the index:

[Screenshot: execution plan using the index]
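Since the queries only appear in the screenshots, here is roughly their shape; the table name, column name, and dates below are placeholders rather than the real ones:

-- Range filter: the plan reports a missing index and shows a table scan.
SELECT *
FROM   dbo.GaSessions
WHERE  SessionStartTime BETWEEN '2021-01-01' AND '2021-01-31';

-- Equality filter: the plan shows the index being used.
SELECT *
FROM   dbo.GaSessions
WHERE  SessionStartTime = '2021-01-01';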

But it’s not just this table or datetime. Here’s a different table with an index on a varchar column:

[Screenshot: metadata table index properties]

And a simple query on this one also tells me I’m missing the index:

[Screenshot: missing index warning on the metadata query]

I’m stumped.

Prevent block editor from adding overlap by default to columns

When I create a new columns block, the editor adds the overlap option by default, so the class “is-style-twentytwentyone-columns-overlap” gets added.

I do not want this, but I can't seem to get rid of it, even though the class is not in my inc/block-patterns.php file.

I'm using a child theme of the Twenty Twenty-One theme.

I also tried to unregister the block style with a custom JS file:

wp.domReady(() => {
    wp.blocks.unregisterBlockStyle('core/columns', 'twentytwentyone-columns-overlap');
});

With this, I no longer see the option to choose that style, but when creating new columns it is still added by default.

Any idea on how to get the columns without overlap by default please?

How to do a ranked search based on the number of columns matched


Context

I am trying to create a ranked-customer-search that will order results based on "most likely correct". We have several factors we search by, but to keep it simple I will stick with just name, phone number, & email. The goal is that if the customer has an existing account, we use that instead of creating a new account.

It is also worth noting that for this system, a customer account is US state-specific. So it is technically possible for a single person to have 49 existing accounts and still need to have a "new account" created, so there are often many duplicate accounts we can copy information from.

My Attempt

The query below uses binary values to determine the rank of a result, so a match on phone number (0100 = 4) scores higher than a match on name (0001 = 1) or email (0010 = 2). This works pretty well but isn't quite ideal: for example, an account that matches both email and name (score 3) will rank lower than an account that matches only a phone number (score 4). Unfortunately, this is pretty much my limit when it comes to creating queries.

(The UNIONs below are required because they allow better indexes to be used)

SELECT
    results.id,
    SUM(results.score) AS total_score
FROM (
    (
        SELECT
            c1.id,
            4 AS score
        FROM customers c1
        WHERE
            c1.phone = :phone
    )
    UNION ALL
    (
        SELECT
            c2.id,
            2 AS score
        FROM customers c2
        WHERE
            c2.email = :email
    )
    UNION ALL
    (
        SELECT
            c3.id,
            1 AS score
        FROM customers c3
        WHERE
            c3.first_name = :first_name
            AND c3.last_name = :last_name
    )
) results
GROUP BY
    results.id
ORDER BY
    total_score DESC
LIMIT 10;

My Question

I am not sure how to change the query so that matching multiple less-important factors ranks higher than matching a single more-important factor. How can I do that?
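One direction I have considered, just as a sketch and possibly not the right approach, is to order primarily by how many factors matched and keep the existing weights only as a tie-breaker. This assumes each UNION branch returns at most one row per customer:

SELECT
    results.id,
    COUNT(*)           AS factors_matched,  -- number of UNION branches that matched
    SUM(results.score) AS total_score       -- original 4/2/1 weights, now only a tie-breaker
FROM (
    (SELECT c1.id, 4 AS score FROM customers c1 WHERE c1.phone = :phone)
    UNION ALL
    (SELECT c2.id, 2 AS score FROM customers c2 WHERE c2.email = :email)
    UNION ALL
    (SELECT c3.id, 1 AS score
     FROM customers c3
     WHERE c3.first_name = :first_name AND c3.last_name = :last_name)
) results
GROUP BY results.id
ORDER BY
    factors_matched DESC,
    total_score DESC
LIMIT 10;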

Also, since this is my first time creating a search query like this, there is likely a better and/or more standard way of doing this; so any resources you can share related to this would also be greatly appreciated!

Group by multiple columns and then count different values of the same column

I want a result that tells me the number of males and females for each disability type in each district. Each district can have multiple disability types.

So far I have come up with the following query:

SELECT
  DistrictId,
  fb.DisabilityTypeId,
  SUM(CASE WHEN GenderId = 1 THEN 1 ELSE 0 END) AS Male,
  SUM(CASE WHEN GenderId = 2 THEN 1 ELSE 0 END) AS Female
FROM
  Districts d
  LEFT OUTER JOIN FormAddresses a ON d.Id = a.DistrictId
  INNER JOIN PeopleForms pf ON a.PeopleFormId = pf.Id
  INNER JOIN FormBeneficiaries fb ON pf.Id = fb.PeopleFormId
  INNER JOIN FormPersonalInfos fp ON pf.Id = fp.PeopleFormId
WHERE
  a.IsDeleted = 0
  AND pf.FormTypeId = 2
  AND d.CityId = 3
GROUP BY
  DistrictId,
  fp.GenderId,
  fb.DisabilityTypeId

which gives the following result:

DistrictId | DisabilityTypeId | Male | Female
     1     |        2         |  1   |   0
     3     |        2         |  0   |   3
     5     |       16         |  1   |   0
     5     |       20         |  2   |   0
     5     |       20         |  0   |   1

But I want to achieve the following result:

DistrictId | DisabilityTypeId | Male | Female
     1     |        2         |  1   |   3
     5     |       16         |  1   |   0
     5     |       20         |  2   |   1

I somehow managed to get the expected result, but only with some complex sub-queries in the SELECT clause for each gender, which I didn't like and wasn't sure about performance-wise.
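For what it's worth, the only structural difference I can see between the two results is the split per GenderId, so I wonder whether simply dropping fp.GenderId from the GROUP BY is the right direction. An untested sketch of that idea:

-- Same joins and filters as above; only GenderId removed from the GROUP BY,
-- so each district/disability pair collapses into a single row.
SELECT
  DistrictId,
  fb.DisabilityTypeId,
  SUM(CASE WHEN fp.GenderId = 1 THEN 1 ELSE 0 END) AS Male,
  SUM(CASE WHEN fp.GenderId = 2 THEN 1 ELSE 0 END) AS Female
FROM
  Districts d
  LEFT OUTER JOIN FormAddresses a ON d.Id = a.DistrictId
  INNER JOIN PeopleForms pf ON a.PeopleFormId = pf.Id
  INNER JOIN FormBeneficiaries fb ON pf.Id = fb.PeopleFormId
  INNER JOIN FormPersonalInfos fp ON pf.Id = fp.PeopleFormId
WHERE
  a.IsDeleted = 0
  AND pf.FormTypeId = 2
  AND d.CityId = 3
GROUP BY
  DistrictId,
  fb.DisabilityTypeId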

How can I write an efficient query for the desired result?

Thanks.

Random Hyperlinks Not Working in Nested Columns with Lists

I am new here and have beginner level experience with WordPress so please bear with me if I am leaving out any pertinent information.

Towards the lower portion of this https://www.logicmanager.com/ca-solutions-menu/ landing page, I have a series of containers with columns and nested columns. Within these nested columns I have added hyperlinks to list items. Ideally, all of the hyperlinked list items would click out to their respective landing pages.

However, for some reason unbeknownst to me, most of the hyperlinks are not working properly and the text items are un-clickable.

Does anyone know why this might be and how I can fix it? Are there any additional details I can provide to help resolve this issue? Is this forum even meant for questions like this?? Many thanks in advance!

MySQL: Any way to properly index 3 ENUM columns with the same options? (A OR B OR C)

I have 3 enum() columns with the same option values inside.
I first tried the SET datatype, which was originally meant for this (holding multiple values from a set), but it seems that datatype hasn't been maintained for 15+ years and doesn't even support an index.
Is there a nice way to index those 3 columns so I can use them in searches without destroying query performance?

SELECT * FROM TABLE WHERE a = 'x' OR b = 'x' OR c = 'x'
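One rewrite I have been considering, sketched below with placeholder table and index names, is three single-column indexes plus a UNION, so each branch can use its own index instead of the OR forcing a full scan:

-- Placeholder names; one index per ENUM column.
CREATE INDEX idx_a ON my_table (a);
CREATE INDEX idx_b ON my_table (b);
CREATE INDEX idx_c ON my_table (c);

SELECT * FROM my_table WHERE a = 'x'
UNION   -- UNION (not UNION ALL) removes duplicates when more than one column matches
SELECT * FROM my_table WHERE b = 'x'
UNION
SELECT * FROM my_table WHERE c = 'x';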

I thought about creating a virtual field that uses bitwise logic (&) on the 3 ENUM field numbers and combines them into one large number, but that's quite a hack and not nice to maintain.

Has someone solved this sort of task elegantly? (I do not want to use a support table and JOIN it; I want to stay with a single table.)

Storing timeseries data with a dynamic number of columns and rows in a suitable database

I have a timeseries pandas dataframe that dynamically adds a new column every minute, as well as a new row:

Initial:

timestamp              100    200    300
2020-11-01 12:00:00      4      3      5

Next minute:

timestamp              100    200    300    500
2020-11-01 12:00:00      4      3      5      0
2020-11-01 12:01:00     14      3      5      4

The dataframe keeps being updated this way every minute.

So ideally, I want to design a database solution that supports such a dynamic column structure. The number of columns could grow to 20-30k+, and since it is a one-minute timeseries, it will have 500k+ rows per year.

I've read that relational DBs have a limit on the number of columns, so that might not work here. Also, since I am setting the data for new columns and assigning a default value (0) to previous timestamps, I lose out on the DEFAULT parameter that MySQL provides.

Eventually, I will be querying data for 1 day or 1 month at a time to get the columns and their values.
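For concreteness, one layout I have been considering to sidestep the column limit is a narrow ("long") table, where each new dataframe column becomes a key value rather than a physical column. All names below are placeholders, and this is only a sketch of the idea:

-- Each (timestamp, series key) pair is one row, so a new dataframe column
-- just means inserting rows with a new series_key value.
CREATE TABLE readings (
    ts         DATETIME NOT NULL,
    series_key INT      NOT NULL,            -- the dataframe column label, e.g. 100, 200, 500
    value      DOUBLE   NOT NULL DEFAULT 0,
    PRIMARY KEY (ts, series_key)
);

-- Example: fetch one day of one series.
SELECT ts, value
FROM readings
WHERE series_key = 500
  AND ts >= '2020-11-01'
  AND ts <  '2020-11-02';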

Please suggest a suitable database solution for this type of dynamic row and column data.