When were my indexes in the database fragmented?

I am trying to create a script that finds index fragmentation and logs it to a table that I can query each time it runs. I need to find out specifically at what time the indexes in a table become fragmented. Below is the script I have so far to find the index fragmentation. What can I add to:

  1. find the time at which indexes become fragmented
  2. have it log to a table

Thanks in advance.

SELECT S.name AS 'Schema',
       T.name AS 'Table',
       I.name AS 'Index',
       DDIPS.avg_fragmentation_in_percent,
       DDIPS.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) AS DDIPS
INNER JOIN sys.tables  T ON T.object_id = DDIPS.object_id
INNER JOIN sys.schemas S ON S.schema_id = T.schema_id
INNER JOIN sys.indexes I ON I.object_id = DDIPS.object_id
                        AND I.index_id  = DDIPS.index_id
WHERE DDIPS.database_id = DB_ID()
  AND I.name IS NOT NULL
  AND DDIPS.avg_fragmentation_in_percent > 30
  AND DDIPS.page_count > 1000
ORDER BY DDIPS.avg_fragmentation_in_percent DESC;
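
A minimal sketch of the logging piece, with dbo.IndexFragLog as a placeholder name of my own: add a timestamp column that is filled in on every run, and turn the SELECT into an INSERT. The DMV only reports current fragmentation, not when it happened, so scheduling the INSERT (for example as an Agent job) is what brackets the time at which an index crossed the threshold.

-- Hypothetical log table; the names are illustrative.
CREATE TABLE dbo.IndexFragLog (
    LogTime    datetime2 NOT NULL DEFAULT SYSDATETIME(),
    SchemaName sysname   NOT NULL,
    TableName  sysname   NOT NULL,
    IndexName  sysname   NOT NULL,
    AvgFragPct float     NOT NULL,
    PageCount  bigint    NOT NULL
);

-- Run on a schedule; each run stamps its rows with the current time.
INSERT INTO dbo.IndexFragLog (SchemaName, TableName, IndexName, AvgFragPct, PageCount)
SELECT S.name, T.name, I.name, DDIPS.avg_fragmentation_in_percent, DDIPS.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) AS DDIPS
INNER JOIN sys.tables  T ON T.object_id = DDIPS.object_id
INNER JOIN sys.schemas S ON S.schema_id = T.schema_id
INNER JOIN sys.indexes I ON I.object_id = DDIPS.object_id
                        AND I.index_id  = DDIPS.index_id
WHERE I.name IS NOT NULL
  AND DDIPS.avg_fragmentation_in_percent > 30
  AND DDIPS.page_count > 1000;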

Matching several indexes

I have a large set of data in which I need to compare several samples in different tests and under varying conditions. I am looking for a way to pair and analyze these easily. As an example, let's say I have samples (S) a, b, and c, which undergo tests (T) 1 and 2 under conditions (C) x, y, and z, and which output results (R) R1 and R2.

S   T   C   R1    R2
a   1   x   2.9
a   1   y   2.6
a   1   z   8.7
a   2   x   9.4   0.372
a   2   y   8.1   0.208
a   2   z   7.6   0.154
b   1   x   7.5
b   1   y   7.3
b   1   z   1.7
b   2   x   3.9   0.213
b   2   y   7.9   0.435
b   2   z   2.5   0.294
c   1   x   6.2
c   1   y   1.8
c   1   z   6.3
c   2   x   1.5   0.246
c   2   y   6.0   0.496
c   2   z   1.7   0.167

The tests have different outputs, and I need to apply specific functions depending on the test. Such as:

Test1[a,b] = R1a/R1b
Test2[a,b] = R1a/R2a - R1b/R2b

The tests should only be applied to samples with matching conditions, and every pair of samples should be compared. So the result would be:

S1  S2  T   C   R
a   b   1   x   2.9/7.5
a   c   1   x   2.9/6.2
a   b   2   x   9.4/0.372 - 3.9/0.213
a   c   2   x   9.4/0.372 - 1.5/0.246
a   b   1   y   2.6/7.3
...

I’ve been trying to get this right for a while and just end up confusing myself. Anyone have a solution or suggestions? If you want an easily copyable format of the example:

{{S,T,C,R1,R2},{a,1,x,2.9},{a,1,y,2.6},{a,1,z,8.7},{a,2,x,9.4,0.372},{a,2,y,8.1,0.208},{a,2,z,7.6,0.154},{b,1,x,7.5},{b,1,y,7.3},{b,1,z,1.7},{b,2,x,3.9,0.213},{b,2,y,7.9,0.435},{b,2,z,2.5,0.294},{c,1,x,6.2},{c,1,y,1.8},{c,1,z,6.3},{c,2,x,1.5,0.246},{c,2,y,6.,0.496},{c,2,z,1.7,0.167}}
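
If it helps to see the pairing logic spelled out, here is a sketch of it in SQL, purely as an illustration, assuming the rows above were loaded into a table I've called Samples(S, T, C, R1, R2) with R2 left NULL for test 1. It computes the numeric result rather than the unevaluated expression shown above.

-- Pair every two distinct samples under the same test and condition,
-- then apply the test-specific formula.
SELECT s1.S AS S1, s2.S AS S2, s1.T, s1.C,
       CASE s1.T
           WHEN 1 THEN s1.R1 / s2.R1                   -- Test1[a,b] = R1a/R1b
           WHEN 2 THEN s1.R1 / s1.R2 - s2.R1 / s2.R2   -- Test2[a,b] = R1a/R2a - R1b/R2b
       END AS R
FROM Samples AS s1
INNER JOIN Samples AS s2
        ON s1.T = s2.T     -- same test
       AND s1.C = s2.C     -- same condition
       AND s1.S < s2.S     -- each unordered pair of samples once
ORDER BY s1.S, s1.C, s1.T, s2.S;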

What is the appropriate way to approach creating new indexes on our production database?

We are working on an ERP application with a SQL Server 2008 R2 database at compatibility level 80. I'm working as the SQL Server DBA, and I want to do performance tuning against our database, but I'm facing many obstacles: our application may not be compatible with a higher compatibility level, so I can't use the DMVs that could help me find the most expensive queries running frequently against our production database.

I tried running SQL Server Profiler to capture a workload file and feeding this .trc file to the Database Engine Tuning Advisor to explore its recommendations for our database, including index creation and statistics. I found many opinions saying not to blindly apply DTA recommendations.

I also tried SQL Server Activity Monitor to find the most expensive queries; when I displayed their execution plans, I found recommendations there as well to create non-clustered indexes.

My questions are:

How far can I depend on the DTA or on execution plan recommendations to tune performance?

If I apply these index recommendations and then see a performance regression, can I drop the new indexes safely? And will a dropped index be recreated automatically by an index rebuild operation, or does a rebuild only drop and recreate indexes that already exist?

What are the best practices for creating new indexes?
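
For the second question, my understanding (worth verifying) is that a rebuild only rebuilds indexes that still exist, so a dropped index does not come back on its own. A small sketch with hypothetical object names:

-- Hypothetical names, for illustration only.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId);

-- If it causes a regression, it can be dropped, or disabled to keep its definition:
DROP INDEX IX_Orders_CustomerId ON dbo.Orders;
-- ALTER INDEX IX_Orders_CustomerId ON dbo.Orders DISABLE;

-- A rebuild recreates existing (including disabled) indexes in place;
-- it does not bring back an index that was dropped.
ALTER INDEX ALL ON dbo.Orders REBUILD;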

Why are columnstore indexes disabled automatically when a database is restored?

I have recently done a server migration where we backed up all the databases on a SQL Server 2012 instance and restored them onto SQL Server 2019. However, all of the restored databases appear to have their columnstore indexes disabled.

Is this expected behaviour? If so, why?

These are archive databases used for SELECTs only, and rebuilding every clustered index is going to take a significant amount of time. Are there any other options?
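
In case it is useful, a sketch of how the rebuilds could at least be scripted rather than written by hand, assuming the disabled indexes show up in sys.indexes with is_disabled = 1:

-- Generate ALTER INDEX ... REBUILD statements for disabled columnstore indexes.
-- Types 5 and 6 are clustered and nonclustered columnstore, respectively.
SELECT 'ALTER INDEX ' + QUOTENAME(i.name)
     + ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
     + ' REBUILD;' AS rebuild_stmt
FROM sys.indexes AS i
INNER JOIN sys.tables  AS t ON t.object_id = i.object_id
INNER JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE i.is_disabled = 1
  AND i.type IN (5, 6);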


Indexes on query improvement


Bolded are primary keys

Hotel (hotelNo, hotelName, city)
Room (roomNo, hotelNo, type, price)
Booking (hotelNo, guestNo, dateFrom, dateTo, roomNo)
Guest (guestNo, guestName, guestAddress)

SELECT r.roomNo, r.type, r.price
FROM Room r, Booking b, Hotel h
WHERE r.roomNo = b.roomNo
  AND b.hotelNo = h.hotelNo
  AND h.hotelName = 'Hilton'
  AND r.price > 100

Can someone explain how I would use indexing to improve this query's performance? I was thinking of just sorting by price, so that SQL Server doesn't have to check every row manually, but are there any other indexes that should be built?
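
For example, would indexes along these lines help? The names are placeholders of my own, and the actual benefit depends on data sizes and distribution.

-- Supports the filter on hotelName (and the join to Booking via hotelNo).
CREATE NONCLUSTERED INDEX IX_Hotel_hotelName ON Hotel (hotelName);

-- Supports the join from Room to Booking on roomNo and on to Hotel via hotelNo.
CREATE NONCLUSTERED INDEX IX_Booking_roomNo ON Booking (roomNo, hotelNo);

-- Supports the range filter on price, covering the selected columns.
CREATE NONCLUSTERED INDEX IX_Room_price ON Room (price) INCLUDE (roomNo, type);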

Is there any search platform which computes indexes based on semantics of words in text?

I want to store emails for my data science project and search for different phrases across the entire collection. The phrases I search for might differ from the actual words in the emails, but the search should still return those emails.

What is the best platform for this? I need a search database that builds its indexes from the semantics of the words in an email (consider stemming, synonyms, etc.); Elasticsearch or CloudSearch out of the box won't work.

Also, how effective is the FREETEXT function in SQL Server? Can it serve this purpose?
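
To make that concrete, this is the kind of query I mean; a minimal sketch assuming a table dbo.Emails(Id, Body) with a full-text index already created on Body. FREETEXT applies word breaking, stemming, and thesaurus expansion rather than exact matching.

-- Returns emails whose Body matches the meaning of the search string,
-- not necessarily its exact wording (requires a full-text index on Body).
SELECT Id
FROM dbo.Emails
WHERE FREETEXT(Body, N'quarterly revenue forecast');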

Google has cached more than 500k pages of my former website. How can I remove them?

My client’s previous website was full of copy-pasted news, and the website’s reputation was zero. He also wanted to get rid of WordPress, so we redeveloped his website without reusing the old data.

But Google’s cache is still full of old indexed pages, more than 500k of them. Moreover, the new website’s slugs are different from the old ones, so every cached result now leads to a 404 page.

I need to remove them, but Search Console does not provide an option for handling this many URLs. What can I do?

Why was R designed as a language with 1-based indexing?

Spawning from here:

We usually don’t care whether indices start at $0$ or $1$ (except in the sense we’d rather start with our favourite if it doesn’t matter, and the old joke is that set theorists start at $0$ and non-Peano number theorists start at $1$). I say usually, because there are a few cases where we do:

  • […]
  • […]
  • If there’s programming involved, use the same indexing for your mathematical exposition as in the code itself, which varies by language. For example, Python starts at $0$, whereas R starts at $1$.

Why did they choose to start from $1$?

Maximum number of multiples of A[i] in the array where the indexes of the multiples are less than i (i.e. j < i)

Given an array of size n, we have to find the maximum number of multiples of A[i] in the array, where the indexes of the multiples must be less than i (i.e. j < i).

My approach: I used brute force with nested loops. The outer loop runs from the last element backwards, and the inner loop runs from the first element up to the second-to-last element.

I am unable to optimise it and need some help.
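
A sketch of that brute force, if only to pin down the problem statement, assuming the array has been loaded into a table A(i, val) with i as the 0-based position (both names are mine):

-- For each position i, count earlier positions j < i where A[j] is a
-- multiple of A[i]; the answer is the maximum of those counts.
-- (Assumes no zero values, to avoid division by zero in the modulo.)
SELECT MAX(cnt) AS answer
FROM (
    SELECT a.i, COUNT(b.i) AS cnt
    FROM A AS a
    LEFT JOIN A AS b
           ON b.i < a.i            -- only earlier indexes (j < i)
          AND b.val % a.val = 0    -- A[j] is a multiple of A[i]
    GROUP BY a.i
) AS per_position;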