How to use complex filtering queries in Gmail?

I am experimenting with different, more complex filtering queries/strings in Gmail.

I found this answer (to my own question):

after:1552896000 before:1552924800 

And I was able to use it without any problems, i.e. I managed to filter e-mails with given dates.

Then I found this answer:

If email is AND contains:"Monday OR Wednesday OR Friday" THEN send it to trash 

and got a bit lost.

Is this a real string to be pasted somewhere into Gmail (where?) or a pseudo-code to explain filter settings that needs to be applied?

Where should I put queries as complex as the one above? When I try to create a rule to filter my emails, all I see is a filter configuration box with a few simple fields and no place to enter a query directly.

Actually, I don’t need queries as complex as the one above, but I’d like to merge two or more simple queries (as in the first example, if possible) to filter out e-mails sent in a given period of time on two or more days:

after:1502294400 before:1502352000 AND after:1552896000 before:1552924800 

But I am getting no results, neither from the first nor from the second day. Is this possible at all in Gmail?
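A likely reason the combined query returns nothing is that juxtaposed terms in Gmail search are implicitly AND-ed, so the query asks for a message inside two disjoint date ranges at once. To match either day, the ranges have to be OR-ed, which Gmail expresses with brace grouping (`{a b}` means a OR b). A minimal sketch of building such a query string (the helper functions are hypothetical; the brace grouping and `after:`/`before:` epoch operators are from Gmail's documented search syntax):

```python
from datetime import datetime, timezone

def epoch(dt: datetime) -> int:
    """Seconds since the Unix epoch, as used by after:/before:."""
    return int(dt.replace(tzinfo=timezone.utc).timestamp())

def day_range(day: datetime) -> str:
    """One AND-ed range: juxtaposed terms are implicitly AND-ed."""
    start = epoch(day)
    end = start + 86400  # next midnight
    return f"(after:{start} before:{end})"

def any_of(*ranges: str) -> str:
    """Braces are Gmail's OR grouping: {a b} matches a OR b."""
    return "{" + " ".join(ranges) + "}"

query = any_of(day_range(datetime(2017, 8, 10)),
               day_range(datetime(2019, 3, 18)))
print(query)
# {(after:1502323200 before:1502409600) (after:1552867200 before:1552953600)}
```

The printed string can be pasted into Gmail's search box; it should also work in the "Has the words" field of the filter creation dialog, which answers the "where do I put it" part.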

Automating Site Usage Reporting: top queries by month, abandoned queries, etc.

My question is almost identical to one raised 5 years ago:

Programmatically download/export search usage reports

Again, the goal is to be able to access, download, and schedule the distribution of reports via dynamic links located on a site page.

The key requirement, as @Petter stated, is to allow business users to access the reports without needing to bother SharePoint or site collection admins.

Combining two queries into one related to missing indexes in SQL Server

I’ve been using some missing-index diagnostic queries I found online: specifically, the missing indexes query and the missing index warnings query.

Instead of going back and forth between the two result sets, I’m trying to combine them into one query, so I can see directly which cached query plan from sys.dm_exec_query_plan() corresponds to which missing index in sys.dm_db_missing_index_details.

The current iteration of my query is like this:

SELECT CONVERT(decimal(18,2), user_seeks * avg_total_user_cost * (avg_user_impact * 0.01)) AS [index_advantage],
       migs.last_user_seek,
       mid.[statement] AS [Database.Schema.Table],
       qps.ProcName, qps.objtype, qps.usecounts,
       mid.equality_columns, mid.inequality_columns, mid.included_columns,
       migs.unique_compiles, migs.user_seeks, migs.avg_total_user_cost, migs.avg_user_impact,
       OBJECT_NAME(mid.[object_id]) AS [Table Name],
       p.rows AS [Table Rows],
       qps.query_plan
FROM sys.dm_db_missing_index_group_stats migs WITH (NOLOCK)
INNER JOIN sys.dm_db_missing_index_groups mig WITH (NOLOCK)
    ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid WITH (NOLOCK)
    ON mig.index_handle = mid.index_handle
INNER JOIN sys.partitions p WITH (NOLOCK)
    ON p.[object_id] = mid.[object_id]
LEFT OUTER JOIN (
    SELECT TOP 50 OBJECT_NAME(qp.objectid) AS ProcName, cp.objtype, qp.query_plan, cp.usecounts, d.referenced_id
    FROM sys.dm_exec_cached_plans cp WITH (NOLOCK)
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
    LEFT OUTER JOIN sys.sql_expression_dependencies d WITH (NOLOCK)
        ON d.referencing_id = qp.objectid
    WHERE qp.dbid = DB_ID()
      AND CAST(query_plan AS nvarchar(max)) LIKE N'%MissingIndex Database="#[' + DB_NAME() + '#]" Schema="#[dbo#]" Table="#[' + d.referenced_entity_name + N'#]"%' ESCAPE '#'
    ORDER BY cp.usecounts DESC
) qps
    ON CAST(qps.query_plan AS nvarchar(max)) LIKE N'%MissingIndex%'
        + CASE WHEN mid.equality_columns IS NULL THEN ''
               ELSE 'Column Name="' + REPLACE(REPLACE(REPLACE(mid.equality_columns, ', ', 'Column Name="'), '[', '#['), ']', '#]%') END
        + CASE WHEN mid.inequality_columns IS NULL THEN ''
               ELSE 'Column Name="' + REPLACE(REPLACE(REPLACE(mid.inequality_columns, ', ', 'Column Name="'), '[', '#['), ']', '#]%') END
        + CASE WHEN mid.included_columns IS NULL THEN ''
               ELSE 'Column Name="' + REPLACE(REPLACE(REPLACE(mid.included_columns, ', ', 'Column Name="'), '[', '#['), ']', '#]%') END
        ESCAPE '#'
    AND mid.object_id = qps.referenced_id
WHERE mid.database_id = DB_ID()
  AND p.index_id < 2
ORDER BY index_advantage DESC
OPTION (RECOMPILE);

My first attempt used an OUTER APPLY instead of a LEFT JOIN, but the execution time was significant (45+ minutes) on a production database, so I tried the LEFT JOIN. I don’t know with certainty how long it takes, because I stopped execution at the 15-minute mark.

Is it even possible to make such a query from those two?
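One alternative worth sketching: the expensive part is probably the `CAST(query_plan AS nvarchar(max)) LIKE '%...%'` matching, re-evaluated per join candidate. Since the plan is XML, the MissingIndex elements could instead be parsed out and matched client-side against the missing-index DMV rows. A minimal, hypothetical Python sketch (the namespace is the standard showplan namespace SQL Server emits; the plan string would come from sys.dm_exec_query_plan):

```python
import xml.etree.ElementTree as ET

# Showplan XML puts every element in this namespace.
SP = "http://schemas.microsoft.com/sqlserver/2004/07/showplan"

def missing_indexes(plan_xml: str):
    """Yield (database, schema, table, column names) for each MissingIndex
    element in a cached plan, instead of LIKE-matching the XML as text."""
    root = ET.fromstring(plan_xml)
    for mi in root.iter(f"{{{SP}}}MissingIndex"):
        cols = [c.get("Name") for c in mi.iter(f"{{{SP}}}Column")]
        yield (mi.get("Database"), mi.get("Schema"), mi.get("Table"), cols)
```

The plans would still be fetched from sys.dm_exec_cached_plans / sys.dm_exec_query_plan, but the match against sys.dm_db_missing_index_details would then be an equality join on the extracted names rather than a wildcard LIKE over an nvarchar(max) cast.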

systemd-resolved cache not effective for queries from HAProxy

We’re running HAProxy 1.8 on Ubuntu 18.04 and we’ve been noticing slow startup times. We’ve previously pinned slow startups on DNS lookups, since we have lots of backends which HAProxy has to resolve at startup.

However, since the server is running systemd-resolved with caching enabled, this should not be an issue (most of the backends are using the same host). We have confirmed that systemd-resolved is in fact running and cache is enabled by trying some dig commands and looking at the network traffic.

But when HAProxy starts, we see lots of outbound DNS lookup traffic, even though those queries should be cached. This is also confirmed by systemd-resolved stats which show thousands of cache misses.

Switching to dnsmasq instead of systemd-resolved makes the system behave as expected – the first few DNS lookups are not cached, but everything after them is cached and HAProxy startup is fast.

The question is: what could cause systemd-resolved not to cache DNS queries, and why does it happen only with HAProxy’s queries and not when using dig?

jQuery load() method and CSS media queries

I just finished writing a pagination script using jQuery’s load() method to retrieve the data table. There seems to be a problem formatting the loaded table. Of course, I could place styles inside the HTML, as shown here, but that would be a problem for responsive design and media queries. My load line looks like this:

$("#content").load("PaginationData.php",
    "PageNo=" + PageNo + "&PageSize=" + PageSize + "&SortColumn=" + SortColumn +
    "&Order=" + Order + "&Limit=" + Limit);

And it is retrieving this table:

<table id="History" style="width:970px; border:1px solid #000000; margin:15px auto; background-color:#FFFFFF;">
   <tr style="background-color:#000000; color:#FFFFFF;">
      <th>Name</th><th>Address</th><th>Email</th>
   </tr>
<?php
while ($row = mysql_fetch_array($result)) {
   $Name    = $row['Name'];
   $Address = $row['Address'];
   $Email   = $row['Email'];
   echo('
   <tr>
      <td style="padding:5px;">'.$Name.'</td>
      <td style="padding:5px;">'.$Address.'</td>
      <td style="padding:5px;">'.$Email.'</td>
   </tr>');
}
?>
</table>

I know that I can remove the inline CSS, define classes, and add a callback at the end of this load() call to apply the CSS using jQuery, but this does not help me with media queries and dynamic screen and device changes.

I have also thought that I could perhaps detect a media query change in some other class and apply CSS changes based on that, but that seems quite clumsy. Perhaps it would be better to just pull the data and assign it to a table in the calling script?

Thoughts are appreciated.

SQLAlchemy: how to merge queries

I am creating a database for a chess league I am organizing.

I have written the query below, which gives me the result I want, but it looks very long-winded and I’d like a second opinion on how it could be written better.

I have two tables, one represents a team and another a “board”. The board holds the result for each player and the board number they played.

class Board(db.Model):
    __tablename__ = 'board'
    id = db.Column(db.Integer, primary_key=True)
    board_number = db.Column(db.Integer)
    team_id = db.Column(db.Integer, db.ForeignKey('team.id'))
    result = db.Column(db.Float)

class Team(db.Model):
    __tablename__ = 'team'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50))
    boards = db.relationship('Board', backref='teams', lazy=True)

So to calculate the league table I have created 4 different queries: wins, losses, draws and total points.

They are then joined together and ordered by total points to create the league table.

wins = (
    db.session.query(Team.id,
                     db.func.count(Board.result).label('win'))
    .filter(Team.league_id == 1)
    .join(Board).filter_by(result=1)
    .group_by(Team.id)
    .subquery()
)

losses = (
    db.session.query(Team.id,
                     db.func.count(Board.result).label('loss'))
    .filter(Team.league_id == 1)
    .join(Board).filter_by(result=0)
    .group_by(Team.id)
    .subquery()
)

draws = (
    db.session.query(Team.id,
                     db.func.count(Board.result).label('draw'))
    .filter(Team.league_id == 1)
    .join(Board).filter_by(result=0.5)
    .group_by(Team.id)
    .subquery()
)

total_points = (
    db.session.query(Team.id,
                     db.func.sum(Board.result).label('total'))
    .filter(Team.league_id == 1)
    .join(Board, (Board.team_id == Team.id))
    .group_by(Team.id)
    .subquery()
)

league_table = (
    db.session.query(Team.name, wins.c.win, draws.c.draw, losses.c.loss,
                     total_points.c.total)
    .join((wins, wins.c.id == Team.id))
    .join((losses, losses.c.id == Team.id))
    .join((draws, draws.c.id == Team.id))
    .join((total_points, total_points.c.id == Team.id))
    .order_by(total_points.c.total.desc())
    .all()
)

Could the 4 queries be merged into one?
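They likely can: one grouped pass over board with conditional aggregation produces all four columns at once. Here is the idea in plain SQL against a toy in-memory SQLite schema (the team names and results are invented for illustration):

```python
import sqlite3

# Toy schema mirroring the models above; the data is made up.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE team  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE board (id INTEGER PRIMARY KEY,
                        team_id INTEGER REFERENCES team(id),
                        result REAL);
    INSERT INTO team  VALUES (1, 'Rooks'), (2, 'Knights');
    INSERT INTO board VALUES (1, 1, 1), (2, 1, 0.5), (3, 1, 0),
                             (4, 2, 1), (5, 2, 1),   (6, 2, 0.5);
""")

# One pass over board: conditional aggregation replaces the four subqueries.
rows = con.execute("""
    SELECT t.name,
           SUM(b.result = 1)   AS wins,
           SUM(b.result = 0.5) AS draws,
           SUM(b.result = 0)   AS losses,
           SUM(b.result)       AS total
    FROM team t JOIN board b ON b.team_id = t.id
    GROUP BY t.id
    ORDER BY total DESC
""").fetchall()
print(rows)
# [('Knights', 2, 1, 0, 2.5), ('Rooks', 1, 1, 1, 1.5)]
```

In SQLAlchemy terms this would be roughly `db.func.sum(db.case(...)).label('win')` and so on, all in a single query grouped by `Team.id`; the exact `case()` call signature depends on the SQLAlchemy version in use.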

Keeping objects in RAM vs more queries to the database

So this is my data structure:

Project
  - Name
  - ID
  - Image
  - History
       - User
       - Comment

When my application first starts, it pulls all projects with all their details.

This is obviously very slow and uses a lot of RAM (mostly because of the images).

For now it works, but as projects accumulate, performance gets worse every day.

I’ve done some research but cannot figure out which approach would be best.

The options I think I have:

  1. Lazy-loading of project details (load details only when the user clicks on a project)
  2. Using thumbnails for images (showing the full image only in detail mode)

Questions for Option 1:

  1. Wouldn’t this increase the query count drastically?
  2. What is better after all: keeping things in RAM, or querying more often?
  3. What do I do with the data I’ve loaded lazily? Do I dispose of everything when the details collapse, or should I keep it around for a while in case the user opens them again?

I’m planning on implementing both, but I’m unsure about the unanswered questions mentioned above.
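Option 1 and question 3 can be combined: load details on demand, but keep a small bounded cache so reopening a recently viewed project does not hit the database again. A minimal sketch, where `fetch_details_from_db` is a hypothetical stand-in for the real query:

```python
from functools import lru_cache

# Hypothetical stand-in for the real database call.
def fetch_details_from_db(project_id: int) -> dict:
    return {"id": project_id, "history": [], "image": b"..."}

# Lazy-load, but keep the last N results around so collapsing and
# reopening the same project does not trigger another query.
@lru_cache(maxsize=32)
def project_details(project_id: int) -> dict:
    return fetch_details_from_db(project_id)

details = project_details(7)   # first open: hits the database
details = project_details(7)   # reopened: served from the cache
print(project_details.cache_info().hits)  # 1
```

The `maxsize` bound answers the RAM-vs-queries trade-off: old entries are evicted automatically, so memory stays capped while recent reopens stay cheap.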

Should I use Entity Framework for CRUD and let the database handle the complexity that comes with high-end queries?

I am new to EF and liking it, since it reduces the overhead of writing common queries by replacing them with simple Add/Remove functions.

Today I got into an argument with a colleague who has been using it for a while, after I approached him for advice on when to use stored procedures and when to use EF, and for what.

He replied:

Look, the simple thing is that you can use both, but what’s the point of using an ORM if you are doing it all in the database, i.e. in stored procedures? So, how would you figure out what to do where, and why? A simple formula I have learned: use the ORM for all CRUD operations and for queries that require 3-4 joins, but for anything beyond that you’d better use stored procedures.

I thought, analyzed, and replied:

Well, this isn’t necessarily the case; I have seen blogs and examples where people do massive things with EF and never need to write a procedure.

But he’s adamant, calling it a performance overhead, which is still beyond my understanding since I am relatively new compared to him.

So, my question is whether EF should only handle CRUD, or whether it should do a lot more as a replacement for stored procedures.

REST queries on Project Online: should they use _API/ProjectData or _API/ProjectServer?

I’ve been trying for some time to identify the correct way to get data from Project Online (I just want to read data, not update it). I can find documentation on ProjectData and a lot of examples using it, but it requires editing permissions in the Project Online configuration.

I can barely find any examples or documentation for ProjectServer, and its endpoints seem to have fewer options than ProjectData.

Which one should be used to GET info from Project Online, and why?

Doc for ProjectData:

Why do I define my Queries, Data, and Mutation as Singleton when using GraphQL in .NET Core?


From the docs’ dependency injection page:

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<IDependencyResolver>(s => new FuncDependencyResolver(s.GetRequiredService));

    services.AddSingleton<IDocumentExecuter, DocumentExecuter>();
    services.AddSingleton<IDocumentWriter, DocumentWriter>();

    services.AddSingleton<StarWarsData>();
    services.AddSingleton<StarWarsQuery>();
    services.AddSingleton<StarWarsMutation>();
    services.AddSingleton<HumanType>();
    services.AddSingleton<HumanInputType>();
    services.AddSingleton<DroidType>();
    services.AddSingleton<CharacterInterface>();
    services.AddSingleton<EpisodeEnum>();
    services.AddSingleton<ISchema, StarWarsSchema>();
}

At the beginning of the docs:

The library resolves a GraphType only once and caches that type for the lifetime of the Schema.

While I understand that these are more like DTOs, in that they hold values and their class content doesn’t change at all… why do I register them as singletons instead of just letting them be instantiated per request?