What can cause higher CPU time and duration for a given set of queries in traces run in two separate environments?

I’m troubleshooting a performance issue in a SQL Server DR environment for a customer. They are running queries that consistently take longer in their environment than in our QA environment. We analyzed traces captured in both environments with the same parameters/filters, the same version of SQL Server (2016 SP2), and the exact same database. Both environments were picking the same execution plan(s) for the queries in question, and the number of reads/writes was close in both environments; however, the total duration of the process in question and the CPU time logged in the trace were significantly higher in the customer environment. The duration of all processes in our QA environment was around 18 seconds while the customer’s was over 80 seconds; our CPU time was close to 10 seconds, while theirs was also over 80 seconds. Also worth mentioning: both environments are currently configured with MAXDOP 1.

The customer has less memory (~100 GB vs. 120 GB) and slower disks (10k HDD vs. SSD) than our QA environment, but more CPUs. Both environments are dedicated to this activity and should have little to no external load that wouldn’t match. I don’t have all the details on the CPU architecture they are using; I’m waiting for some of that information now. The customer has confirmed they have excluded SQL Server and the data/log files from their virus scanning. Obviously there could be a ton of issues in the hardware configuration.

I’m currently waiting to see a recent snapshot of their wait stats and system DMVs; the data we originally received didn’t appear to show any major CPU, memory, or disk latency pressure. I recently asked them to check whether the Windows power setting was in performance or balanced mode, though I’m not certain whether CPU throttling would have the impact we’re seeing.

My question is: what factors can affect CPU time and, ultimately, total duration? Is CPU time, as shown in a SQL trace, based primarily on the speed of the processors, or are there other factors I should be taking into consideration? The fact that both are generating the same query plans, with all other things being as close as possible to equal, makes me think it’s related to the hardware SQL Server is installed on.

SEO – onload components seen as separate pages by Google

I have tried to optimize my blog by loading some components after the page load to improve performance. Since doing this, performance has improved, but I now see that those components have been indexed in Google search.

I have used the following code to load my components:

window.onload = function (e) {
    loadComments();
    loadFeeds();
};

and then one of the functions:

function loadComments() {
    console.log('Loading comments');
    // Cache-busting parameter; the original string mixed in server-side PHP
    // (".rand(0,1500); ?>"), generated here in JavaScript instead.
    // Also removed event.preventDefault(): there is no event object here.
    const r = Math.floor(Math.random() * 1500);
    fetch('https://www.laurentwillen.be/gadgets/xiaomi-mi-10-lite-5g-test-avis/?module=comments&r=' + r, {
        method: 'GET',
        headers: new Headers()
    })
    .then(response => response.text())
    .then(response => {
        document.getElementById('comments-content').innerHTML = response;
        // PREFEED COMMENT FORM
        const reply_links = document.querySelectorAll('.feed_form');
        for (let x = 0; x < reply_links.length; x++) {
            const local_reply = reply_links[x];
            local_reply.addEventListener('click', feedComment);
        }
    })
    .catch(err => console.log(err));
}

I can see that the URL https://www.laurentwillen.be/gadgets/xiaomi-mi-10-lite-5g-test-avis/?module=comments is now indexed in Google, and that’s not what I want.

Should I load the page differently? Or should I add something to the loaded component?

Thanks

Has the Underdark ever been a separate plane to the Material Plane?

I play D&D 5e; I am not that familiar with the other editions of D&D. However, I’m looking for lore on the Underdark from any edition, since the settings (e.g. the Forgotten Realms) are still roughly common to most editions (even if certain events have occurred in some editions and not in others).

For context, in my own homebrew universe, I’ve decided that the Underdark is in fact another plane, although it is accessible from the Material Plane via certain tunnels and such that are like subtle portals (similar to Fey Crossings). However, this question is not about my homebrew universe (which I doubt I’ll change regardless of the outcome of this question).

I was looking into the Underdark, searching through information online and in 5e books, to see whether the Underdark is a different plane or whether it is simply beneath the “surface” of the Material Plane. It seems to be the latter, which means I’ll have to go to greater efforts to adapt existing adventures that were written for the Forgotten Realms, for example, to my homebrew universe.

However, I believe I got my idea about the Underdark being a different plane from somewhere, so I was wondering if there have ever been any adventures or settings within D&D where the Underdark has been considered a different plane.

I’ve read online that Matt Colville has used this idea from an adventure called “Night Below”, which was apparently an old 2e adventure. I used to watch some of his videos, so maybe that’s where I got this idea from. But even if this lead proves false: are there any adventures or settings that have ever treated the Underdark as a different plane from the Material Plane?

Why do we need a separate notation for П-types?


Main

I am confused about the motivation behind the need for a separate notation for П-types, which you can find in type systems from λ2 on. The answer usually goes like so: think about how one can represent the signature of the identity function – it can be λa:type.λx:a.x or λb:type.λx:b.x. The subtle part, they say, is that these two signatures are not only unequal, they are not even alpha-equivalent, since the type variables a and b are free variables inside their corresponding abstractions. So to overcome this pesky syntactic issue, we introduce the П binder, which plays nicely with alpha-conversion.

So the question: why is that? Why not just fix the notion of alpha-equivalence?

UPDATE z:

Oh, silly of me – λa:type.λx:a.x and λb:type.λx:b.x are alpha-equivalent. But why aren’t a:type -> a -> a and b:type -> b -> b alpha-equivalent then?

UPDATE suc z:

Aha, interesting, I guess this is a perfect example of selective blindness =D

I am reading the book Type Theory and Formal Proof, and in the chapter about λ2 the author motivates the existence of П using exactly that kind of argument: one can’t say that \t:*.\v:t.v : * -> t -> t, because this would make the two alpha-equivalent terms \t:*.\v:t.v and \g:*.\v:g.v have different types, as the corresponding types are not alpha-equivalent, whereas types like t:* -> t -> t are in fact alpha-invariant. Mind the difference between t:* -> t -> t and * -> t -> t. But doesn’t that make the argument a bit trivial – and is it even meaningful to talk about a type a -> b where a and b are variables unbound by any quantifier? Andrej Bauer pointed out in the comments that П does indeed resemble a lambda abstraction with a few additional bells and whistles.
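For my own future reference, writing the П out explicitly makes the resolution obvious:

```
\t:*.\v:t.v  :  Пt:*. t -> t
\g:*.\v:g.v  :  Пg:*. g -> g
```

The two terms are alpha-equivalent, and so are their types, because t and g are bound in both (by λ in the terms, by П in the types). Writing the type as * -> t -> t leaves t free, and that free occurrence is exactly what broke alpha-equivalence.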

All in all, I am done with that one, thank you guys.

Why does this Base64 encoding of a password string have the last two characters in a separate encoding?

As I was testing the security of my own network, I visited the login page of my router. I wanted to see how it managed the credentials. That’s when I noticed it transformed the entered password into a ciphered text with some obvious visible patterns. This was captured via Burp Suite and could be decoded as Base64. However, the decoded text ONLY provided the password in clear text except for the last two characters.

Transformed             | Clear text PW  | Decoded from Base64
------------------------|----------------|--------------------
YWRtaW4%3D              | admin          | admaW4%3D
cGFzc3dvcmQ%3D          | password       | passwocmQ%3D
MTIzNGY%3D              | 1234f          | 123NGY%3D
YWRtaW5hZG1pbjIyMjI%3D  | adminadmin2222 | adminadmin22MjI%3D
YWRtaW5hZG1pbjIyMTE%3D  | adminadmin2211 | adminadmin22MTE%3D

All the obfuscated text ends with %3D, which is something I wanted to comment on, but I just found out from this link that it’s due to URL encoding of the ‘=’ sign.

And I just figured out the answer to this question while creating it…

The process is: clear text password => Base64 encoding => URL encoding (which turns the trailing ‘=’ into %3D). The seemingly “separately encoded” last characters are an artifact of Base64-decoding the string without URL-decoding it first: the decoder processes complete 4-character groups and leaves the remainder, which still contains the %3D, untouched.
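To double-check, here is a small Node.js sketch that reproduces both the router’s transformation and the correct way to reverse it (the function names are my own, not the router’s):

```javascript
// Reproduce the router's transformation: Base64-encode the password,
// then URL-encode the result (only the trailing '=' actually changes,
// becoming %3D).
function transformPassword(password) {
  const b64 = Buffer.from(password, 'utf8').toString('base64');
  return encodeURIComponent(b64);
}

// To recover the clear text, URL-decode *before* Base64-decoding;
// skipping that step is what produced the garbled tails in the table above.
function recoverPassword(transformed) {
  const b64 = decodeURIComponent(transformed);
  return Buffer.from(b64, 'base64').toString('utf8');
}

console.log(transformPassword('admin'));        // YWRtaW4%3D
console.log(recoverPassword('cGFzc3dvcmQ%3D')); // password
```

Running this against the table above reproduces the Transformed column exactly.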

How can I add a separate damage die to my homebrew magic item so that it rolls two separate dice in D&D Beyond’s new dice roller?

I got curious about when my character reaches paladin level 11 / warlock level 2 and gets a permanent 1d8 of divine smite all the time. My main weapon is magical, gifted by my patron, so I’m looking to see if it’s possible to add the extra die as a feature so it’s properly displayed and usable in D&D Beyond’s dice roller. I can’t figure out the right combo on the magic item creation page.

Refresh token using a separate auth server?

I’d like to use JWTs for user authorization. My intention is to use an auth server and an app server to keep them separate. This way my auth server will be the only JWT-issuing server and the only server with login and sign-up logic.

However, I’ve recently run into this issue: how do I refresh a user’s access token if my auth server is separate?

I’d like to use middleware in Node.js to check the validity of a JWT, but if the check fails, I’d need to contact the auth server and present the user’s refresh token to get a new access token.

So, what’s the best way to do this? Would I use middleware to issue a remote request to get a new JWT? It seems there’s no other way, so I thought I’d check with the community.

Are the Githyanki and the Githzerai separate races?


"If the two races were ever to team up against the illithids" – MToF, pg 85

"The githzerai were born as a race…" – MToF, pg 93, under the heading Githzerai

These two quotes imply that the Githyanki and the Githzerai are (lore-wise) two separate races. However,

"Long gone are the days when the gith race" – MToF, pg 87, under the heading VLAAKITH’S DILEMMA

Are the Gith one race with two subraces, or two races?

I would prefer 5e sources that distinctly mention this, but if nothing else is available, then sources from other editions are acceptable.

How can I pull off being two separate characters at the beginning of a campaign?

I will be playing a Changeling in an upcoming campaign (homebrew setting). I’m interested in hiding not only my race, but also having one or more "retainers" — alternate identities / personas that my changeling can slip into and out of as needed.

The catch is, I wish to conceal (at least at first) the fact that the PC and the NPC retainers are actually a single entity! The most straightforward way of doing this would seem to be for observers to see “both” characters arrive in the starting town at the same time. I would want this to deceive NPCs and fellow PCs.

What magic, mundane methods, or class features do I need to pull off such a deception at level 1? If it can’t be done at level 1, what’s the earliest level at which you could realistically pull the wool over NPC eyes solo? The ruse must be able to confound casual observers, but bonus points if it can stand up to additional scrutiny!

I don’t have a set class for this character, but am leaning towards Bard, Rogue, or Wizard. Since I’m trying to fool the party as well, count on no help from them.

Separate prerendered static page for Open Graph crawlers on Netlify (can’t redirect by detecting bots) [closed]

I want to create a heavily browser-cached shell app: basically one cached file, index.html, which works like a SPA and uses progressive enhancement to fill in content. But this creates a problem for Open Graph meta tags for Facebook, LinkedIn, and Twitter, and of course for SEO as well (yes, I know that these days Google can parse JavaScript-oriented applications, but others can’t). We also can’t redirect on the server side by bot detection, because we are using static file hosting like Netlify. So the basic idea is this:

Put a canonical URL on every page to redirect crawlers to a /seo/$ page, and in the /seo subfolder serve pre-rendered static pages that contain all the Open Graph tags but none of the CSS and JavaScript – just content and HTML tags (this is not strictly necessary, but the idea is to avoid unnecessary bandwidth).

Would this solution be considered good practice? Would it be considered cloaking? What are the downsides of this solution? Is it considered bad practice to serve “stripped down” pages to bots (with the same content but not all functionality)? Do you maybe have any other suggestion for how to handle static pre-rendered pages that will be used for SEO and Open Graph tags and served only to bots?

And the most important question: I suppose the links in Google search would then include the additional /seo/ path segment, which is not good. Is there any solution to force Google to use the original links, or to redirect to the /seo/ URLs only when serving Open Graph tags for Facebook and other social networks, given that Google can actually parse JavaScript today?

Would sitemap.xml or robots.txt somehow be helpful for redirecting just the Facebook parser?