SQL Server Always On AG – Is a Passive Secondary Required for Automatic Failover?

It is my understanding that for automatic failover to occur in an Always On AG environment, the replicas just need to be configured for automatic failover with synchronous commit, and the environment must have proper quorum (i.e., an odd number of voting members).

However, according to the "SQL Server 2019 Always On" textbook by Peter A. Carter, for automatic failover to work, the secondary cannot be active; it must be passive. Passive means the Readable Secondary setting is set to 'No', so the replica is unavailable for direct connections or read access, and obviously no backups can be taken on it either.

Here is the exact statement from the textbook:

"Although we can change the value of Readable Secondary through the GUI while at the same time configuring a replica for automatic failover without error, this is simply a quirk of the wizard. In fact, the replica is not accessible since active secondaries are not supported when configured for automatic failover."

Here’s a screenshot of the "quirk":

[screenshot]

Note: According to the textbook, Readable Secondary should be set to ‘No’ if you want automatic failover to work. Thinking back, I don’t recall a successful automatic failover occurring with this setup, and I believe that when an issue occurred, the AG was left in a RESOLVING state. Note, we do have a third server for proper quorum.

I’ve searched through Microsoft’s documentation to confirm whether or not the author’s statement is true but have yet to find anything that explicitly states that the secondary replica must be passive for automatic failover to work. This Microsoft article (Failover and Failover Modes (Always On Availability Groups)) states the conditions required for automatic failover but does not mention needing a passive secondary replica.

So is the author correct and Microsoft has simply failed to document this, or is the author incorrect and automatic failover does work when the secondary replica is active/readable? My objective is a highly available environment with automatic failover, but do we need to sacrifice offloading read operations in order to accomplish this? Adding another server is not an option.
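For reference, the setting under discussion maps to the ALLOW_CONNECTIONS option of the replica's SECONDARY_ROLE. A sketch of the configuration being debated (the AG and server names are hypothetical):

```sql
-- Synchronous commit + automatic failover, with the secondary made passive
-- (Readable Secondary = No), as the textbook recommends.
ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'SQLNODE2'
    WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);

ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'SQLNODE2'
    WITH (FAILOVER_MODE = AUTOMATIC);

ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'SQLNODE2'
    WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = NO));
```

Setting ALLOW_CONNECTIONS = ALL (or READ_ONLY) on the secondary role instead is what the wizard's 'Readable Secondary = Yes' (or 'Read-intent only') choice produces.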

How to make my draggable object always on top?

My object is behind another object, and it also disappears when dragging it around, like this:

[screenshot]

As you can see, the dragged item is always behind the second one, and also behind all of the panels, like this:

[screenshot]

This is my draggable script:

    public event Action<PointerEventData> OnBeginDragHandler;
    public event Action<PointerEventData> OnDragHandler;
    public event Action<PointerEventData, bool> OnEndDragHandler;

    public bool FollowCursor { get; set; } = true;
    public Vector3 StartPosition;
    public bool CanDrag { get; set; } = true;

    private Color backgroundColor;
    [SerializeField] private Image backgroundImage;

    public RectTransform rectTransform;
    public Canvas canvas;

    private void Start()
    {
        rectTransform = GetComponent<RectTransform>();
        canvas = FindObjectOfType<Canvas>();
        backgroundImage = GetComponentInChildren<Image>();

        backgroundColor = backgroundImage.color;
    }

    public void OnBeginDrag(PointerEventData eventData)
    {
        backgroundColor.a = .4f;
        backgroundImage.color = backgroundColor;

        if (!CanDrag)
        {
            return;
        }

        OnBeginDragHandler?.Invoke(eventData);
    }

    public void OnDrag(PointerEventData eventData)
    {
        backgroundColor.a = 1f;

        if (!CanDrag)
        {
            return;
        }

        OnDragHandler?.Invoke(eventData);

        if (FollowCursor)
        {
            rectTransform.anchoredPosition += eventData.delta / canvas.scaleFactor;
        }
    }

    public void OnEndDrag(PointerEventData eventData)
    {
        backgroundColor.a = 1f;
        backgroundImage.color = backgroundColor;

        if (!CanDrag)
        {
            return;
        }

        var results = new List<RaycastResult>();
        EventSystem.current.RaycastAll(eventData, results);

        _DropArea dropArea = null;

        foreach (var result in results)
        {
            dropArea = result.gameObject.GetComponent<_DropArea>();

            if (dropArea != null)
            {
                break;
            }
        }

        if (dropArea != null)
        {
            if (dropArea.Accept(this))
            {
                dropArea.Drop(this);
                OnEndDragHandler?.Invoke(eventData, true);
                return;
            }
        }

        rectTransform.anchoredPosition = StartPosition;
        OnEndDragHandler?.Invoke(eventData, false);
    }

    public void OnInitializePotentialDrag(PointerEventData eventData)
    {
        StartPosition = rectTransform.anchoredPosition;
        rectTransform.SetAsFirstSibling();
    }
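Not part of the original post, but worth noting: Unity UI draws elements later in the hierarchy on top, so the `SetAsFirstSibling()` call above actually pushes the item behind its siblings. A hedged sketch of the usual fix (the `RestoreHierarchy` helper is hypothetical; it assumes the same `rectTransform`/`canvas` fields as the script above):

```csharp
// Sketch: bring the dragged element above its siblings (and other panels)
// for the duration of the drag.
private Transform originalParent;
private int originalSiblingIndex;

public void OnInitializePotentialDrag(PointerEventData eventData)
{
    StartPosition = rectTransform.anchoredPosition;
    originalParent = rectTransform.parent;
    originalSiblingIndex = rectTransform.GetSiblingIndex();

    // Re-parent to the root canvas and move to the end of the child list;
    // SetAsLastSibling draws it last, i.e. on top (SetAsFirstSibling does the opposite).
    rectTransform.SetParent(canvas.transform, worldPositionStays: true);
    rectTransform.SetAsLastSibling();
}

// Call this from OnEndDrag to put the element back where it came from.
private void RestoreHierarchy()
{
    rectTransform.SetParent(originalParent, worldPositionStays: true);
    rectTransform.SetSiblingIndex(originalSiblingIndex);
}
```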

Does the optimized column order for a PostgreSQL table always have variable length types at the end?

There’s a popular and seemingly authoritative blog post called On Rocks and Sand on how to optimize PostgreSQL tables for size, eliminating internal padding by re-ordering columns by their type length. It explains how variable-length types incur some extra padding if they’re not at the end of the table:

This means we can chain variable length columns all day long without introducing padding except at the right boundary. Consequently, we can deduce that variable length columns introduce no bloat so long as they’re at the end of a column listing.

And at the end of the post, to summarize:

Sort the columns by their type length as defined in pg_type.

There’s a library called pg_column_byte_packer that integrates with Ruby’s ActiveRecord to automatically re-order columns to reduce padding. The README in that repo cites the above blog post, and the library in general does the same thing the post describes.

However, pg_column_byte_packer does not return results consistent with the blog post it cites. The blog post sorts on PostgreSQL’s internal pg_type.typlen, which always puts variable-length columns at the end via an alignment of -1; pg_column_byte_packer instead gives them an alignment of 3.

pg_column_byte_packer has an explanatory comment:

    # These types generally have an alignment of 4 (as designated by pg_type
    # having a typalign value of 'i'), but they're special in that small values
    # have an optimized storage layout. Beyond the optimized storage layout, though,
    # these small values also are not required to respect the alignment the type
    # would otherwise have. Specifically, values with a size of at most 127 bytes
    # aren't aligned. That 127 byte cap, however, includes an overhead byte to store
    # the length, and so in reality the max is 126 bytes. Interestingly TOASTable
    # values are also treated that way, but we don't have a good way of knowing which
    # values those will be.
    #
    # See: `fill_val()` in src/backend/access/common/heaptuple.c (in the conditional
    # `else if (att->attlen == -1)` branch).
    #
    # When no limit modifier has been applied we don't have a good heuristic for
    # determining which columns are likely to be long or short, so we currently
    # just slot them all after the columns we believe will always be long.

The comment appears to be not wrong, as text columns do have a pg_type.typalign of 4, but they also have a pg_type.typlen of -1, which the blog post argues gets the most optimal packing when placed at the end of the table.

So in the case of a table that has a four-byte-aligned column, a text column, and a two-byte-aligned column, pg_column_byte_packer will put the text column right in between the two. They’ve even got a unit test to assert that this always happens.

My question here is: what order of columns actually packs for minimal space? The comment from pg_column_byte_packer appears to be not wrong, as text columns do have a pg_type.typalign of 4, but they also have a pg_type.typlen of -1.
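To make the disagreement concrete, here is a small Python sketch of the packing model described above. It is deliberately simplified (it ignores the tuple header and null bitmap, and assumes a short 10-byte text payload, per the short-varlena rule quoted from the library's comment):

```python
def row_size(columns):
    """Data size of a heap tuple for (name, typlen, typalign) columns.

    Simplified model: fixed-length columns (typlen > 0) are padded to their
    alignment boundary; a short varlena (typlen == -1, payload < 127 bytes)
    takes 1 header byte plus its payload and needs no alignment padding.
    """
    offset = 0
    for _name, typlen, typalign in columns:
        if typlen == -1:
            offset += 1 + 10  # 1-byte short-varlena header + assumed 10-byte payload
        else:
            offset += (-offset) % typalign  # pad up to the alignment boundary
            offset += typlen
    return offset

# int4 (len 4, align 4), short text (varlena), int2 (len 2, align 2)
mixed     = [("a", 4, 4), ("t", -1, 4), ("b", 2, 2)]
text_last = [("a", 4, 4), ("b", 2, 2), ("t", -1, 4)]
print(row_size(mixed), row_size(text_last))  # 18 vs 17: text-last wins here
```

Under this model, a short varlena placed mid-row can force padding before the next aligned column, so the blog post's text-at-the-end ordering is never worse for short values; for long or TOASTed values, the 4-byte alignment the library assumes can come into play.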

Is a natural 20 always a critical hit?

Critical Hits agree:

When you make an attack and succeed with a natural 20 (the number on the die is 20), or if the result of your attack exceeds the target’s AC by 10, you achieve a critical success (also known as a critical hit).

Determine the Degree of Success disagrees:

If you rolled a 20 on the die (a “natural 20”), your result is one degree of success better than it would be by numbers alone. This means that a natural 20 usually results in a critical success and a natural 1 usually results in a critical failure. However, if you were going up against a very high DC, you might get only a success with a natural 20, or even a failure if 20 plus your total modifier is 10 or more below the DC.

So if I roll a 20 on my third attack, but the result is still lower than the enemy’s AC (because of MAP, for example), is it a critical hit or a normal hit?

Does the use of a gunner major action always preclude the use of the Snap Shot minor crew action?

In Starfinder, during starship combat, the Snap Shot minor crew action allows a crew member who has taken a major crew action earlier in the round to fire a starship weapon during the gunnery phase. The rules state:

You can fire one of your starship’s weapons with a –2 penalty to the gunnery check. You can take this action only if no other gunner actions have been taken during the gunnery phase (including snap shot).

The wording of the second sentence raises questions. Does it disallow Snap Shot actions completely if another character intends to take a gunner major action? Does it disallow Snap Shot actions unless they’re taken before all gunner major actions for the round? Or does it simply disallow Snap Shot if the character attempting it has already carried out a gunner major action?

The scenario I am facing in play is this: a pilot has completed a pilot major action in the Helm phase and positioned the ship to put the enemy ship in the port quadrant. There are two gunners on board. The first gunner has fired at the enemy ship with a turreted weapon. The second gunner has fired at the enemy ship with a port-arc weapon. The pilot’s player points out that the ship also has a forward-arc weapon with the broad arc property, meaning that it can target ships in the port or starboard arcs with a -2 penalty. He wishes to use Snap Shot to fire the broad-arc weapon at the ship in the port arc with a cumulative -4 penalty (-2 for firing outside the weapon’s normal arc and -2 for the Snap Shot itself). This seems like a reasonable request to me, but do the rules preclude it? If they do, then had the pilot declared this intention before the other gunners rolled, would he have been able to do it within the rules?

When sharing the Eyes of Night darkvision, does a creature need to stay within 10 feet of the cleric to keep the benefits?

The Eyes of Night feature from the Twilight Domain Cleric, introduced in Tasha’s Cauldron of Everything pg. 34, grants darkvision to the cleric:

You can see through the deepest gloom. You have darkvision out to a range of 300 feet.

It also allows the cleric to share this darkvision with willing creatures:

As an action, you can magically share the darkvision of this feature with willing creatures you can see within 10 feet of you, up to a number of creatures equal to your Wisdom modifier (minimum of one creature). The shared darkvision lasts for 1 hour. […]

It’s clear that the creature needs to be within 10 feet of the cleric when he uses an action to share the darkvision. But once shared, does that creature need to remain within 10 feet of the cleric to keep the benefits of the darkvision from Eyes of Night? Since the sharing has a duration of 1 hour, I’m wondering whether a creature that wanders far away from the cleric would still be granted this benefit.

Do weapon-based qualities always have to be used with the weapon that triggers them?

There are several qualities in the game that relate to specific weapon categories, most notably the "[weapon category] Fighter I-III" line. Some of these are pretty clear in that their benefits apply only to that weapon, such as Short Blade Fighter I adding Piercing 1 to the weapon, or Polearm Fighter I stating "you can sweep your pole-arm". Others are pretty clear in that they don’t – Water Dancer I and II require points in Fencing, but grant passive benefits that are written to be universally active.

Some don’t mention anything of the sort, but can be argued to be self-evident. Spear Fighter I doesn’t mention spears at all, and by RAW probably could be used with any weapon, but I would be receptive to the argument that the quality is meant for use with spears only. The flavour text supports this.

Where I run into trouble is the qualities that require you to have a weapon equipped but don’t explicitly require its use. If I look at Bravoosi Fighter III, for example, it requires the character to have a fencing weapon equipped. However, imagine someone using a left-hand dagger (a fencing weapon) in one hand and another one-handed weapon, maybe a battleaxe or longsword*, in the other. It could well be argued that the character is "armed" with a fencing weapon and thus fulfills the requirements to use this quality, but could make the counterattack with the other (more damaging) weapon. It’s not even implausible in terms of fencing; on the contrary, using one weapon to parry and the other to counter is perfectly sensible. Several other such qualities can lead to similar combinations.

I assume that the RAW here is just sloppily written, but do we have any source on what the RAI is and what are the balance considerations towards allowing or disallowing such combos?

*in the sense of an arming sword that the game calls a longsword.

Is the `unity_ObjectToWorld` shader variable always set to default for dynamically batched objects?

I’m using the standard render pipeline and the unity_ObjectToWorld variable for some calculations in my shader. After I enabled dynamic batching, these calculations broke. It seems that unity_ObjectToWorld is set to its default when objects are batched. Is this by design? I didn’t find anything about it in the documentation.

Is it always safe to use WITH SCHEMABINDING in a UDF?

So I have been reading about WITH SCHEMABINDING and how it can improve the performance of queries using a scalar UDF by omitting the table spool operator from the execution plan. I think I understand Halloween protection.

My question is: if I add WITH SCHEMABINDING to a UDF used in a stored procedure, is it possible that the procedure no longer gives the same results? If yes, in what scenario?
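For reference, adding the option only requires that the UDF reference objects by two-part names; it doesn't change the function's logic, only what the optimizer may assume about it. A sketch (the table and column names are hypothetical):

```sql
-- Hypothetical scalar UDF with schema binding. SCHEMABINDING requires
-- two-part object names and prevents the referenced table from being
-- dropped or altered in a way that would break the function.
CREATE FUNCTION dbo.TotalForCustomer (@CustomerId int)
RETURNS money
WITH SCHEMABINDING
AS
BEGIN
    RETURN (SELECT SUM(Amount)
            FROM dbo.Orders
            WHERE CustomerId = @CustomerId);
END;
```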