Help me Make Sense of Unity 2D Resolution Management

I am trying to set up an environment for a retro-style pixel-art game in Unity 2D. I am aiming for a vertical resolution of 240 pixels, and I would like the horizontal resolution to depend on the target screen’s aspect ratio: 320×240 on a 4:3 screen, around 432×240 on 16:9, and so on.
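As a quick sanity check on those target widths: the width that exactly matches a given aspect ratio at a fixed 240 px height is just height × (aspect ratio), so 432×240 is actually slightly wider than true 16:9. A throwaway sketch (the helper is my own illustration, not a Unity API):

```python
# Width implied by a fixed 240-pixel vertical resolution at a given aspect ratio.
def width_for(aspect_w, aspect_h, height=240):
    return height * aspect_w / aspect_h

print(width_for(4, 3))   # 320.0
print(width_for(16, 9))  # 426.66..., so exact 16:9 at 240 px tall is ~427 wide
```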

I have read a bunch of tutorials, added a Pixel Perfect Camera component to my main camera, set a target resolution in the Player settings, and set Pixels Per Unit to 8 everywhere, but I still cannot make heads or tails of how it actually works.

Here is an image of my scene with my camera selected:


If I am correct, the dashed line of the camera is my target resolution, although that is odd: the green image I used as a background has a native vertical resolution of 216 pixels, yet it fills the supposedly 240-pixel-high camera. I have no idea what the solid green line is, and don’t even get me started on the canvas, whose render mode is set to Screen Space - Camera, yet it is obviously many times bigger than that.

Can someone please explain how to properly set up the 2D environment described above, point out my mistakes, point me to a good tutorial, or just give me advice on understanding Unity’s resolution management?

Box2d: High screen resolution / frequency causes high friction?

I’m using Cocos Creator with (built-in) box2d for physics.

Recently our game has behaved strangely on our new device, a Galaxy S20 Ultra 5G, which has a 1440 × 3200 screen with a 120 Hz refresh rate.

After we stop pushing them, all our physics bodies stop almost immediately, as if they had very high friction. No other device reacts that way.

Has anyone experienced this issue and can offer advice?
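For context (not necessarily the cause here): if a physics world is stepped once per rendered frame using the frame’s delta time, a 120 Hz display runs twice as many steps per second as a 60 Hz one, so per-step effects such as damping apply twice as often and bodies appear to stop faster. The standard remedy is a fixed-timestep accumulator that decouples simulation rate from display rate. A generic sketch with a stand-in world object, not Cocos Creator’s actual API:

```python
FIXED_DT = 1.0 / 60.0  # physics step size, independent of display refresh rate

def advance(world, frame_dt, accumulator):
    """Run zero or more fixed-size physics steps to consume the elapsed frame time."""
    accumulator += frame_dt
    while accumulator >= FIXED_DT:
        world.step(FIXED_DT)
        accumulator -= FIXED_DT
    return accumulator

class CountingWorld:
    """Stand-in for a physics world; just counts how many times it is stepped."""
    def __init__(self):
        self.steps = 0
    def step(self, dt):
        self.steps += 1

world = CountingWorld()
acc = 0.0
for _ in range(120):  # one second's worth of 120 Hz frames
    acc = advance(world, 1.0 / 120.0, acc)
print(world.steps)    # 60: two display frames per physics step, regardless of Hz
```

With this pattern a 120 Hz device and a 60 Hz device both simulate 60 physics steps per second, so friction and damping behave identically on each.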

OpenVPN works on Ubuntu but not Android – Name Resolution [migrated]

Setup:
Server1 – Primary DNS/Plesk
Server2 – Secondary DNS
Server3 – OpenVPN

On my local computer running Ubuntu 20.04, I can successfully connect to the OpenVPN server and browse any website. My public IP address shows as the SERVER3 IP address.

On my Android, I can successfully connect to the OpenVPN server but I can only browse websites hosted on Server1. All other websites get the DNS_PROBE_FINISHED_BAD_CONFIG error message. In the OpenVPN app it shows a successful connection and the correct IP Addresses.

I am using the exact same configuration file on both devices. Note that different certificates are used for each connection.

Looking at the syslog on Server1, I see:

client @0x7f79480ea2b0 ANDROID-PUBLIC-IP-ADDRESS#50743 (www.facebook.com): query (cache) 'www.facebook.com/A/IN' denied 

I don’t get these errors when browsing on the Ubuntu box.
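One way to read that denied line: BIND logs the source address of the query immediately before the port number. If the Android device’s queries were going through the tunnel, that source would be an address in the 10.8.0.0/24 VPN subnet; the log instead shows the phone’s public IP, meaning the queries reached the DNS server outside the tunnel, where recursion is denied. A tiny parsing sketch (203.0.113.7 is a documentation placeholder standing in for the anonymized address):

```python
import re

# BIND "query ... denied" line from the question, with the client anonymized.
log = ("client @0x7f79480ea2b0 203.0.113.7#50743 (www.facebook.com): "
      "query (cache) 'www.facebook.com/A/IN' denied")

source_ip = re.search(r"client \S+ ([\d.]+)#\d+", log).group(1)
print(source_ip)                         # the address BIND saw the query from
print(source_ip.startswith("10.8.0."))   # False: the query bypassed the tunnel
```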

My ovpn file:

dev tun
proto tcp
remote SERVER3 IP 443
resolv-retry infinite
nobind
user nobody
group nogroup
persist-key
persist-tun
remote-cert-tls server
cipher AES-256-GCM
auth SHA256
verb 3
key-direction 1
<certificates are here>

My OpenVPN Config file:

management 127.0.0.1 5555
dev tun
ca ca.crt
cert server.crt
key server.key  # This file should be kept secret
dh none
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist /var/log/openvpn/ipp.txt
push "dhcp-option DNS SERVER1 IP"
push "dhcp-option DNS SERVER2 IP"
keepalive 10 120
tls-crypt ta.key
cipher AES-256-GCM
auth SHA256
user nobody
group nogroup
persist-key
persist-tun
status /var/log/openvpn/openvpn-status.log
log /var/log/openvpn/openvpn.log
log-append /var/log/openvpn/openvpn.log
verb 3
explicit-exit-notify 0

What were the shortcomings of Robinson’s resolution procedure?

Paulson et al., in From LCF to Isabelle/HOL, say:

Resolution for first-order logic, complete in principle but frequently disappointing in practice.

I think complete means the method can prove any valid formula of first-order logic. In the Handbook of Automated Reasoning I find:

Resolution is a refutationally complete theorem proving method: a contradiction (i.e., the empty clause) can be deduced from any unsatisfiable set of clauses.

From Wikipedia:

Attempting to prove a satisfiable first-order formula as unsatisfiable may result in a nonterminating computation

Why is that disappointing?
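To make "refutationally complete" concrete: in the propositional case, saturating a clause set under the resolution rule derives the empty clause exactly when the set is unsatisfiable, and saturation always terminates. A toy illustration (literals as signed integers, my own sketch rather than an efficient prover); first-order resolution adds unification and can generate ever-larger terms, which is where nontermination on satisfiable inputs enters:

```python
from itertools import combinations

def resolvents(c1, c2):
    """All clauses obtained by resolving c1 and c2 on one complementary literal pair."""
    return {frozenset((c1 - {lit}) | (c2 - {-lit})) for lit in c1 if -lit in c2}

def refute(clause_set):
    """Saturate under resolution; True iff the empty clause is derived (unsatisfiable)."""
    clauses = {frozenset(c) for c in clause_set}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:          # derived the empty clause: contradiction
                    return True
                new.add(r)
        if new <= clauses:         # saturated with no empty clause: satisfiable
            return False
        clauses |= new

# {p}, {~p or q}, {~q} is unsatisfiable; {p} alone is satisfiable.
print(refute([{1}, {-1, 2}, {-2}]))  # True
print(refute([{1}]))                 # False
```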

Order of resolution of several identical initiative Attacks

It sometimes happens that several identical monsters, all using the same initiative, attack a single character.

In my games, this is often a volley of missile fire at a caster.

For pure practicality, I typically roll all the attack rolls together, count up the hits, and then roll all the damage, and assign it to the player as a single total. If the potential damage total is less than their current hit points, there is little difference between this approach and RAW.

However, when there is a potential for damage to render a character unconscious, this approach does differ substantially from RAW, at least for my understanding of them. RAW, I should roll each attack and record each instance of damage separately. Upon the first hit that renders the character unconscious, they immediately drop prone. The subsequent attacks, although occurring on the same initiative, in some sense come ‘after’ the character has fallen prone. Thus they are at disadvantage to hit (assuming missile attacks) but each one that does hit indicates a failed death save, so that three such hits would result in the character’s death.

Questions:

  1. Is my understanding of the situation with RAW correct?

  2. Is there any difference in this situation between multiple attackers on the same initiative and an attacker with multiattack?

  3. Suppose I choose to roll all attacks and damage at once, even in situations where the potential damage was more than a PC’s current hp. [In this case, the chance of the attackers hitting would increase, the chance of the PC going unconscious would increase, but the chance of the PC dying from failed death saves would decrease.] Can this decision be reconciled with (Initiative; PHB, p. 189):

    If a tie occurs, the DM decides the order among tied DM-controlled creatures,

    that is, I have decided to resolve these ties simultaneously? Or would such a choice violate RAW and require me to invoke Rule 0?

Related: How do creatures moving on the same initiative handle the effects of Sleep and Hypnotic Pattern?

What can I read about how we tie the stochastic characteristics of task resolution into statements about a game system’s aesthetics? [closed]

I like making RPG systems. One thing I’ve noticed is that different kinds of task resolution systems make the game significantly different.

Background

For example, games like D&D 3.X and Shadowrun 4E have a very details-oriented approach to task resolution. A typical die roll in combat might be something like 1d20+1+1+4+3+(7+2+3)*1.5+20-2 vs. 10+8+min(4,1)+5+3+2+5, where each number comes from a different source, and things like "I enjoyed breakfast greatly! +3 to hit" and "My shoes are freshly polished, for +1 max Dex mod to AC" matter greatly.

There are a limited number of modifiers and choosing the right combination for any given character is immensely important to the character’s success in the game.

Other games, like FATE 2.0 or Amber Diceless, have a different approach. There a typical task looks like 5+4dF vs 3+4dF±2. All of the things that are tracked carefully in the first examples are abstracted away into a single modifier. This modifier generally does not exceed 50% of the base skill amount, and is generally regarded as less important than having a higher base skill. (In Amber Diceless the ‘rolls’ are even more extreme: 1±1 vs 3±1 is an example of a task’s mechanical description there.)

I am comfortable talking about this kind of difference between RPGs in general. We can talk about levels of abstraction, we can talk about focus, we can describe a system as ‘high-level’ or ‘detail-oriented’ or whatever.

The problem

What I am less comfortable with is the manner in which the stochastic character of a system’s task resolution comes off to participants of RPGs run in it.

For example, I can tell you that the absence of dice in Amber significantly changes the feel of the game versus a similar setting modeled and run in FATE 2.0.
I’m much less articulate as to what the actual differences are, though. I’m aware of some popular pieces on randomness in RPGs, like the ‘goblin dice’ thing, but none of them really talk about the full space of stochastic design available to us as game designers. We can talk about how 2d6 is ‘less swingy’ than 1d13, but how using one or the other more commonly for some hypothetical ruleset would influence our aesthetic perception of that ruleset is not immediately clear.
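The 2d6-versus-1d13 comparison can be made precise: both distributions have mean 7, but their variances differ by more than a factor of two, which is one concrete number to attach to "swingy". A quick sketch:

```python
from itertools import product
from statistics import mean, pvariance

def outcomes(sides, count):
    """Exhaustive list of sums for `count` fair dice with `sides` sides each."""
    return [sum(roll) for roll in product(range(1, sides + 1), repeat=count)]

two_d6 = outcomes(6, 2)
one_d13 = outcomes(13, 1)

print(mean(two_d6), pvariance(two_d6))    # mean 7, variance 35/6 ~ 5.83
print(mean(one_d13), pvariance(one_d13))  # mean 7, variance 14
```

Same expected result, very different spread around it; that gap is what "less swingy" quantifies, before we even get to the aesthetic question of how it feels at the table.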

I’m looking for a published overview of ways that different features of a task resolution system (in terms of stochastic analysis) are relevant to the ‘feel’ (i.e. the perception of aesthetic qualities) of the overall game system from a game-design perspective. In particular, I’m interested in the impact of the magnitude of the stochastic variance of the resolution system on the system, as well as the impact of greater or lesser volatility, and of polynomialization of the distribution (i.e. how binomial, trinomial, etc distribution for a game’s randomizer affects the game’s overall aesthetic).

Basically, I’m looking to read published work addressing the question: How do we tie the stochastic characteristics of task resolution into a statement about the experience of using a particular role-playing game system?

What makes a good answer?

Answers will recommend further reading on the topic to support the claims made in their shorter overview. IJRP preferred. I’m looking for an overview, not a full discussion: it’s sufficient to provide references to appropriate academic literature and to explain how (and that) the literature answers the question. Also, since comments indicate that people are searching primarily for online sources, let it be explicitly mentioned that offline sources like books are no less good for being offline (RPGs may be young, but they certainly predate widespread internet use).

From Analytics data, easy way to get % of visitors with a screen resolution of X or larger

Is there any easy way to take the analytics data for a website, for example from Google Analytics, and query it to get the % of users with a browser resolution of X or larger?

It’s good to know the top resolutions in use, but I want to find out, for example, whether I can drop support for very small resolutions, so it would be nice to know the % of users at 1024 px wide or smaller.

I was wondering if there is an easy way to achieve this somewhere in Google Analytics or in an external tool. Right now I’m parsing the raw data and doing this in a custom program I wrote.
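Absent a built-in cumulative view, the post-processing step is only a few lines once the resolution report is exported. A sketch with made-up numbers (the row data below is hypothetical, standing in for exported (width, visitors) pairs):

```python
# Hypothetical (screen_width, visitor_count) rows from a resolution report export.
rows = [(1920, 5200), (1366, 4100), (1024, 900), (800, 300), (360, 1500)]

def pct_at_least(rows, min_width):
    """Percent of visitors whose screen width is at least `min_width`."""
    total = sum(n for _, n in rows)
    return 100.0 * sum(n for w, n in rows if w >= min_width) / total

print(pct_at_least(rows, 1024))  # 85.0 -> 15% of these visitors are below 1024
```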

Problems with using a non-reserved top-level domain for local DNS resolution

A network administrator at my organization (let’s call him "Bill") wants to configure an internal DNS using the live top-level domain (TLD) .int for internal IP address resolution (for Active Directory, internal websites, etc.). For example, the domain exampleinternalsite.int would resolve to some internal site that isn’t visible to the public. Our organization has not registered these domain names with a registrar. Now I know that this is bad practice, but Bill remains unconvinced that this shouldn’t be done.

What are the problems with using a live top-level domain for internal name resolution? Specifically, what are the security implications? And does this conflict in some fundamental way with how DNS and name resolution are supposed to work?
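For reference, RFC 2606 (with special-use handling in RFC 6761) reserves a short list of TLDs that are guaranteed never to go live, which is exactly the property .int lacks: .int is a real delegated TLD for intergovernmental treaty organizations, so internal names under it can collide with public DNS. A trivial check (the helper function is my own, not a standard API):

```python
# TLDs reserved by RFC 2606; names under these can never collide with live DNS.
RFC2606_RESERVED = {"test", "example", "invalid", "localhost"}

def is_collision_safe(hostname):
    """True if the hostname ends in an RFC 2606 reserved TLD."""
    return hostname.rsplit(".", 1)[-1].lower() in RFC2606_RESERVED

print(is_collision_safe("exampleinternalsite.int"))   # False: .int is live
print(is_collision_safe("exampleinternalsite.test"))  # True: reserved for this
```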

Note: I originally asked this question on Network Engineering SE and was kindly referred over to this site as a better place for this question.