Rigidbody appears to jitter when placed as a sibling to a First Person camera

In my game, the player can pick up an object (which gets parented to the first-person camera), or push an object (which is parented to the player, but not their first-person camera). Both mechanics use very similar implementations with regard to moving the objects, yet pushing produces a pretty severe amount of 'perceived' jitter: as far as I can tell, the object isn't actually jittering.

(Side note: I wasn't sure where to upload footage, so I used MEGA; the links can be used just to view the footage.)

Footage of the push, with visible jitter

Footage of the lift for comparison

Footage of the push but parented to the first-person camera

Footage of push frame-by-frame from the side at 0.50 timescale

Footage of the lift frame-by-frame from the side at 0.50 timescale

The object is moved in FixedUpdate and the camera is moved in LateUpdate. You can see pretty clearly in the side view that the camera moves after the object. Since the object is parented to the camera on Lift, it moves with the camera and no change is perceived. However, on Push it doesn't move with the camera and thus appears to jitter.

I'm not sure how to fix this. I have tried enabling interpolation on the rigidbody; the jitter occurs even if gravity is disabled and the object is lifted off the ground (i.e., it isn't caused by collision with the floor). I have also tried the following code in LateUpdate to sort of nudge the object forwards:

```csharp
protected void CheckCameraPosition() {
    if (m_LastCameraPosition == Vector3.zero)
        return;

    Vector3 newPosition = m_FPCamera.Transform.position;
    if (newPosition == m_LastCameraPosition)
        return;

    // Apply only the horizontal part of the camera's movement this frame.
    Vector3 newOffset = newPosition - m_LastCameraPosition;
    newOffset.y = 0f;
    Transform.position += newOffset;
}
```

But the object still jitters, and were I to seriously consider the above code I'd be worried about race conditions, since the camera movement is executed in LateUpdate as well.
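
For what it's worth, one way to take the race out of that approach (a sketch only; PushedObjectFollower and the order value 100 are made-up names, and it assumes the camera is moved from another script's LateUpdate) is to give the nudge script a later execution order, so its LateUpdate always runs after the camera has settled for the frame:

```csharp
using UnityEngine;

// Runs after scripts with lower (default 0) execution order, so the camera's
// LateUpdate has already moved the camera when this LateUpdate executes.
[DefaultExecutionOrder(100)]
public class PushedObjectFollower : MonoBehaviour {
    [SerializeField] private Transform m_CameraTransform;
    private Vector3 m_LastCameraPosition;

    private void LateUpdate() {
        if (m_LastCameraPosition != Vector3.zero) {
            // Apply only the horizontal part of the camera's movement this frame.
            Vector3 offset = m_CameraTransform.position - m_LastCameraPosition;
            offset.y = 0f;
            transform.position += offset;
        }
        m_LastCameraPosition = m_CameraTransform.position;
    }
}
```

The same ordering can also be configured in Project Settings > Script Execution Order instead of using the attribute.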

How to cache the main camera as a global variable?

I want to create a utility class that will, among other things, return a cached reference to the main camera. Is the below code correct? My concern is that I’m doing it wrong and that FindGameObjectWithTag is being called each and every time the getter is referenced.

```csharp
using UnityEngine;

public static class Utils {
    public static Camera MainCamera { get; } = Camera.main;
}
```
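
For comparison, a lazy-caching sketch (assuming the reference should be re-queried if the cached camera is destroyed, e.g. after a scene load) would only hit Camera.main when the cache is empty:

```csharp
using UnityEngine;

public static class Utils {
    private static Camera s_MainCamera;

    public static Camera MainCamera {
        get {
            // Unity's overloaded == treats a destroyed Camera as null,
            // so this re-queries after scene loads as well.
            if (s_MainCamera == null)
                s_MainCamera = Camera.main;
            return s_MainCamera;
        }
    }
}
```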

Using a rotation matrix to transform/shift a pinhole camera

I have a pinhole camera model with the following extrinsic (in the Earth-Centered, Earth-Fixed (ECEF) coordinate system) and intrinsic parameters.

focal length (x,y) = 55000 px, optical center = (2400,540)

camera center (x, y, z) in ground coordinates (ECEF) = (-2322996.2171387854, -3875494.0767072071, 5183320.6008059494)

Rotation matrix (3 × 3, camera to ground frame) =
[[ 0.88982706839551795, -0.45517069374030594,  0.032053516353234932],
 [-0.44472722029994571, -0.84940151315102252,  0.28413864394171567],
 [-0.10210527838913171, -0.26708932778514777, -0.95824725572701552]]

I need to shift the camera so that it points to the correct position on the ground, based on an ECEF transformation matrix (4 × 4), which looks like this:

[[ 0.99999922456661872,    0.00043965959331068635, -0.0011651461883787318,   7033.5303197340108],
 [-0.00044011741039666426, 0.99999982604190574,    -0.00039269946235032278,  814.02427618065849],
 [ 0.0011649733316053631,  0.00039321195895935108,  0.99999924411047925,     4139.9400998316705],
 [ 0, 0, 0, 1]]

The 3 × 3 block formed by the first three rows and columns is the rotation component, and the first three values of the last column are the translation component. My general understanding is that I need to add the translation component to the camera center coordinates, and multiply the camera-to-ground rotation matrix by the rotation component. Is this sufficient, or would I need to do something extra?
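
For what it's worth, here is a sketch of the composition I would expect, assuming the 4 × 4 matrix T acts on ECEF points as X' = R_T X + t (with R_T its upper-left 3 × 3 block and t its translation column):

```latex
\begin{aligned}
C'          &= R_T\,C + t      &&\text{(new camera centre)}\\
R'_{c\to g} &= R_T\,R_{c\to g} &&\text{(new camera-to-ground rotation)}
\end{aligned}
```

Under that convention the camera centre is also rotated by R_T before the translation is added; since the ECEF coordinates are on the order of millions of metres, even the small off-diagonal terms of R_T shift the centre by kilometres, so simply adding the translation is not equivalent.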

Unity 2019.2.5: Convert Mouse Rotate/Pan/Zoom camera to touch

I have a Unity C# script that uses the mouse to rotate/pan/zoom a camera around a scene with a main focal point (PC build). It works very well for me, but I need to convert it for a kiosk touch screen (Windows Surface Pro) and have had no luck modifying the script or finding answers. Would some kind soul please save me from this hell!

My current working mouse script is below.

Any assistance is GREATLY appreciated!

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;   // needed for MonoBehaviour, Transform, Input, etc.

public class MouseOrbit : MonoBehaviour {
    public Transform target;
    public float maxOffsetDistance = 2000f;
    public float orbitSpeed = 15f;
    public float panSpeed = .5f;
    public float zoomSpeed = 10f;
    private Vector3 targetOffset = Vector3.zero;
    private Vector3 targetPosition;

    // Use this for initialization
    void Start() {
        if (target != null) transform.LookAt(target);
    }

    void Update() {
        if (target != null) {
            targetPosition = target.position + targetOffset;

            // Left Mouse to Orbit
            if (Input.GetMouseButton(0)) {
                transform.RotateAround(targetPosition, Vector3.up, Input.GetAxis("Mouse X") * orbitSpeed);
                float pitchAngle = Vector3.Angle(Vector3.up, transform.forward);
                float pitchDelta = -Input.GetAxis("Mouse Y") * orbitSpeed;
                float newAngle = Mathf.Clamp(pitchAngle + pitchDelta, 0f, 180f);
                pitchDelta = newAngle - pitchAngle;
                transform.RotateAround(targetPosition, transform.right, pitchDelta);
            }

            // Right Mouse to Pan
            if (Input.GetMouseButton(1)) {
                Vector3 offset = transform.right * -Input.GetAxis("Mouse X") * panSpeed + transform.up * -Input.GetAxis("Mouse Y") * panSpeed;
                Vector3 newTargetOffset = Vector3.ClampMagnitude(targetOffset + offset, maxOffsetDistance);
                transform.position += newTargetOffset - targetOffset;
                targetOffset = newTargetOffset;
            }

            // Scroll to Zoom
            transform.position += transform.forward * Input.GetAxis("Mouse ScrollWheel") * zoomSpeed;
        }
    }
} // CLASS
```
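
In case it helps as a starting point, here is a minimal touch-based sketch, assuming one finger orbits, two fingers pan, and a pinch zooms. The class name TouchOrbit and the speed values are hypothetical and will need tuning, and the pitch clamp from the mouse version is omitted for brevity:

```csharp
using UnityEngine;

public class TouchOrbit : MonoBehaviour {
    public Transform target;
    public float maxOffsetDistance = 2000f;
    public float orbitSpeed = 0.25f;   // degrees per pixel of drag
    public float panSpeed = 0.01f;
    public float zoomSpeed = 0.05f;
    private Vector3 targetOffset = Vector3.zero;

    void Update() {
        if (target == null) return;
        Vector3 targetPosition = target.position + targetOffset;

        if (Input.touchCount == 1) {
            // One finger: orbit, mirroring the left-mouse branch.
            Vector2 delta = Input.GetTouch(0).deltaPosition;
            transform.RotateAround(targetPosition, Vector3.up, delta.x * orbitSpeed);
            transform.RotateAround(targetPosition, transform.right, -delta.y * orbitSpeed);
        }
        else if (Input.touchCount == 2) {
            Touch t0 = Input.GetTouch(0);
            Touch t1 = Input.GetTouch(1);

            // Pinch to zoom: compare current and previous finger separation.
            float prevDist = ((t0.position - t0.deltaPosition) - (t1.position - t1.deltaPosition)).magnitude;
            float currDist = (t0.position - t1.position).magnitude;
            transform.position += transform.forward * (currDist - prevDist) * zoomSpeed;

            // Two-finger drag to pan, using the average finger movement.
            Vector2 avgDelta = (t0.deltaPosition + t1.deltaPosition) * 0.5f;
            Vector3 offset = (transform.right * -avgDelta.x + transform.up * -avgDelta.y) * panSpeed;
            Vector3 newTargetOffset = Vector3.ClampMagnitude(targetOffset + offset, maxOffsetDistance);
            transform.position += newTargetOffset - targetOffset;
            targetOffset = newTargetOffset;
        }
    }
}
```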

Customize default Prompt options (camera to share) when I trigger camera in Firefox with code development

I have two cameras on my PC. When I trigger the camera in Firefox, it always opens a permission prompt with a list of webcam options, but I don't want the camera options to show in that prompt. Is there any way to customize the Firefox prompt through JavaScript? (I am a developer, so I want to fix this in code within my web application.) Can anyone please help me resolve this problem?

Rotate object always at same speed on screen, no matter camera distance?

I am rotating a globe like XCOM's hologlobe.


I rotate it using Quaternion.RotateTowards(Quaternion from, Quaternion to, float maxDegreesDelta).

I found a good value for maxDegreesDelta, in my case it is 5.0f.

There is a limit on how close or far the camera can be; let's assume close is 1.0f and far is 2.0f.

I want to be able to zoom into the globe, but obviously when I do, the rotation becomes a bit too fast.

When zoomed out, the rotation speed is satisfying.


When zoomed in, the rotation is too fast, making the globe more difficult to manipulate.


And the problem is even more evident as the game view gets bigger, e.g. fullscreen.

Using Mathf.Lerp and Mathf.InverseLerp, I've tried to make maxDegreesDelta and the mouse delta proportional to the camera's distance, but the result is hardly convincing.

Note: I rotate the globe, not the camera.

Question:

How can I ensure the object rotates at the same speed on screen, no matter how close or far the camera is?
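
One approach, as a sketch (the class GlobeRotator and the reference values below are hypothetical, and it assumes a perspective camera and that the globe is driven by RotateTowards each frame): for small angles, the on-screen speed of a point on the globe is roughly proportional to the angular speed divided by the camera distance, so scaling maxDegreesDelta linearly with distance keeps the apparent speed roughly constant.

```csharp
using UnityEngine;

public class GlobeRotator : MonoBehaviour {
    public Camera viewCamera;
    public Quaternion targetRotation;
    public float referenceDistance = 2.0f;      // distance at which the base speed was tuned (zoomed out)
    public float referenceDegreesDelta = 5.0f;  // max degrees per step at that distance

    void Update() {
        // Scale the angular step linearly with camera distance so the
        // apparent on-screen speed stays roughly constant while zooming.
        float distance = Vector3.Distance(viewCamera.transform.position, transform.position);
        float maxDegreesDelta = referenceDegreesDelta * (distance / referenceDistance);

        transform.rotation = Quaternion.RotateTowards(
            transform.rotation, targetRotation, maxDegreesDelta);
    }
}
```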

The camera does not capture movements in z-axis

I want to make a simple drag and drop card in Unity, where the card pops up a bit towards the screen when the drag begins (lifting effect), and then drops down back on the table when the drag ends.

My script looks like this:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

public class Draggable : MonoBehaviour, IBeginDragHandler, IDragHandler, IEndDragHandler {
    void IBeginDragHandler.OnBeginDrag(PointerEventData eventData)
    {
        // Pop the card towards the screen when the drag begins.
        this.transform.position = new Vector3(this.transform.position.x, this.transform.position.y, this.transform.position.z - 10);
    }

    public void OnDrag(PointerEventData eventData)
    {
        // Follow the mouse while keeping the card "lifted".
        this.transform.position = new Vector3(Input.mousePosition.x, Input.mousePosition.y, -10);
    }

    public void OnEndDrag(PointerEventData eventData)
    {
        // Drop the card back down when the drag ends.
        this.transform.position = new Vector3(this.transform.position.x, this.transform.position.y, this.transform.position.z + 10);
    }
}
```

The card is an image, and it is child of the canvas. The render mode for the canvas is Screen Space - Overlay.

The drag works correctly, but the movement in the z-axis is not captured.
It does capture the z-axis movement when I change the render mode; however, the card is then too far away along the z-axis, and it moves with an offset relative to the mouse pointer.

As far as I understand, I need to do something about relative distances, but how can I force the mouse pointer to be on the z = 0 plane?
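
For what it's worth, a Screen Space - Overlay canvas ignores the z coordinate of its elements entirely, so one way to fake the lift (a sketch under the assumption that the pop-up only needs to be visual; DraggableScaled and the 1.1 scale factor are made up) is to scale the card instead of moving it along z:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

public class DraggableScaled : MonoBehaviour, IBeginDragHandler, IDragHandler, IEndDragHandler {
    private Vector3 m_OriginalScale;

    public void OnBeginDrag(PointerEventData eventData) {
        m_OriginalScale = transform.localScale;
        transform.localScale = m_OriginalScale * 1.1f;   // "lift" the card visually
    }

    public void OnDrag(PointerEventData eventData) {
        transform.position = eventData.position;          // follow the pointer in screen space
    }

    public void OnEndDrag(PointerEventData eventData) {
        transform.localScale = m_OriginalScale;           // drop the card back down
    }
}
```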

Artifacts at seams between meshes in Unity in isometric camera only

I’m trying to piece together some very simple placeholder models that I intend to use as tiles in Unity and I’m seeing odd pixel artifacts along the seams of the tiles that only show up when using an isometric camera. You can see them in this image.

Example of isometric artifacts

The same geometry viewed from a perspective camera shows no artifacts.

Example of perspective with no artifacts

I verified that the two models are exactly aligned right along the seam by checking the actual vertex data. The artifacts are dependent on the camera itself and shift along the seam as the camera is panned and zoomed.

I've disabled shadows, set the texture filter mode to Point, and disabled mipmap generation.

The geometry is quite simple and is imported from the OBJ file below.

```
# normals
vn -1 0 0
vn 1 0 0
vn 0 0 1
vn 0 0 -1
vn 0 -1 0
vn 0 1 0

# texcoords
vt 0.970703 0.5
vt 0.974609 0.5

# verts
v 0 2 0
v 0 2 -4
v 0 2.3 0
v 0 2.3 -4
v 0 2.5 0
v 0 2.5 -4
v 4 2 0
v 4 2 -4
v 4 2.3 0
v 4 2.3 -4
v 4 2.5 0
v 4 2.5 -4

# faces
f 3/2/1 2/2/1 1/2/1
f 4/2/1 2/2/1 3/2/1
f 5/1/1 4/1/1 3/1/1
f 6/1/1 4/1/1 5/1/1
f 7/2/2 8/2/2 9/2/2
f 9/2/2 8/2/2 10/2/2
f 9/1/2 10/1/2 11/1/2
f 11/1/2 10/1/2 12/1/2
f 7/2/3 3/2/3 1/2/3
f 9/1/3 5/1/3 3/1/3
f 9/2/3 3/2/3 7/2/3
f 11/1/3 5/1/3 9/1/3
f 2/2/4 4/2/4 8/2/4
f 4/1/4 6/1/4 10/1/4
f 8/2/4 4/2/4 10/2/4
f 10/1/4 6/1/4 12/1/4
f 2/2/5 7/2/5 1/2/5
f 8/2/5 7/2/5 2/2/5
f 5/1/6 11/1/6 6/1/6
f 6/1/6 11/1/6 12/1/6
```

The following texture is applied as the Albedo on a default Unity material (it looks a bit odd since it was originally generated in MagicaVoxel):

Texture

I'm really at a loss for what could be causing these artifacts. I only spotted them because I was testing an outline shader and it was outlining all of them, as the normals on those pixels were off. With a pixel shader set to display _CameraNormalsTexture instead of the color, the artifacts are still visible as variances in the normals, as you can see in the image below.

Same image but of camera normals