What is the standard practice for animating motion — move character or not move character?

I’ve downloaded a bunch of (free) 3D warriors with animations. I’ve noticed that for about 25% of them, the ‘run’ animation physically moves the character forward in the z direction. For the other 75%, the animation just loops with the character’s feet moving etc., but does so in place, without changing the character’s physical location.

I could fix this by:
1.) Manually updating the transform in code for this 75%, to physically move the character (see the sketch below).
2.) Altering the animation by re-recording it with a positive z value at the end (when I tried this, it caused the character to shift really far away from the rest of the units, probably something to do with local space vs. world space that I haven’t figured out yet).
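For the first option, the code side is small. A minimal sketch, assuming an in-place run clip playing on the character and a hand-tuned speed (runSpeed here is illustrative and would need to be matched to the stride to avoid foot sliding):

using UnityEngine;

// Sketch of option 1: the clip animates legs and arms in place, and code
// moves the transform forward at a speed matched to the animation.
public class RunForward : MonoBehaviour
{
    [SerializeField] float runSpeed = 4f; // meters per second, tuned by eye

    void Update()
    {
        // Move along the character's local forward (z) axis.
        transform.Translate(Vector3.forward * runSpeed * Time.deltaTime);
    }
}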

But before I go too far down this rabbit hole, I wonder if there is any kind of standard? In the general case, are ‘run’ / ‘walk’ animations supposed to move the character themselves, or is it up to the coder to manually update the transform while the legs move and arms swing in place? Is one approach objectively better than the other, or maybe it depends on the use case? If so, what are the drawbacks of each? I know nothing about animation, so I don’t want to break convention (if there is one).

How to Make GameObject move in Circular Motion

I’ve been trying to make an enemy that moves in a circular motion for my RPG game, but for some reason, whenever I press Play the GameObject instantly jumps hundreds of units in the X and Y coordinates. It also moves back to -1 on the Z axis. Here’s the script for my enemy’s movement:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class EnemyScript : MonoBehaviour
{
    private Rigidbody2D rb;

    [SerializeField]
    float rotationRadius = 2f, angularSpeed = 2f;
    float posX, posY, angle = 0f;

    // Start is called before the first frame update
    void Start()
    {
        // Gets the Rigidbody2D component
        rb = GetComponent<Rigidbody2D>();
    }

    // Update is called once per frame
    void Update()
    {
        Movement_1();
    }

    void Movement_1()
    {
        posX = rb.position.x + Mathf.Cos(angle) + rotationRadius;
        posY = rb.position.y + Mathf.Sin(angle) + rotationRadius;
        transform.position = new Vector2(posX, posY);
        angle = angle + Time.deltaTime * angularSpeed;

        if (angle >= 360f)
        {
            angle = 0f;
        }
    }
}
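For contrast, circular motion is usually parameterized around a fixed center, rather than offset from the body’s current position each frame; the per-frame offset in Movement_1 compounds (which matches the instant jump described above), and the radius is added where it would normally be multiplied. A minimal sketch of the usual parameterization (class name is illustrative):

using UnityEngine;

// Sketch (not the original script): orbit a fixed center captured at startup.
public class CircularMover : MonoBehaviour
{
    [SerializeField]
    float rotationRadius = 2f, angularSpeed = 2f; // angularSpeed in radians per second

    Vector2 center;
    float angle;

    void Start()
    {
        center = transform.position; // fixed center of the orbit
    }

    void Update()
    {
        angle += angularSpeed * Time.deltaTime;
        float posX = center.x + Mathf.Cos(angle) * rotationRadius; // multiply, not add
        float posY = center.y + Mathf.Sin(angle) * rotationRadius;
        transform.position = new Vector2(posX, posY); // assigning a Vector2 sets z to 0
    }
}

Note that Mathf.Cos and Mathf.Sin take radians, so wrapping the angle at 360 is unnecessary; the trig functions are periodic on 2π anyway.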

Unity – Root Motion Physic Material Issue

I have a slope with an "icy" physic material with 0 static and dynamic friction. I also have this same physic material set to my model’s capsule collider. The model’s rigid body has a mass of 10.

When the penguin performs its root-motion belly slide on the ice, I expect the physics engine to do its work and the penguin to slide down the slope. Instead, I see a slow, inching movement down the slope, as shown in the linked video. When I turn root motion off, the sliding works as expected.

What is going on here? What about root motion would be causing this?

I have tried a slew of root transform position baking options (including y) and it seems none make any difference.
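One plausible mechanism, for context: with root motion enabled, the Animator writes the character’s position directly each frame, overriding whatever sliding velocity the physics simulation builds up against the frictionless material. A sketch of routing the root motion through the Rigidbody instead, so the physics engine keeps authority (untested against this scene; assumes a standard Animator plus Rigidbody setup):

using UnityEngine;

// Sketch: apply the animation's root motion via the Rigidbody so physics
// (and the zero-friction material) stays in control of the final motion.
[RequireComponent(typeof(Animator), typeof(Rigidbody))]
public class RootMotionThroughPhysics : MonoBehaviour
{
    Animator animator;
    Rigidbody rb;

    void Awake()
    {
        animator = GetComponent<Animator>();
        rb = GetComponent<Rigidbody>();
    }

    // Implementing OnAnimatorMove takes over how root motion is applied.
    void OnAnimatorMove()
    {
        rb.MovePosition(rb.position + animator.deltaPosition);
        rb.MoveRotation(rb.rotation * animator.deltaRotation);
    }
}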

Any insight is much appreciated! Thanks.

Is there any root motion workflow that doesn’t require feet adjustments on a frame-by-frame basis (Blender)?

I’ve been searching for an hour or so, but wasn’t able to find a specific answer to this problem. I need to make animations that move the character’s root around for my game, so in-place animations aren’t a possibility. For now, I’ve moved just the root bone (the whole character) in a straight line to ensure it has the movement speed I need in game, but now I need to start adding the actual feet movement on top of this root movement. The trouble is that both feet follow the root movement (obviously), even when one of them should be planted. I did find some pieces of information around the web, but nothing fixed this problem for me. I really need a better workflow for this, because fixing the feet position on a frame-by-frame basis seems like a very bad, not to mention time-consuming, way to do things.

I even considered unparenting the feet from the root, but I’m positive this will cause problems in Unity, as I need to use a Humanoid rig there, which presupposes that the whole bone chain is connected to a single root.

I’m using Blender 2.8, and my rig was generated by MakeHuman with the “optimized for game engines” option checked.

Does anyone know a good workflow for this case? Maybe I missed something in my previous searches, because this doesn’t seem like it should be a big deal at all; there must be a better way to do this.

Thanks in advance for any insights.

Mathematics behind Motion Blur

Recently I understood how the Gaussian filter works, and I was awed by the beauty of the mathematics behind it. Then, while doing some coding in Octave, I came across another type of blur, referred to as image blur, and it gave me an output like this:

[motion-blurred output image]

It is beautiful, but I cannot find any explanation of the mathematics behind it on the internet.

Everywhere I searched, the Gaussian filter kept popping up, but I believe the two are different to some extent, since the Gaussian filter gave me this output:

[Gaussian-blurred output image]

which, as you can clearly see, is different in appearance from the motion blur.

So, my question is, how is motion blur implemented?
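For what it’s worth, the textbook formulation: motion blur is also a convolution, but with a one-dimensional line kernel along the motion direction rather than the Gaussian’s radially symmetric bell. For horizontal motion of length $5$, the kernel is $\frac{1}{5}[1\ 1\ 1\ 1\ 1]$, i.e. each pixel becomes the uniform average of itself and its neighbors along the motion line. A minimal sketch for a grayscale image (the function name and array layout are illustrative):

// Sketch: horizontal motion blur by convolving each row with a length-k
// box kernel (ones, normalized by the number of in-bounds taps).
static float[,] MotionBlurHorizontal(float[,] img, int k)
{
    int h = img.GetLength(0), w = img.GetLength(1);
    var result = new float[h, w];
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            float sum = 0f;
            int taps = 0;
            for (int dx = -k / 2; dx <= k / 2; dx++) // walk along the motion line
            {
                int xx = x + dx;
                if (xx >= 0 && xx < w) { sum += img[y, xx]; taps++; }
            }
            result[y, x] = sum / taps;
        }
    return result;
}

For motion in an arbitrary direction, the taps are sampled along the corresponding line (often expressed as a rotated kernel matrix), but the principle is the same: a uniform 1-D average rather than the Gaussian’s weighted 2-D average, which is why the two outputs look so different.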

Algorithm for coherent motion. Which bus is the app user on?

I am currently working on an app that shows a map of the city, with a marker for each bus. As a feature, the phone should show which bus the user is on.

To achieve this I am working on building a function that consumes a stream of a set of buses and their positions (Stream<Set<Tuple2<BusId, Location>>>), and a stream of phone location, to produce a Stream of bus predictions. The prediction should contain a confidence level.

The function should return the current prediction in real time and handle scenarios where the user changes buses.

How could this be accomplished?

Both streams contain very precise locations at a rate of once every second.
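One baseline worth sketching (simplified and synchronous, not a full streaming implementation; the type names and distance helper are illustrative): keep an exponentially smoothed phone-to-bus distance per bus, and turn the smoothed distances into a confidence with a softmax. The smoothing absorbs GPS noise, and its decay lets the prediction switch within a few ticks when the user changes buses.

using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: called once per tick with the latest bus positions and phone fix.
class BusMatcher
{
    readonly Dictionary<string, double> smoothed = new Dictionary<string, double>();
    const double Alpha = 0.2;  // smoothing factor: higher reacts faster to bus changes
    const double Scale = 10.0; // softmax temperature in meters; tune to GPS accuracy

    public (string BusId, double Confidence) Update(
        IEnumerable<(string BusId, double Lat, double Lon)> buses,
        (double Lat, double Lon) phone)
    {
        foreach (var b in buses)
        {
            double d = DistanceMeters(b.Lat, b.Lon, phone.Lat, phone.Lon);
            smoothed[b.BusId] = smoothed.TryGetValue(b.BusId, out var prev)
                ? Alpha * d + (1 - Alpha) * prev   // exponential moving average
                : d;
        }
        if (smoothed.Count == 0) return (null, 0.0);

        // Softmax over negative smoothed distances gives a confidence in [0, 1].
        double norm = smoothed.Values.Sum(v => Math.Exp(-v / Scale));
        var best = smoothed.OrderBy(kv => kv.Value).First();
        return (best.Key, Math.Exp(-best.Value / Scale) / norm);
    }

    static double DistanceMeters(double lat1, double lon1, double lat2, double lon2)
    {
        // Equirectangular approximation; adequate at city scale.
        const double R = 6371000; // earth radius in meters
        double dLat = (lat2 - lat1) * Math.PI / 180;
        double dLon = (lon2 - lon1) * Math.PI / 180 * Math.Cos(lat1 * Math.PI / 180);
        return R * Math.Sqrt(dLat * dLat + dLon * dLon);
    }
}

A production version would also want to compare headings and speeds, expire buses that leave the set, and perhaps model the bus identity as a hidden state with a small switching probability, but smoothed proximity plus a softmax is a serviceable starting point for once-per-second streams.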

$O(1)$ time, $O(1)$ state random access Brownian motion?

I would like to generate discrete samples $0 = B(0), B(1), \ldots, B(T)$ of a Brownian motion $B : [0,T] \to \mathbb{R}^d$. It is possible to get $O(\log T)$ time random access into a consistent sequence of samples $B(k)$ by constructing the path in tree fashion. We choose $B(T) = \sqrt{T} Z_0$ where $Z_0$ is a standard Gaussian, then let $B(T/2) = B(T)/2 + \frac{\sqrt{T}}{2} Z_1$ where $Z_1$ is an independent standard Gaussian, construct $B$ for the intervals $[0,T/2]$ and $[T/2,T]$ independently conditional on $B(0), B(T/2), B(T)$, and so on recursively.

Since any particular $B(k)$ constructed in this fashion only depends on $O(\log T)$ Gaussian random values, this gives us $O(\log T)$ time random access into a consistently sampled sequence of Brownian motion, as long as we have random-access random numbers. The only state we need is the random seed. As long as we use the same seed, evaluations of $B(k)$ for different $k$ will be consistent, in that their joint statistics will be the same as if we had sampled the Brownian motion sequentially.
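A minimal sketch of this construction (one-dimensional, with $T$ a power of two so every midpoint split is exact; each dyadic midpoint identifies its bridge node uniquely, and the per-node Gaussian is derived deterministically from the seed, so repeated queries agree):

using System;

// Sketch: O(log T)-time random access to a consistently sampled Brownian
// path, with the seed as the only state.
class BrownianTree
{
    readonly ulong seed;
    readonly long T; // must be a power of two in this sketch

    public BrownianTree(ulong seed, long T) { this.seed = seed; this.T = T; }

    public double Sample(long k)
    {
        if (k <= 0) return 0.0;                              // B(0) = 0
        double bLo = 0.0, bHi = Math.Sqrt(T) * Gaussian(0);  // node id 0 reserved for B(T)
        if (k >= T) return bHi;
        long lo = 0, hi = T;
        while (true)
        {
            long mid = (lo + hi) / 2;
            // Brownian bridge midpoint: mean is the average of the endpoints,
            // standard deviation is sqrt(hi - lo) / 2.
            double bMid = 0.5 * (bLo + bHi)
                        + 0.5 * Math.Sqrt(hi - lo) * Gaussian((ulong)mid);
            if (k == mid) return bMid;
            if (k < mid) { hi = mid; bHi = bMid; }
            else { lo = mid; bLo = bMid; }
        }
    }

    double Gaussian(ulong node)
    {
        // Deterministic standard normal from (seed, node): SplitMix64-style
        // hashing into two uniforms, then Box-Muller.
        ulong h = Mix(seed + Mix(node));
        double u1 = ((h >> 11) + 1.0) / 9007199254740992.0;  // (0, 1], 2^53 buckets
        double u2 = (Mix(h + 1) >> 11) / 9007199254740992.0; // [0, 1)
        return Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Cos(2.0 * Math.PI * u2);
    }

    static ulong Mix(ulong z) // SplitMix64 finalizer
    {
        z += 0x9E3779B97F4A7C15UL;
        z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9UL;
        z = (z ^ (z >> 27)) * 0x94D049BB133111EBUL;
        return z ^ (z >> 31);
    }
}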

Question: Is it possible to do better than $O(\log T)$, while preserving the $O(1)$ state requirement (only the seed must be stored)?

My sense is no, and that it would be easy to prove if I found the right formalization of the question, but I don’t know how to do that.

Can I measure time by counting frames, trusting the ‘240 FPS’ that my iPhone 7+ slow-motion camera is capable of recording?

I’m using a slow-motion video recorded on an iPhone 7+ to track something, and I would like to avoid recording a chronometer to know how long the process takes. I need to measure about 10 seconds with an uncertainty of at most 0.1 s… Is this possible by just counting 2400 frames of my homemade video?
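For scale, the arithmetic implicit in the question, assuming the nominal rate: $\frac{2400 \text{ frames}}{240 \text{ fps}} = 10 \text{ s}$, and $\frac{0.1 \text{ s}}{10 \text{ s}} = 1\%$, so counting frames hits the 0.1 s tolerance exactly when the camera’s true average capture rate stays within $1\%$ of 240 fps, i.e. roughly between 237.6 and 242.4 fps.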