Resource Recommendations for Mathematica in Theoretical Physics

I know there are a lot of resources available out there for learning the Wolfram Language. However, I would like to create a specific query here (which might lead to a useful thread in the future). I will soon be starting a PhD in String Theory and would like to learn Mathematica to make my life easier. Hence, I am looking for resources I can use to learn the parts of Mathematica that would be useful in string theory research. As far as I can tell, there are three broad classes of tasks that a theoretical physicist would undertake:

  1. Complicated algebraic tasks, which may involve vectors, tensors, etc. and their manipulations. This may also involve working with differential operators, differential forms, and all sorts of algebras such as Lie algebras, supersymmetric algebras, etc.

  2. Numerical solution of eigenvalue problems, differential equations, etc., which may involve the use of data structures like lists and tables.

  3. Simulation and/or data analysis and visualization (plots, etc.).

I know no single book would teach all three. But can someone recommend books, resources, courses, etc. for each type? Please feel free to add to the list if I have missed a particular classification.

Fixed physics time step and input lag

Everywhere I read says I should fix the time step of my physics simulation, and interpolate the result to the actual frame rate of my graphics. This helps with simplicity, networking, reproducibility, numerical stability, etc.

But as I understand it, a fixed time step guarantees between 1 and 2 Δt of input lag, because you have to calculate one step ahead in order to interpolate. If I use 90 Hz physics, that gives me an input lag of about 17 ms on average.
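
To make the setup concrete, here is a minimal sketch of the loop I mean, written as a toy C++ program. The names (State, integrate, render) are just stand-ins, not any particular engine's API; the point is where the latency comes from.

```cpp
// Classic fixed-timestep loop with render interpolation (sketch only).
#include <chrono>
#include <cstdio>

struct State { double x = 0, v = 1; };                  // toy 1-D state

State integrate(State s, double dt) { s.x += s.v * dt; return s; }
State lerp(const State& a, const State& b, double t) {
    return { a.x + (b.x - a.x) * t, a.v + (b.v - a.v) * t };
}
void render(const State& s) { std::printf("x = %f\n", s.x); }

int main() {
    using clock = std::chrono::steady_clock;
    const double dt = 1.0 / 90.0;                       // 90 Hz physics (~11.1 ms)
    double accumulator = 0.0;
    State previous, current;

    auto last = clock::now();
    for (int frame = 0; frame < 300; ++frame) {         // bounded loop for the sketch
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - last).count();
        last = now;

        while (accumulator >= dt) {
            previous = current;                         // input consumed here already waited up to 1*dt
            current  = integrate(current, dt);
            accumulator -= dt;
        }

        // alpha in [0,1): the blended state sits between the last two steps,
        // i.e. about one dt behind real time, on top of the up-to-one-dt wait
        // before input is consumed -- hence the 1 to 2 dt of total lag.
        double alpha = accumulator / dt;
        render(lerp(previous, current, alpha));
    }
}
```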

Since I often see gaming enthusiasts talking about 1 ms input lag, 1 ms delay, and how that makes a difference, I wonder how fast-action games do it, and how they reduce the input lag while using a fixed time step.

Or do they not, and is 1 ms delay just marketing mumbo jumbo?

Game physics of tearing apart

For a game I’m programming I’m looking for a kind of realistic mechanic for simulating the tearing apart of objects. Let me explain:

I have a given point p in a 2-dimensional space (possibly later also more dimensions, so a solution should be scalable, which I assume is not the problem) and I have a number of forces f1, f2, …, fn acting on this point p. Normally this point moves over time according to the combination of these forces. But now I’m looking for a somewhat realistic mechanic such that, if the forces differ strongly, the point/object gets split up into two points/objects that move in different directions.

Here is a simple visual example:

[figure: three "similar" forces resulting in one single force (the point will move according to this single vector)]

[figure: three forces that "tear" the point apart and result in two vectors (the point will be split up into two new points that move according to the respective vectors)]

I assume we need to give the point some kind of inner force that defines how easy it is to tear the point apart?

By "kind of realistic" I mean something that doesn’t need to be extremely realistic according to real-world physics, but something that would feel kind of real in a video game. An additional benefit would be if it can be computed easily.
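
To make the question concrete, here is one cheap heuristic I could imagine for the "inner force" idea, as a C++ sketch. All names are made up for illustration; the idea is to compare the sum of the force magnitudes with the magnitude of the net force, and to split when the gap exceeds a per-object tear threshold.

```cpp
// Tear heuristic sketch: forces that agree give totalMagnitude ~= |net|;
// forces that fight each other make the gap grow.
#include <cmath>
#include <vector>

struct Vec2 { double x = 0, y = 0; };
Vec2 operator+(Vec2 a, Vec2 b) { return { a.x + b.x, a.y + b.y }; }
double length(Vec2 v) { return std::sqrt(v.x * v.x + v.y * v.y); }
double cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

struct SplitResult {
    bool torn = false;
    std::vector<Vec2> left, right;   // forces assigned to each new point if torn
};

SplitResult applyForces(const std::vector<Vec2>& forces, double tearThreshold) {
    Vec2 net{};
    double totalMagnitude = 0;
    for (const Vec2& f : forces) { net = net + f; totalMagnitude += length(f); }

    // "Disagreement": how much force is fighting itself instead of moving the point.
    double disagreement = totalMagnitude - length(net);

    SplitResult out;
    if (disagreement <= tearThreshold) return out;   // holds together: move along `net`

    // Tear: partition forces by which side of the net-force axis they pull on.
    out.torn = true;
    for (const Vec2& f : forces)
        (cross(net, f) >= 0 ? out.left : out.right).push_back(f);
    return out;
}
```

The tear threshold here plays the role of the "inner force": a higher value means the object tolerates more conflicting force before it splits.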

How to make a dynamic/variable length physics based rope in Unreal?

So I have followed several tutorials on making a physics-based rope. I have found some success, but it seems like a waste of time to keep creating a mesh and calibrating it for each rope that I create. I really need a more streamlined approach so I can create them faster, as I have a lot of different ropes of varying lengths and diameters.

It seems rather inefficient to create a skeletal mesh for each rope that I create. It would be really nice if I could create a rope of, say, 1 unit and then join multiples of this unit together to create a longer one that has the correct physics, or at least needs very little manual calibration.

This doesn’t need to be done at runtime, but I’d like to be able to do it all inside the Unreal Editor to produce the finished asset.

Has anyone done anything similar to this or can point me in a generally good direction? I really don’t know where to start with this one. Any help would be greatly appreciated.
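
The closest I can picture is something like the following engine-agnostic sketch, where a rope is described only by its length and diameter and is built from identical segments joined one after another. SpawnSegment and AttachJoint are placeholders I made up for illustration, not real Unreal API calls; in practice they would create the segment mesh/body and a physics constraint to the previous segment.

```cpp
// "One unit, repeated" rope builder sketch (engine-agnostic).
#include <cmath>
#include <vector>

struct RopeSpec { double totalLength; double segmentLength; double diameter; };
struct Segment  { int id; double length; double diameter; };

// Placeholder hooks: in a real project these would create meshes / constraints.
Segment SpawnSegment(int id, double length, double diameter) { return { id, length, diameter }; }
void AttachJoint(const Segment&, const Segment&) { /* configure angular/linear limits here */ }

std::vector<Segment> BuildRope(const RopeSpec& spec) {
    int count = static_cast<int>(std::ceil(spec.totalLength / spec.segmentLength));
    std::vector<Segment> segments;
    segments.reserve(count);
    for (int i = 0; i < count; ++i) {
        segments.push_back(SpawnSegment(i, spec.segmentLength, spec.diameter));
        if (i > 0) AttachJoint(segments[i - 1], segments[i]);  // chain each unit to the previous one
    }
    return segments;
}
```

Whether this is driven from an editor utility or a construction script, the calibration would then live in one place (the segment unit and the joint settings) instead of per rope.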

How to enable the option “edit physics shape” in the Sprite Editor?

I can successfully import .png files, create tilesets, and use them in my tilemaps in Unity. Thanks to DMGregory, I just learned that there exists an option to customize the Tilemap Collider 2D in the Sprite Editor, which allows me to set a custom collider for every tile instead of going through them one by one.

The steps I follow while creating a tileset in Unity are as follows.

  1. Import the .png file.
  2. Open the Sprite Editor.
  3. Select the option Multiple for Sprite Mode.
  4. Select Slice from the Sprite Editor.

Then, I create a Tile Palette in a folder, drag and drop my created tiles, and start creating my level.

I have never encountered the option to modify the colliders tile by tile. What am I doing wrong?

Changing execution order of Animator so it can blend with Physics

I’m trying to make an effect where a ragdolled character controlled by an Animator is also affected by physics collisions. Similar to this game: Crazy Shopping.

The problem is that the Animator Controller overrides every change that happens in FixedUpdate or the internal physics update (even when the animator update mode is set to Animate Physics).

I think this can be achieved by somehow changing the execution order of the animator so that it happens before the physics update. That way physics can affect the animated object. There are solutions like using a second object that contains the ragdoll, and in LateUpdate you set the positions and rotations of the animated object; this works OK, but it’s not quite what I have in mind.

ActiveRagdoll by MetalCore999 is also really great work; I would love to learn how it works behind the scenes.

How can I achieve this? I don’t even know if my solution would work properly.

Do you have any suggestions or a different way of thinking about it? I would really appreciate a roadmap on this.
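
To show roughly what I mean by mixing the two, here is an engine-agnostic sketch of blending an animated pose with a ragdoll pose per bone in a late-update step. The names are illustrative only (in Unity this would be C# on a MonoBehaviour), and the blend itself is a crude lerp/nlerp just to show the idea of a physics weight.

```cpp
// Per-bone blend between an animation-driven pose and a ragdoll pose (sketch).
#include <cmath>
#include <vector>

struct Transform3 { float pos[3]; float rot[4]; };   // position + quaternion (x, y, z, w)

// Lerp positions, nlerp rotations -- cheap, good enough to illustrate the blend weight.
Transform3 Blend(const Transform3& a, const Transform3& b, float t) {
    Transform3 out{};
    for (int i = 0; i < 3; ++i) out.pos[i] = a.pos[i] + (b.pos[i] - a.pos[i]) * t;
    float len = 0;
    for (int i = 0; i < 4; ++i) {
        out.rot[i] = a.rot[i] + (b.rot[i] - a.rot[i]) * t;
        len += out.rot[i] * out.rot[i];
    }
    len = std::sqrt(len);
    if (len > 0) for (int i = 0; i < 4; ++i) out.rot[i] /= len;
    return out;
}

// Called after both the animator and the physics step have produced their poses.
void LateUpdatePose(const std::vector<Transform3>& animatedBones,
                    const std::vector<Transform3>& ragdollBones,
                    std::vector<Transform3>& visibleBones,
                    float physicsWeight)             // 0 = pure animation, 1 = pure ragdoll
{
    for (size_t i = 0; i < visibleBones.size(); ++i)
        visibleBones[i] = Blend(animatedBones[i], ragdollBones[i], physicsWeight);
}
```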

Why do the Star Trek writers think that stopping engines will stop forward momentum in space?

Shouldn’t they at least know that they’d have to reverse thrusters to give the equal and opposite force to stop forward momentum? Shouldn’t someone writing for one of the premier scifi franchises know this basic fact? Verisimilitude is the hallmark of good fiction writing.

Can I use the 2D physics engine in a 3D game (or vice versa) in Unity?

This is entirely for performance. The 2D physics are less expensive, but I require 3D for some scenes. I never need both at the same time. I know you can have 2D with an orthographic perspective in a 3D engine, but what I want is really the physics engine. Also, is there a way of turning off these engines? I’ve made most collisions from scratch and am only using them for some raycasts at the beginning and for some collider/rigidbody casts in many (though not all) frames. (If I understand correctly, these are calculated by the physics engine in each FixedUpdate().)

Interpolation of render with fixed physics update

In Fix Your Timestep, they briefly address the remainder time in relation to rendering, saying that the remainder can be used to generate an interpolated rendering.

We can use this remainder value to get a blending factor between the previous and current physics state simply by dividing by dt. This gives an alpha value in the range [0,1] which is used to perform a linear interpolation between the two physics states to get the current state to render.

http://web.archive.org/web/20190506030345/https://gafferongames.com/post/fix_your_timestep/

So let’s say your last two physics updates, which we’ll call A and B, are from 00:10.033 and 00:10.066, and it is now 00:10.070 when we want to perform a render. We have a remainder of 0.004.

I take “interpolation between the two physics states” to mean we compare all the objects in updates A and B and slide them back from B towards A by 88% ((0.033 − 0.004)/0.033). This would mean I’m actually rendering the physical state at 00:10.037, correct? So my physics updates A and B are really more like “previous” and “next”, and my interpolation is between the previous and the next, correct?
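
Working through the numbers as a sanity check of my reading of the article, with the convention alpha = remainder / dt and render = previous + alpha * (current − previous):

```cpp
// Check of the worked example: A at 10.033, B at 10.066, render at 10.070.
#include <cstdio>

int main() {
    const double dt        = 0.033;   // step from A to B
    const double tA        = 10.033;  // "previous" state
    const double tB        = 10.066;  // "current" state
    const double remainder = 0.004;   // time accumulated past B when we render

    double alpha    = remainder / dt;             // ~0.12
    double rendered = tA + alpha * (tB - tA);     // ~10.037

    // Sliding "back from B towards A by 88%" is the same thing seen from the
    // other end: 1 - alpha = (0.033 - 0.004) / 0.033 ~= 0.88.
    std::printf("alpha = %.3f, rendered sim time = %.3f\n", alpha, rendered);
}
```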