What’s the easiest way to animate a (non-player) mesh?

I made a coffin with some animations in Maya, exported it to Unreal, and I want to start the animation from a blueprint, but I can’t for the life of me figure out how.

I tried:

  • Play Animation
  • Get Anim Instance > Cast To AnimBlueprint > call custom event in AnimBlueprint (LiftCoffinInAir)
  • Get Anim Instance > Cast To AnimBlueprint > set boolean (which is what the event would do)
  • A blueprint interface called from here, and with an event in the animation blueprint


The only way I got the animation to play at all was by setting the boolean to true on every blueprint update.


This means that my animation graph works fine, but for some reason none of the events in the event graph work.

Any ideas?

How to extrude a mesh?

Say we have the following simple mesh data:

// additional data
// Vector3(x, y, z) - +x left, +y up, +z forward
// triangle indices are in clockwise order

// list of vertices that form a plane
List<Vector3> vertices = new List<Vector3>()
{
    new Vector3(2f, 0f, 2f),
    new Vector3(-2f, 0f, 2f),
    new Vector3(2f, 0f, -2f),
    new Vector3(-2f, 0f, -2f),
};

List<int> indices = new List<int>()
{
    0, 3, 2, 3, 0, 1
};

How do I extrude the given plane in a certain direction? E.g. extruding by 2f along the y axis should form a bottomless cube.

I’m interested in how it works in general, not only this specific example.
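From what I can tell, the general recipe is: copy the boundary vertices, offset the copy along the extrusion direction, and connect each original boundary edge to its offset copy with a quad (two triangles), keeping the original cap as one end. A minimal standalone sketch of that idea, assuming the boundary vertices are supplied in order around the outline (which the example list above is not) and using helper names of my own:

using System.Collections.Generic;
using UnityEngine;

public static class ExtrudeSketch
{
    // Extrudes an ordered, closed outline (e.g. the 4 corners of a plane) along `direction`.
    // Produces the combined vertex list and the side-wall triangle indices.
    public static void Extrude(
        List<Vector3> outline, Vector3 direction,
        out List<Vector3> vertices, out List<int> triangles)
    {
        int n = outline.Count;
        vertices = new List<Vector3>(outline);        // original ring: indices 0..n-1
        for (int i = 0; i < n; i++)
            vertices.Add(outline[i] + direction);     // extruded ring: indices n..2n-1

        triangles = new List<int>();
        for (int i = 0; i < n; i++)
        {
            int j = (i + 1) % n;                      // next vertex around the outline
            // two triangles forming the quad between edge (i, j) and its extruded copy
            triangles.Add(i); triangles.Add(j); triangles.Add(n + i);
            triangles.Add(j); triangles.Add(n + j); triangles.Add(n + i);
        }
    }
}

Depending on which way the side faces should point, the winding of those triangles may need to be flipped; the original cap triangles are kept as-is (or duplicated with reversed winding at the far end if a closed shape is wanted).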

Modify the number of elements in a mesh file

I have loaded the mesh of a simulation in Mathematica:

geo = Import["geometry.vtk"]; 

The elements of geo are:

 Import["geo.vtk", "Elements"]   {"CuboidData", "CuboidObjects", "Graphics3D", "GraphicsComplex", "LineData", "LineObjects", "PointData", "PointObjects", "PolygonData", "PolygonObjects", "VertexData"} 

I would like to reduce the number of points of the mesh. The points of the mesh are saved in “PolygonData” and “VertexData”, but I do not know if this is possible. For example, could I halve the number of points, even if quality is lost?

How do I colour a mesh in Kiss3d? (code example request)

This is a question about the sebcrozet/kiss3d graphics engine. I have managed to create a mesh using group.add_mesh(). Now I want to colour my mesh with a specified colour for each vertex (or face). Does anyone know how I do this?

One possible approach seems to be to define a RgbImage with all my different colours somewhere in the image, turn that into a DynamicImage, turn that into a Texture, somehow associate the texture with the mesh, set up a UV map, and set the UV location for each vertex to map to the relevant colour.
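Concretely, the UV bookkeeping I have in mind would be something like the sketch below (generic pseudo-code, not the kiss3d API, since I’m not sure of the exact calls): with a one-row palette texture of N colours, each vertex gets the UV of the centre of its colour’s pixel.

public static class PaletteUvSketch
{
    // Sketch only (not the kiss3d API): UV coordinates that point a vertex at the
    // centre of its colour's pixel in a one-row palette texture of `paletteWidth` colours.
    public static (float u, float v) PaletteUv(int colourIndex, int paletteWidth)
    {
        float u = (colourIndex + 0.5f) / paletteWidth;  // horizontal pixel centre
        float v = 0.5f;                                 // vertical centre of the single row
        return (u, v);
    }
}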

The documentation does not go into enough detail or give a specific example. Does anyone have any example code that works?

Ray casting through terrain mesh + octree (mesh collision)

I’m currently developing a terrain editor which I will use for my game. One thing that is partially stopping my progress is that I don’t have collision detection implemented between the mouse ray cast and the terrain mesh.

The terrain consists of triangles and can be lowered and raised; the x and y values of the mesh don’t change. What I need to do is cast a ray from the mouse position and find the intersected triangle, then get the point on the triangle using barycentric coordinates. I’m just not quite sure how the octree comes into play here. As far as I understand it, it works like a 3D quadtree, splitting a box into smaller boxes that the ray intersects. But which box do I choose? How do I know which box is the closest one to the terrain, whether the terrain is raised or lowered? How many subdivisions would I have? I was thinking of having as many as I have tiles (basically the last level of subdivision would correspond to an x^3 box).
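For the per-triangle test itself, what I have in mind is the standard Möller–Trumbore intersection, which also yields the barycentric coordinates; a minimal standalone sketch (Unity-style Vector3 assumed, names are my own):

using UnityEngine;

public static class RayTriangle
{
    // Möller–Trumbore ray/triangle intersection.
    // Returns true and the distance t along the ray when the ray hits triangle (v0, v1, v2).
    public static bool Intersect(
        Vector3 origin, Vector3 dir,
        Vector3 v0, Vector3 v1, Vector3 v2,
        out float t)
    {
        t = 0f;
        Vector3 e1 = v1 - v0;
        Vector3 e2 = v2 - v0;

        Vector3 p = Vector3.Cross(dir, e2);
        float det = Vector3.Dot(e1, p);
        if (Mathf.Abs(det) < 1e-6f)
            return false;                       // ray is parallel to the triangle plane

        float invDet = 1f / det;
        Vector3 s = origin - v0;
        float u = Vector3.Dot(s, p) * invDet;   // first barycentric coordinate
        if (u < 0f || u > 1f)
            return false;

        Vector3 q = Vector3.Cross(s, e1);
        float v = Vector3.Dot(dir, q) * invDet; // second barycentric coordinate
        if (v < 0f || u + v > 1f)
            return false;

        t = Vector3.Dot(e2, q) * invDet;        // hit distance along the ray direction
        return t >= 0f;
    }
}

The open question is only how to use the octree so that this test doesn’t have to run against every triangle.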

Right now I’m just doing collision with the Z = 0 plane, and it works okay, but if the terrain is raised a bit this can give faulty results.

Two other methods I found are to iterate through the whole mesh, which of course is insanely inefficient, and to project the terrain in reverse and then, I guess, do some checking with the mouse coordinates (I think it’s called z-buffer checking or something).

If anyone has any other ideas, I’m all ears. Just note that the algorithm should be efficient (of course this depends on the implementation) and accurate.

Thanks !

I have no idea how to calculate the faces for my procedurally generated mesh

So I am making a procedural cave-generation program and I have all the vertices, but I have no idea how to solve for the faces of my mesh. My code:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class MapGen : MonoBehaviour
{
    Mesh mesh;

    int[] triangles;

    public int xSize = 20;
    public int zSize = 20;
    public int ySize = 20;
    [Range(0f, 4.5f)]
    public float SurfaceLevel = 3.5f;
    Vector3[] interest;
    Vector3 old = new Vector3(0, 0, 0);
    public bool ShowAlg = false;
    [Header("Slows down the scene view dramatically when in play mode!")]
    public bool ShowVert = true;

    // Start is called before the first frame update
    void Start()
    {
        mesh = new Mesh();
        GetComponent<MeshFilter>().mesh = mesh;

        CreateShape();
        SolveFaces();
        UpdateMesh();
    }

    void CreateShape()
    {
        interest = new Vector3[(xSize + 1) * (zSize + 1) * (ySize + 1)];

        float seed = Random.Range(0.2f, 0.5f);
        Debug.Log(seed);

        for (int x = 0; x <= ySize; x++)
        {
            for (int i = 0, y = 0; y <= zSize; y++)
            {
                for (int z = 0; z <= xSize; z++)
                {
                    float ypn = (Mathf.PerlinNoise(x * seed, z * seed) * 2f);
                    float xpn = (Mathf.PerlinNoise(y * seed, z * seed) * 2f);
                    float zpn = (Mathf.PerlinNoise(x * seed, y * seed) * 2f);

                    if (ypn + xpn + zpn >= SurfaceLevel)
                    {
                        interest[i] = new Vector3(x, y, z);
                    }

                    i++;
                }
            }
        }
    }

    void SolveFaces()
    {
        triangles = new int[xSize * ySize * zSize * 9];
    }

    void UpdateMesh()
    {
        mesh.Clear();

        mesh.vertices = interest;
        mesh.triangles = triangles;

        mesh.RecalculateNormals();
        MeshCollider meshc = gameObject.AddComponent(typeof(MeshCollider)) as MeshCollider;
        meshc.sharedMesh = mesh;
    }

    private void OnDrawGizmos()
    {
        if (interest == null)
            return;
        for (int i = 0; i < interest.Length; i++)
        {
            if (ShowVert == true)
            {
                Gizmos.color = new Color(0.286f, 0.486f, 0.812f);
                Gizmos.DrawSphere(interest[i], 0.2f);
            }
            if (ShowAlg == true)
            {
                Gizmos.color = Color.green;
                Gizmos.DrawLine(old, interest[i]);
                old = interest[i];
            }
        }
    }
}

This script is placed on an empty GameObject with a Mesh Filter, Mesh Renderer, and a collider; note that they do nothing without the faces. I have set it up so the face array goes into the triangles variable, and the missing part of the script should go in the SolveFaces() function. Thanks in advance.
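For reference, the usual way a triangles array is filled for a flat xSize × zSize grid is two triangles per cell; a standalone sketch (not taken from the script above):

// Sketch: triangle indices for a flat (xSize + 1) x (zSize + 1) grid of vertices,
// laid out row by row. Each grid cell contributes two triangles.
int[] SolveGridFaces(int xSize, int zSize)
{
    int[] tris = new int[xSize * zSize * 6];
    int t = 0;
    for (int z = 0; z < zSize; z++)
    {
        for (int x = 0; x < xSize; x++)
        {
            int i = z * (xSize + 1) + x;    // bottom-left vertex of this cell
            // first triangle of the cell
            tris[t++] = i;
            tris[t++] = i + xSize + 1;      // vertex one row up
            tris[t++] = i + 1;
            // second triangle of the cell
            tris[t++] = i + 1;
            tris[t++] = i + xSize + 1;
            tris[t++] = i + xSize + 2;
        }
    }
    return tris;
}

A thresholded 3D grid of points like the one CreateShape() builds can’t be triangulated with this flat pattern directly, though; surface-extraction algorithms such as marching cubes are what is normally used for that.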

OpenGL: strange mesh when animating with Assimp

I’m trying to animate a skeletal mesh. The mesh loads with no problems and everything is set up correctly. My problem is that when I calculate the matrix for a given keyframe, the mesh goes nuts.

Here is an image of the problem.


The mesh on the right is using the bone transform as the final matrix; the one on the left is using the matrix calculated from the keyframes. I don’t know where I’m going wrong with this.

Here is my code:

The Animation Clip where all keyframes are stored:

// Fall back to the bone's bind-pose transform when there is no channel for this bone.
Matrix4 AnimationClip::GetTransform(Bone *bone, float deltaTime)
{
    Channel* channel = channels[bone->name];
    if (channel == nullptr)
        return bone->transform;
    else
        return channel->Update(deltaTime);
}

///////////

// Copy the position, scaling and rotation keys out of the Assimp node animation.
Channel::Channel(aiNodeAnim* animNode)
{
    name = string(animNode->mNodeName.data);

    for (GLuint k = 0; k < animNode->mNumPositionKeys; k++)
    {
        aiVectorKey vec = animNode->mPositionKeys[k];
        positions.push_back(
            Keyframe<Vector3>(
                (float)vec.mTime,
                Vector3(vec.mValue.x, vec.mValue.y, vec.mValue.z)));
    }

    for (GLuint k = 0; k < animNode->mNumScalingKeys; k++)
    {
        aiVectorKey vec = animNode->mScalingKeys[k];
        scalings.push_back(
            Keyframe<Vector3>(
                (float)vec.mTime,
                Vector3(vec.mValue.x, vec.mValue.y, vec.mValue.z)));
    }

    for (GLuint k = 0; k < animNode->mNumRotationKeys; k++)
    {
        aiQuatKey vec = animNode->mRotationKeys[k];
        rotations.push_back(
            Keyframe<Quaternion>(
                (float)vec.mTime,
                Quaternion(
                    vec.mValue.x,
                    vec.mValue.y,
                    vec.mValue.z,
                    vec.mValue.w)));
    }
}

// Compose translation * rotation * scale for the given time.
Matrix4 Channel::Update(float animTime)
{
    return
        CalculatePosition(animTime) *
        CalculateRotation(animTime) *
        CalculateScaling(animTime);
}

Matrix4 Channel::CalculatePosition(float animationTime)
{
    return
        Matrix4::CreateTranslation(
            CalcInterpolatedPosition(animationTime));
}

Vector3 Channel::CalcInterpolatedPosition(float animationTime)
{
    if (positions.size() == 1)
        return positions[0].value;

    GLuint positionIndex = FindPosition(animationTime);
    GLuint nextPositionIndex = (positionIndex + 1);

    float deltaTime =
        positions[nextPositionIndex].time - positions[positionIndex].time;
    float factor =
        (animationTime - (float)positions[positionIndex].time) / deltaTime;

    Vector3 startPos = positions[positionIndex].value;
    Vector3 endPos = positions[nextPositionIndex].value;

    Vector3 delta = endPos - startPos;

    return startPos + delta * factor; //Vector3::Lerp(startPos, endPos, factor);
}

// Index of the keyframe just before animationTime (falls back to 0 past the last key).
GLuint Channel::FindPosition(float AnimationTime)
{
    for (GLuint i = 0; i < positions.size() - 1; i++)
    {
        if (AnimationTime < (float)positions[i + 1].time)
            return i;
    }
    return 0;
}

And here is my SkinnedMesh code where I’m fetching the keyframes:

void SkinnedMesh::BoneTransform(
    double delta,
    vector<Matrix4>& transforms,
    AnimationClip *anim)
{
    Matrix4 identity_matrix = Matrix4::Identity();

    // Convert elapsed seconds to animation ticks and wrap by the clip duration.
    float ticksPerSecond =
        anim->ticksPerSecond == 0 ? 25.0f : anim->ticksPerSecond;

    double time_in_ticks = delta * ticksPerSecond;
    float animation_time =
        fmod((float)time_in_ticks, anim->duration);

    UpdateTransforms(
        animation_time,
        anim,
        rootBone,
        identity_matrix);

    transforms.resize(m_num_bones);

    for (GLuint i = 0; i < m_num_bones; i++)
        transforms[i] =
            m_bone_matrices[i].final_world_transform;
}

// Recursively walk the skeleton, accumulating parent transforms.
void SkinnedMesh::UpdateTransforms(
    float p_animation_time,
    AnimationClip *anim,
    Bone *parentBone,
    Matrix4& parentTransform)
{
    Matrix4 boneTransform = // parentBone->transform;
        anim->GetTransform(parentBone, p_animation_time);

    Matrix4 global_transform =
        parentTransform * boneTransform;

    if (m_bone_mapping.find(parentBone->name) !=
        m_bone_mapping.end()) // true if node_name exists in bone_mapping
    {
        GLuint bone_index = m_bone_mapping[parentBone->name];
        m_bone_matrices[bone_index].final_world_transform =
            m_global_inverse_transform *
            global_transform *
            m_bone_matrices[bone_index].offset_matrix;
    }

    for (vector<Bone*>::iterator it =
             parentBone->children.begin();
         it != parentBone->children.end();
         it++)
    {
        UpdateTransforms(
            p_animation_time,
            anim,
            (*it),
            global_transform);
    }
}

void SkinnedMesh::Render(Shader* shader, AnimationClip *clip)
{
    vector<Matrix4> transforms;
    BoneTransform(
        (double)SDL_GetTicks() / 1000.0f,
        transforms,
        clip);

    // Upload one matrix per joint to the jointTransforms[] uniform array.
    for (GLuint i = 0; i < transforms.size(); i++)
    {
        GLfloat values[16];
        Matrix4::ValuePointer(transforms[i], values);

        string a = "jointTransforms[";
        a.append(to_string(i));
        a.append("]");

        glProgramUniformMatrix4fv(
            shader->GetID(),
            shader->GetUniformLocation(a.c_str()),
            1,
            GL_FALSE,
            (const GLfloat*)values);
    }

    for (unsigned int i = 0; i < meshes.size(); i++)
        meshes[i]->Render(shader);
}

It seems the matrices are all wrong and therefore deforming the mesh. Can you help me?

EDIT :

The mesh on the right uses this code:

Matrix4 AnimationClip::GetTransform(Bone *bone, float deltaTime)
{
    Channel* channel = channels[bone->name];
    if (channel == nullptr)
        return bone->transform;
    else
        return bone->transform;
}

A Smooth and Round Voronoi Mesh

I want the edges of a VoronoiMesh to be smooth and round. I found the following code in this answer:

arcgen[{p1_, p2_, p3_}, r_, n_] :=
  Module[{dc = Normalize[p1 - p2] + Normalize[p3 - p2], cc, th},
   cc = p2 + r dc/EuclideanDistance[dc, Projection[dc, p1 - p2]];
   th = Sign[Det[PadRight[{p1, p2, p3}, {3, 3}, 1]]] (\[Pi] - VectorAngle[p3 - p2, p1 - p2])/(n - 1);
   NestList[RotationTransform[th, cc], p2 + Projection[cc - p2, p1 - p2], n - 1]]

roundedPolygon[Polygon[pts_?MatrixQ], r_?NumericQ, n : (_Integer?Positive) : 12] :=
  Polygon[Flatten[
    arcgen[#, r, n] & /@
     Partition[If[TrueQ[First[pts] == Last[pts]], Most, Identity][pts], 3, 1, {2, -2}], 1]]

Consider, for example, the 3×3 hexagonal mesh (see this question for more details):

L1 = 3; L2 = 3;
pts = Flatten[
   Table[{3/2 i, Sqrt[3] j + Mod[i, 2] Sqrt[3]/2}, {i, L2 + 4}, {j, L1 + 4}], 1];
mesh0 = VoronoiMesh[pts];
mesh1 = MeshRegion[MeshCoordinates[mesh0],
   With[{a = PropertyValue[{mesh0, 2}, MeshCellMeasure]},
    With[{m = 3}, Pick[MeshCells[mesh0, 2], UnitStep[a - m], 0]]]];
mesh = MeshRegion[MeshCoordinates[mesh1], MeshCells[mesh1, {2, "Interior"}]]


Using roundedPolygon defined above, I can get what I want with

Graphics[{Directive[LightBlue, EdgeForm[Gray], EdgeThickness -> .001],
    roundedPolygon[#, 0.3]} & /@ MeshPrimitives[mesh, 2]]


This looks good already, but I have the following questions:

  1. Is it possible to fill the gaps between cells automatically? I first thought about setting a Background colour in Graphics that would match the edge colour; this, however, yields a boxy look that I want to avoid. I could also change the edge thickness, but this doesn’t seem to scale with the lattice size. Any idea how to solve this? The following picture illustrates these cases.


  2. Is it possible to scale the EdgeThickness with the mesh size?

  3. When I consider a square mesh, given, for example, by pts = Flatten[Table[{i, j}, {i, L2 + 2}, {j, L1 + 2}], 1] and mesh = MeshRegion[MeshCoordinates[mesh0], MeshCells[mesh0, {2, "Interior"}]]


roundedPolygon seems to fail and returns errors.


Any idea how to solve this?

  4. Finally, I wonder if it’s possible to display the mesh as a mesh-type object and avoid using Graphics.

I don’t expect to get an answer to everything, but any ideas or suggestions are welcome.

WireGuard mesh / overlay without a public node

Is there, or will there be, a mesh/p2p/overlay network VPN using WireGuard that does not require a publicly accessible node?

If I understand correctly, it doesn’t seem possible to use the wesher WireGuard mesh configurator without a VPS / lighthouse / public-indexer type node?
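(For context, the constraint as I understand it: in a plain WireGuard peer entry, at least one side needs a reachable Endpoint, as in the hypothetical entry below, which is why these tools normally lean on a public coordination node for NAT traversal.)

[Peer]
PublicKey = <peer-public-key>
AllowedIPs = 10.0.0.2/32
# Without a reachable Endpoint on at least one peer, two hosts behind NAT
# cannot establish the tunnel on their own.
Endpoint = 203.0.113.7:51820
PersistentKeepalive = 25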

I’m aware of these but can you confirm none of them are capable of what we’re talking about? Or is there another one I’m not aware of?

  • ZeroTier – works well but uses their synchronizer public nodes
  • Tinc – works excellently but requires a publicly accessible node
  • freelan
  • peervpn
  • wormhole.network
  • meshbird
  • gnunet
  • cjdns
  • gin2n
  • nebula mesh
  • dunvpn
  • n2n
  • dmvpn

Thanks