Artifacts at seams between meshes in Unity in isometric camera only

I’m trying to piece together some very simple placeholder models that I intend to use as tiles in Unity and I’m seeing odd pixel artifacts along the seams of the tiles that only show up when using an isometric camera. You can see them in this image.

Example of isometric artifacts

The same geometry viewed through a perspective camera shows no artifacts.

Example of perspective with no artifacts

I verified that the two models are exactly aligned right along the seam by checking the actual vertex data. The artifacts are dependent on the camera itself and shift along the seam as the camera is panned and zoomed.
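The alignment check I did was along these lines: a sketch in plain Python that parses the `v` lines out of two OBJ files and reports any pair of seam vertices that are close but not bit-identical (the file contents here are assumed to be loaded as strings; the function names are mine):

```python
def parse_obj_verts(text):
    """Return the 'v x y z' lines of an OBJ file as (x, y, z) tuples."""
    verts = []
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[0] == "v":
            verts.append(tuple(float(p) for p in parts[1:4]))
    return verts

def seam_mismatches(verts_a, verts_b, eps=1e-6):
    """For each vertex of mesh A, find the closest vertex of mesh B and
    report pairs that are nearby but not exactly equal (a seam gap).
    Brute force O(n*m), fine for small placeholder tiles."""
    gaps = []
    for va in verts_a:
        vb = min(verts_b, key=lambda v: sum((x - y) ** 2 for x, y in zip(va, v)))
        d2 = sum((x - y) ** 2 for x, y in zip(va, vb))
        if 0 < d2 <= eps:
            gaps.append((va, vb))
    return gaps
```

In my case this returned no mismatches, which is why I'm confident the seam vertices coincide exactly.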

I’ve disabled shadows, set the texture filter mode to point, and disabled mipmap generation.

The geometry is quite simple and is imported from the OBJ file below.

# normals
vn -1 0 0
vn 1 0 0
vn 0 0 1
vn 0 0 -1
vn 0 -1 0
vn 0 1 0

# texcoords
vt 0.970703 0.5
vt 0.974609 0.5

# verts
v 0 2 0
v 0 2 -4
v 0 2.3 0
v 0 2.3 -4
v 0 2.5 0
v 0 2.5 -4
v 4 2 0
v 4 2 -4
v 4 2.3 0
v 4 2.3 -4
v 4 2.5 0
v 4 2.5 -4

# faces
f 3/2/1 2/2/1 1/2/1
f 4/2/1 2/2/1 3/2/1
f 5/1/1 4/1/1 3/1/1
f 6/1/1 4/1/1 5/1/1
f 7/2/2 8/2/2 9/2/2
f 9/2/2 8/2/2 10/2/2
f 9/1/2 10/1/2 11/1/2
f 11/1/2 10/1/2 12/1/2
f 7/2/3 3/2/3 1/2/3
f 9/1/3 5/1/3 3/1/3
f 9/2/3 3/2/3 7/2/3
f 11/1/3 5/1/3 9/1/3
f 2/2/4 4/2/4 8/2/4
f 4/1/4 6/1/4 10/1/4
f 8/2/4 4/2/4 10/2/4
f 10/1/4 6/1/4 12/1/4
f 2/2/5 7/2/5 1/2/5
f 8/2/5 7/2/5 2/2/5
f 5/1/6 11/1/6 6/1/6
f 6/1/6 11/1/6 12/1/6

The following texture is applied as the Albedo on a default Unity material (the texture itself looks a bit odd since it was originally generated in MagicaVoxel).

Texture

I’m really at a loss for what could be causing these to show up. I only spotted them because I was testing an outline shader, which outlined all the artifacts since the normals on those pixels were off. With a pixel shader set to display _CameraNormalsTexture instead of the color, the artifacts are still visible as variances in the normals, as you can see in the image below.

Same image but of camera normals

Segment 3d mesh into multiple 3d meshes with equal size

Given a 3D mesh, for example the Stanford bunny, how can I segment the mesh so that all segments are of roughly equal size, assuming the target number of segments is given as an input?

To keep things simple, let’s start by assuming the vertices are uniformly distributed over the mesh, so the problem simplifies to segmenting the mesh such that each segment contains roughly the same number of vertices.

I’ve seen methods that find a bounding box of the mesh, divide the bounding box uniformly, and segment the mesh based on that grid. The problem with this method is that the segment sizes can vary widely when the shape of the original mesh is irregular.

I’ve also seen a method based on the Shape Diameter Function, but again the segment sizes can differ depending on the original mesh.

The problem is that I am not sure what the right keywords are to search for to see what has been done in the literature. I would appreciate any pointers.
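For a concrete baseline under the uniform-vertex assumption, plain k-means on vertex positions gives roughly (but not exactly) equal segments; search terms like "balanced k-means" or "balanced graph partitioning" (e.g. METIS) cover variants that enforce the size constraint. A minimal numpy-only sketch, where `verts` is an (n, 3) array of vertex positions:

```python
import numpy as np

def segment_by_kmeans(verts, k, iters=50, seed=0):
    """Cluster vertices into k segments by position with plain k-means.
    Returns one integer label per vertex. Segment sizes come out only
    roughly equal; balanced variants enforce the equality explicitly."""
    rng = np.random.default_rng(seed)
    # Forgy initialization: pick k distinct vertices as initial centers.
    centers = verts[rng.choice(len(verts), size=k, replace=False)]
    for _ in range(iters):
        # Distance from every vertex to every center, shape (n, k).
        d = np.linalg.norm(verts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned vertices.
        for j in range(k):
            if (labels == j).any():
                centers[j] = verts[labels == j].mean(axis=0)
    return labels
```

Mapping labels back from vertices to faces (e.g. by majority vote over a face's three vertex labels) would then give a face-level segmentation.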

How can I detect differences in 3D meshes and then display them in augmented reality? [on hold]

There used to be an open-source tool called 3D Diff, but it is now part of 3drepo and no longer freely available. I’ve found some tools that come close, but they only work for small STL files.

I’m trying to find a way of comparing two 3D meshes (in any of the popular formats: DGN, FBX, etc.) to see the differences between them. For example, imagine two different meshes of a house, where the newer mesh contains a chimney that was not present before. Are there any resources out there for automatically comparing two such meshes and visually displaying the differences?
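The brute-force version of the comparison I have in mind can be sketched as follows, treating both meshes as point sets (a simplification; the function name and tolerance are mine, and for large models a KD-tree such as scipy's cKDTree would replace the O(n·m) distance matrix):

```python
import numpy as np

def mesh_diff_mask(verts_old, verts_new, tol=1e-3):
    """Flag vertices of the new mesh that have no nearby vertex in the
    old mesh -- e.g. the vertices of a newly added chimney. Returns a
    boolean mask over verts_new. Brute force O(n*m) distance matrix."""
    d = np.linalg.norm(verts_new[:, None, :] - verts_old[None, :, :], axis=2)
    return d.min(axis=1) > tol
```

The flagged vertices could then be rendered as a highlighted overlay on top of the model in the AR view, which is roughly the visualization I'm after.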