How to texture procedurally generated custom terrain mesh

I am making infinite procedural terrain for my game and am currently trying to apply textures to it. The generated terrain has different biomes, so I want to texture the terrain per biome, with different textures applied based on the steepness and height of the terrain. I also generate distinct areas, such as settlements and roads, so I want to apply specific textures to places like those as well.

Unity terrain supports only 8 layers I think, and generally it should be around 4, one per splatmap channel? In my case I generate a custom mesh, so I don't really need to worry about that limit; however, I am trying to figure out how to generate some sort of texture map (splatmap) that can drive lots of textures. I want to use (albedo, normal, height) for each 'layer'. Considering that the biomes are generated from the noise map and each chunk can contain many different biomes, I need more than 4 layers available. For example, steep areas like cliffs need a rocky texture applied, and each biome has a different rocky texture. So if a chunk has two biomes in it, there are lots of different textures that need to be used, including various misc textures for paths, settlement grounds, etc.

Something like this, where a single chunk mesh has, for example, two biomes. The terrain also has a settlement with a dirt texture applied, which extends into both biomes. So essentially, lots of textures in just one mesh.

[image: chunk mesh with two biomes and a settlement dirt area crossing both]

The other chunks can have similar biomes or completely different ones. I would also like to be able to apply a texture to a specific position on a terrain chunk at runtime, for example if the player creates a path, or if a placed object spreads a specific texture around itself on the terrain. Something like this, for example, where a placed tree has spread a leafy ground texture around it:

[image: tree with leafy ground texture spread around its base]

So, I am looking for a flexible way to apply many different textures to a generated terrain mesh. I would like to do this in script when generating a chunk, since at that point I know which biome is at each position, as well as things like steepness.

I am using Shader Graph to create the terrain shader. I'm not very experienced with shaders, so I would like to ask: what would be the correct way to achieve something like this?
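
To make the question concrete, this is roughly what I picture on the C# side: baking the biome/steepness decisions into a small control texture (splatmap) per chunk, which the Shader Graph shader then samples to blend its layers. Everything here is just a sketch; GetBiome, GetSteepness and IsPathOrSettlement are placeholders for my own generator code:

    using UnityEngine;

    public class SplatmapBaker : MonoBehaviour
    {
        // One RGBA splatmap per chunk: RGB = blend weights for three layers
        // (ground, rock, path), A = biome index packed as index/255.
        public Texture2D BakeSplatmap(int resolution, float chunkWorldSize, Vector2 chunkOrigin)
        {
            var splat = new Texture2D(resolution, resolution, TextureFormat.RGBA32, false);
            // Point filtering so the biome index in alpha is never interpolated;
            // the weight channels could live in a separate bilinear texture instead.
            splat.filterMode = FilterMode.Point;

            for (int y = 0; y < resolution; y++)
            {
                for (int x = 0; x < resolution; x++)
                {
                    Vector2 worldPos = chunkOrigin + new Vector2(x, y) * (chunkWorldSize / resolution);

                    float steepness = GetSteepness(worldPos); // 0..1, from the heightmap
                    int biome = GetBiome(worldPos);           // index from the noise map

                    // Steep areas use the biome's rock layer, flat areas its
                    // ground layer; settlements/paths override both.
                    float rock = Mathf.InverseLerp(0.4f, 0.7f, steepness);
                    float path = IsPathOrSettlement(worldPos) ? 1f : 0f;
                    float ground = (1f - rock) * (1f - path);

                    splat.SetPixel(x, y, new Color(ground, rock * (1f - path), path, biome / 255f));
                }
            }
            splat.Apply();
            return splat;
        }

        // Placeholders for the actual generator queries.
        float GetSteepness(Vector2 p) { return 0f; }
        int GetBiome(Vector2 p) { return 0; }
        bool IsPathOrSettlement(Vector2 p) { return false; }
    }

The idea would be that the RGB weights pick between ground/rock/path layers, while the alpha picks which biome's texture set to sample (e.g. a slice of a Texture2DArray), so two biomes in one chunk just means the alpha varies across the splatmap rather than needing more weight channels. Does that sound like a workable direction, or is there a more standard approach?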

Floodfilling a texture in HLSL Compute shader

I have a very large texture which I want to fill with values representing "distance in units from a river tile". The texture is already seeded with the origin river points (meaning distance/value = 0 in them).

From these points I want to flood-fill outwards, incrementing the value with each step away from an origin point.

Doing this on the CPU is no problem using a stack or similar structure, but ideally I want to do this in the middle of a compute shader execution which runs over the entire texture.

I have read this, which sounds similar to what I want to do, but it mentions there might be a smarter way to do it with compute shaders, which is what I am using.

Any ideas on how to solve this with compute shaders?
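
To show what I mean by the naive version: a ping-pong relaxation where every pass lets the distance field grow by one texel, dispatched repeatedly from the CPU. The texture names and the Size constant here are mine, not from the linked post:

    // Src holds the previous pass, Dst the next one; the CPU swaps them
    // between dispatches. Seed: 0 at river texels, a huge value elsewhere.
    Texture2D<float>   Src;
    RWTexture2D<float> Dst;
    uint2 Size;

    [numthreads(8, 8, 1)]
    void Relax(uint3 id : SV_DispatchThreadID)
    {
        if (id.x >= Size.x || id.y >= Size.y)
            return;

        // Each texel becomes min(itself, nearest neighbour + 1).
        float best = Src[id.xy];
        if (id.x > 0)          best = min(best, Src[id.xy - uint2(1, 0)] + 1);
        if (id.x + 1 < Size.x) best = min(best, Src[id.xy + uint2(1, 0)] + 1);
        if (id.y > 0)          best = min(best, Src[id.xy - uint2(0, 1)] + 1);
        if (id.y + 1 < Size.y) best = min(best, Src[id.xy + uint2(0, 1)] + 1);
        Dst[id.xy] = best;
    }

The catch is the pass count: distances propagate one texel per dispatch, so the worst case is as many passes as the largest distance. From what I've read, the "smarter way" may be the Jump Flooding Algorithm (sampling at power-of-two offsets, converging in O(log n) passes), though that computes distance to the nearest seed rather than flood-fill step counts.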

DirectX/SharpDX How to render bitmap drawing to texture off-screen?

This isn’t specifically related to gamedev, but SO seems dead when it comes to questions about DirectX so I thought I’d try asking here:

I've been making changes to a desktop recording project found here, which uses OutputDuplication. The project originally records the entire desktop to video; my changes allow specifying a region of the desktop to record instead.

My main change to achieve this is adding a staging texture: I copy the frame output to a render texture, draw the mouse pointer onto it via a render target, then copy a subresource region of that render texture into the staging texture, which finally gets written to the sink writer.

This works fine, but the original project also presents the frame using a swapchain with a window-handle render target. I want to remove the swapchain and render off-screen instead, since I don't want to display the output in a window; I only want it written to an mp4 file with the sink writer. How do I achieve this correctly?

Here is the relevant code for getting the output frame:

private void AcquireFrame() {
    SharpDX.DXGI.Resource frame;
    OutputDuplicateFrameInformation frameInfo;
    OutputDuplication.Value.AcquireNextFrame(500, out frameInfo, out frame);

    using(frame) {
        if(frameInfo.LastMouseUpdateTime != 0) {
            Cursor.Visible = frameInfo.PointerPosition.Visible;
            Cursor.Position = frameInfo.PointerPosition.Position;
            Cursor.BufferSize = frameInfo.PointerShapeBufferSize;
            ComputeCursorBitmap(); // Retrieves data related to mouse pointer and populates Cursor variables
        }

        //if(frameInfo.LastPresentTime != 0) {
        //    swapChain.Value.Present(1, 0); // Removing use of swapchain
        //}

        if(RecordingState == DuplicatorState.Started && (frameInfo.LastPresentTime != 0 | (frameInfo.LastPresentTime != 0 && Cursor.Bitmap != null))) {
            var renderDescription = new Texture2DDescription() {
                CpuAccessFlags = CpuAccessFlags.None,
                BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
                Format = Format.B8G8R8A8_UNorm,
                Width = SourceSize.Width,
                Height = SourceSize.Height,
                OptionFlags = ResourceOptionFlags.None,
                MipLevels = 1,
                ArraySize = 1,
                SampleDescription = { Count = 1, Quality = 0 },
                Usage = ResourceUsage.Default,
            };
            videoFrame.RenderTexture = new Texture2D(OutputDevice.Value, renderDescription);

            var stagingDescription = new Texture2DDescription() {
                CpuAccessFlags = CpuAccessFlags.Read,
                BindFlags = BindFlags.None,
                Format = Format.B8G8R8A8_UNorm,
                Width = RegionSize.Width,
                Height = RegionSize.Height,
                OptionFlags = ResourceOptionFlags.None,
                MipLevels = 1,
                ArraySize = 1,
                SampleDescription = { Count = 1, Quality = 0 },
                Usage = ResourceUsage.Staging,
            };
            videoFrame.StagingTexture = new Texture2D(OutputDevice.Value, stagingDescription);

            using(var res = frame.QueryInterface<SharpDX.Direct3D11.Resource>()) {
                OutputDevice.Value.ImmediateContext.CopyResource(res, videoFrame.RenderTexture);

                if(Cursor.Visible) {
                    using(var fac = new SharpDX.Direct2D1.Factory1())
                    using(var dxDev = OutputDevice.Value.QueryInterface<SharpDX.DXGI.Device>())
                    using(var dc = new SharpDX.Direct2D1.DeviceContext(new SharpDX.Direct2D1.Device(fac, dxDev), DeviceContextOptions.EnableMultithreadedOptimizations))
                    using(var surface = videoFrame.RenderTexture.QueryInterface<Surface>())
                    using(var bmp = new Bitmap1(dc, surface))
                    using(Cursor.Bitmap = new Bitmap(dc, Cursor.Size, Cursor.DataPointer, Cursor.Pitch, Cursor.Props)) {
                        dc.Target = bmp;
                        dc.BeginDraw();

                        int captureX = ((Cursor.Position.X - Cursor.Hotspot.X) * SourceSize.Width) / SourceSize.Width;
                        int captureY = ((Cursor.Position.Y - Cursor.Hotspot.Y) * SourceSize.Height) / SourceSize.Height;

                        RawRectangleF rect = new RawRectangleF(
                            captureX,
                            captureY,
                            captureX + (Cursor.Size.Width * SourceSize.Width) / SourceSize.Width,
                            captureY + (Cursor.Size.Height * SourceSize.Height) / SourceSize.Height);

                        dc.DrawBitmap(Cursor.Bitmap, rect, 1, BitmapInterpolationMode.NearestNeighbor);

                        //dc.Flush();
                        dc.EndDraw();
                    }
                }

                OutputDevice.Value.ImmediateContext.CopySubresourceRegion(
                    videoFrame.RenderTexture, 0,
                    new ResourceRegion(RegionLocation.X, RegionLocation.Y, 0, RegionSize.Width, RegionSize.Height, 1),
                    videoFrame.StagingTexture, 0, 0, 0);
                videoFrame.RenderTexture.Dispose();
            }

            WriteVideoSample(videoFrame);
        }
        OutputDuplication.Value.ReleaseFrame();
    }
}

If I remove the swapchain, drawing the mouse pointer bitmap to the render texture surface on the Direct2D1 device context breaks: it appears to stutter, not updating smoothly each frame.

I found an answer related to my problem here, but when I try adding dc.Flush(); after drawing the mouse pointer bitmap, it doesn't seem to have any effect on the problem.

Trying to create roads from a 2D texture in a 3D map using RoadArchitect

I need help with a complex task: I'm trying to create roads using RoadArchitect (https://github.com/FritzsHero/RoadArchitect/tree/RewrittenAPI) by detecting all roads from a texture. I have created a model class to store every detected road, using a little bit of recursion:

    [Serializable]
    public class RoadNode : IPathNode
    {
        //[JsonConverter(typeof(RoadNodeConverter))]
        public ConcurrentBag<int> Connections { get; set; } // TODO: IPathNode<T> + RoadNode : IPathNode<RoadNode> + Connections (ConcurrentBag<RoadNode>), but can't be serialized due to StackOverflow and OutOfMemory exceptions

        public Point Position { get; set; }
        public bool Invalid { get; set; }
        public int Thickness { get; set; }
        public ConcurrentBag<int> ParentNodes { get; set; }

        public RoadNode()
        {
            //Connections = new List<RoadNode>();
        }

        public RoadNode(Point position, int thickness)
            //: this()
        {
            Position = position;
            Thickness = thickness;
        }

        public RoadNode(Point position, bool invalid, int thickness)
            //: this()
        {
            Position = position;
            Invalid = invalid;
            Thickness = thickness;
        }

        public RoadNode(int x, int y, int thickness)
            : this(new Point(x, y), thickness)
        {
        }

        public void SetThickness(int thickness)
        {
            // TODO: Call this when needed and thickness == -1
            Thickness = thickness;
        }

        public int GetKey()
        {
            return F.P(Position.x, Position.y, mapWidth, mapHeight);
        }
    }
    public interface IPathNode
    {
        ConcurrentBag<int> Connections { get; }
        Point Position { get; }
        bool Invalid { get; }
    }

At every loaded chunk (I'm using the SebLeague tutorial to create the chunks), I get all the road points for that chunk. Then I try to iterate over them, but I have problems interpreting the results.

I have part of the task done:

    public static IEnumerable<Road> CreateRoad(this IEnumerable<Point> points, StringBuilder builder)
    {
        if (points.IsNullOrEmpty())
        {
            builder?.AppendLine($"\tskipped...");
            yield break; // This chunk doesn't contain any road. Exiting.
        }

        //var dict = points.Select(p => new {Index = p.GetKey(), Point = p}).ToDictionary(x => x.Index, x => x.Point);
        //var builder = new StringBuilder();
        var roads = GetRoads(points, builder);
        foreach (var list in roads)
        {
            if (list.IsNullOrEmpty()) continue;
            //var first = road.First();
            //var backIndex = ((Point)CityGenerator.SConv.GetRealPositionOnMap(first)).GetKey();

            var road = CreateIndependantRoad(list);
            builder?.AppendLine($"\t... finished road ({road.name}) with {list.Count} nodes.");
            yield return road;
        }
        //Debug.Log(builder?.ToString());
    }

    private static IEnumerable<List<Vector3>> GetRoads(IEnumerable<Point> points, StringBuilder builder)
    {
        var model = RoadGenerator.RoadModel;

        var queue = new Queue<Point>(points);
        int i = 0;

        builder?.AppendLine($"\tcount: {queue.Count}");

        var dictionary = new Dictionary<int, List<Vector3>>();

        while (queue.Count > 0)
        {
            var list = new List<Vector3>();

            var pt = queue.Dequeue();
            var itemIndex = pt.GetKey();
            dictionary.Add(itemIndex, list);

            var node = model.SimplifiedRoadNodes[itemIndex];

            builder?.AppendLine($"\troad iteration: {i}");

            //var conn = node.Connections;
            var nodes = GetRoadNodes(node,
                ptVal =>
                {
                    if (ptVal.HasValue) queue = new Queue<Point>(queue.Remove(ptVal.Value));
                    return queue;
                },
                parentNodeIndex => { return dictionary[parentNodeIndex]; },
                builder);

            foreach (var point in nodes)
                list.Add(CityGenerator.SConv.GetRealPositionOnMap((Vector2)point).GetHeightForPoint());

            yield return list;
            ++i;
        }
    }

    private static IEnumerable<Point> GetRoadNodes(RoadNode node, Func<Point?, Queue<Point>> queueFunc, Func<int, List<Vector3>> parentFunc, StringBuilder builder, int level = -1)
    {
        if (queueFunc == null) throw new ArgumentNullException(nameof(queueFunc));
        if (parentFunc == null) throw new ArgumentNullException(nameof(parentFunc));

        var conn = node.Connections;
        if (conn.IsNullOrEmpty())
        {
            yield return node.Position;
            yield break;
        }
        if (queueFunc(null).Count == 0) yield break;

        ++level;
        builder?.AppendLine($"{new string('\t', 2)}level: {level} -> {queueFunc(null).Count} items");

        //if (conn.Count == 1)
        //{
        //    var firstNode = conn.First().GetNode();
        //    ////var firstPoint = conn.First().GetPoint();

        //    var list = parentFunc(firstNode.ParentNodes.First()); // TODO: parent nodes should be one...
        //    list.Add(CityGenerator.SConv.GetRealPositionOnMap((Vector2)conn.First().GetPoint()).GetHeightForPoint());
        //}
        //else
        {
            foreach (var item in conn)
            {
                var pt = item.GetPoint();
                if (!queueFunc(null).Contains(pt)) yield break;
                yield return pt;
                if (queueFunc(pt).Count == 0) yield break;

                var subnode = pt.GetKey().GetNode();
                var pts = GetRoadNodes(subnode, queueFunc, parentFunc, builder, level);
                foreach (var point in pts)
                    yield return point;
            }
        }
    }

You can see the StringBuilder results here: https://pastebin.com/1tW4V4BC

But as you can see, I have problems with the roads: some roads have only one node, others only two…

I'm starting to think my current implementation isn't right at all, because instead of grouping all nodes into one major road I'm splitting them into road chunks that make no sense (see the sketch below for what I mean by grouping).
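
What I suspect I actually need, before worrying about chunks at all, is a plain connected-components pass that groups every node reachable through Connections into one road. A minimal sketch of that idea (nodes keyed the same way as SimplifiedRoadNodes; this is not my current code):

    // Groups road nodes into whole roads: BFS over Connections until
    // exhausted. Each resulting list is one connected road network.
    private static List<List<RoadNode>> GroupIntoRoads(Dictionary<int, RoadNode> nodes)
    {
        var visited = new HashSet<int>();
        var roads = new List<List<RoadNode>>();

        foreach (var key in nodes.Keys)
        {
            if (!visited.Add(key))
                continue; // already assigned to a road

            var road = new List<RoadNode>();
            var frontier = new Queue<int>();
            frontier.Enqueue(key);

            while (frontier.Count > 0)
            {
                var node = nodes[frontier.Dequeue()];
                road.Add(node);

                foreach (var next in node.Connections)
                    if (visited.Add(next))
                        frontier.Enqueue(next);
            }

            roads.Add(road);
        }

        return roads;
    }

Each grouped list could then be ordered into a polyline (e.g. walking from a degree-1 endpoint) before handing it to CreateIndependantRoad, instead of creating a Road per queue pop. Does that sound like a saner direction?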

For this reason I created this topic: to clarify things for myself, because I'm not sure about what I'm doing or which way I should take.

Bind render result to texture id

I want to save the result of rendering the screen and then apply another shader to that result. The typical way is to read the screen back using glReadPixels, then upload that image to the GPU and apply my effect. Is there a way to bind the screen result to a texture id directly on the GPU, instead of retrieving it back to the CPU and uploading it to the GPU again?
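
For context, what I think I'm looking for is a framebuffer object, so the scene is rendered straight into a texture that never leaves the GPU. A rough sketch in C (untested; width/height assumed defined elsewhere):

    #include <GL/glew.h> /* or whichever GL loader is in use */

    GLuint fbo, tex;

    /* Texture that will receive the rendered scene */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* Attach it to a framebuffer object */
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    /* ... render the scene here: the result lands in tex with no CPU round trip ... */

    /* Back to the default framebuffer; tex can now be sampled
       by the post-effect shader */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, tex);

Alternatively, if the scene must still be drawn to the default framebuffer first, glCopyTexSubImage2D can copy the current framebuffer into a bound texture entirely on the GPU. Is one of these the right approach?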

How to generate a star onto a render texture with spherical warping

How would one procedurally generate a star in a compute shader that looks like one of these two at any size needed? Any way to transfer this onto a spherical map would also be appreciated. The goal is to create a spherical skybox of stars (the stars are pre-generated, not just decoration).

So far I have accurate positioning of the stars on the spherical skybox, but I lack the equations to get them looking like I want.

Preferably the one on the right.

[images: two reference star renders]

Below is what I currently have: ~5 to 15 ms processing time, with a little over 30k stars.

[image: current star rendering result]

Using Unity 2019.3.1f1; it needs to be compute shader compatible (if not, I will convert it somehow). Output is a RenderTexture.
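
For reference, the rough shape of what I'm attempting: one thread per star, projecting its direction to equirectangular coordinates and splatting a Gaussian core plus a faint cross for the spikes. All the constants here are made up, and concurrent writes from overlapping stars are a known race I haven't solved:

    RWTexture2D<float> Result;          // single-channel equirect render texture
    StructuredBuffer<float3> StarDirs;  // unit direction per star
    uint StarCount;
    uint2 TexSize;

    static const float PI = 3.14159265f;
    static const int RADIUS = 6;        // splat window half-size in pixels

    [numthreads(64, 1, 1)]
    void SplatStars(uint3 id : SV_DispatchThreadID)
    {
        if (id.x >= StarCount)
            return;

        float3 d = StarDirs[id.x];

        // Direction -> equirectangular pixel
        float u = atan2(d.z, d.x) / (2.0f * PI) + 0.5f;
        float v = acos(clamp(d.y, -1.0f, 1.0f)) / PI;
        int2 center = int2(u * TexSize.x, v * TexSize.y);

        for (int y = -RADIUS; y <= RADIUS; ++y)
        for (int x = -RADIUS; x <= RADIUS; ++x)
        {
            int2 p = center + int2(x, y);
            p.x = (p.x + (int)TexSize.x) % (int)TexSize.x; // wrap longitude
            if (p.y < 0 || p.y >= (int)TexSize.y)
                continue;

            float r2 = (float)(x * x + y * y);
            float core = exp(-r2 * 0.35f);                                      // gaussian center
            float spikes = (x == 0 || y == 0) ? 0.3f * exp(-r2 * 0.05f) : 0.0f; // cross
            // Read-modify-write race if two stars overlap; overlaps just
            // brighten, but atomics on a uint target would be the strict fix.
            Result[p] = Result[p] + core + spikes;
        }
    }

The equirect mapping also stretches the splat near the poles, which may be part of why it doesn't look right at every size.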

OpenGL texture2d/image sampling issue. Strange artifacts in texture

I have an issue when using textures in OpenGL: strange artifacts occur where geometry overlaps, but not always. Video Reference. I am using a GL_TEXTURE_2D with GL_ARB_shader_image_load_store to make a custom depth-test shader that stores material data for opaque and transparent geometry. The video shows the artifacts occurring where the support structure of a table is occluded behind the table top but, strangely, not where the base of the table is occluded by the support.

#version 450 core

in VS_OUT {
    vec3 Position;
    vec3 Normal;
    vec2 TexCoords;

    mat3 TanBitanNorm;
} fs_in;

// Material data
uniform sampler2D uAlbedoMap;
uniform sampler2D uNormalMap;
uniform sampler2D uMetallicMap;

// Material info out
layout(rgba16f) coherent uniform image2D uAlbedoDepthOpaque;
layout(rgba16f) coherent uniform image2D uNormalMetallicOpaque;
layout(rgba16f) coherent uniform image2D uAlbedoDepthTransparent;
layout(rgba16f) coherent uniform image2D uNormalAlphaTransparent;

// Depth info in/out
layout(r8) uniform image2D uDepthBufferOpaque;
layout(r8) uniform image2D uDepthBufferTransparent;

void main() {
    vec3 n_tex = texture(uNormalMap, fs_in.TexCoords).xyz;
    n_tex = n_tex * 2.0f - 1.0f;

    ivec2 tx_loc = ivec2(gl_FragCoord.xy);
    const float opaque_depth = imageLoad(uDepthBufferOpaque, tx_loc).r; // Stored depth of opaque
    const float trans_depth = imageLoad(uDepthBufferTransparent, tx_loc).r; // Stored depth of transparent

    // Depth processing
    if (gl_FragCoord.z > opaque_depth) {
        bool tran = false;
        if (trans_depth > opaque_depth)
            tran = trans_depth > gl_FragCoord.z;
        else
            tran = true;

        // Transparent
        if (texture(uAlbedoMap, fs_in.TexCoords).a < 1.0f && tran) {
            imageStore(uDepthBufferTransparent, tx_loc,
                vec4(gl_FragCoord.z));

            imageStore(uAlbedoDepthTransparent, tx_loc,
                vec4(texture(uAlbedoMap, fs_in.TexCoords).rgb, gl_FragCoord.z));
            imageStore(uNormalAlphaTransparent, tx_loc,
                vec4(abs(length(n_tex) - 1.0f) > 0.1f ? fs_in.Normal : normalize(fs_in.TanBitanNorm * n_tex), texture(uAlbedoMap, fs_in.TexCoords).a));
        }

        // Opaque
        else {
            imageStore(uDepthBufferOpaque, tx_loc,
                vec4(gl_FragCoord.z));

            imageStore(uAlbedoDepthOpaque, tx_loc,
                vec4(texture(uAlbedoMap, fs_in.TexCoords).rgb, gl_FragCoord.z));
            imageStore(uNormalMetallicOpaque, tx_loc,
                vec4(abs(length(n_tex) - 1.0f) > 0.1f ? fs_in.Normal : normalize(fs_in.TanBitanNorm * n_tex), texture(uMetallicMap, fs_in.TexCoords).r));
        }
    }

    if (opaque_depth == 0.0f) {
        imageStore(uDepthBufferOpaque, tx_loc,
            vec4(0.125f));
    }
    else {
        imageStore(uDepthBufferOpaque, tx_loc,
            vec4(0.125f + opaque_depth));
    }
}

A render with overlapping geometry shows that the artifacts still occur outside of reading from the texture. Also, in the video I move the camera back and forth (with an orthographic projection) and the artifacts become brighter and darker. A render with overlapping geometry without depth processing shows that the brighter/darker values came from the depth test.

Any ideas on why this occurs, and how I can fix it?

How to get/set specific properties of a video texture in pixi.js?

I managed to get a video to play in Pixi using the following line:

    this._texture = PIXI.Texture.from(require("Images/video.mp4")); 

The problem is that I can't find any properties for things such as pausing it, seeking forward/backward, adjusting the volume, adjusting the playback speed, etc.

Neither PIXI.Texture nor PIXI.Sprite seems to have any properties for this. Is this really all the control PIXI gives you, or am I missing something?
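
In case it helps others answer: from poking around in the v5 sources, I suspect the underlying HTMLVideoElement is reachable through the base texture's resource, something like the following, though I don't know if this is the intended public API:

    // PIXI v5: Texture.from(videoUrl) wraps the video in a VideoResource,
    // whose .source should be the plain HTMLVideoElement.
    const video = this._texture.baseTexture.resource.source;

    // From there the standard HTMLMediaElement API would apply:
    video.pause();
    video.currentTime += 10;  // seek forward 10 seconds
    video.volume = 0.5;
    video.playbackRate = 2.0; // playing speed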