Smoothing random noise with different amplitudes

I have a function that returns a bounded noise value. For example, let’s imagine that our input range is [-1, 1]. With my method I can return a bounded/in-range noise value (depending on the biome we are currently in).

    /// <summary>
    /// Converts the range.
    /// </summary>
    /// <param name="originalStart">The original start.</param>
    /// <param name="originalEnd">The original end.</param>
    /// <param name="newStart">The new start.</param>
    /// <param name="newEnd">The new end.</param>
    /// <param name="value">The value.</param>
    /// <returns></returns>
    public static float ConvertRange(
        float originalStart, float originalEnd, // original range
        float newStart, float newEnd, // desired range
        float value) // value to convert
    {
        float scale = (newEnd - newStart) / (originalEnd - originalStart);
        return (newStart + ((value - originalStart) * scale));
    }

    /// <summary>
    /// Gets the bounded noise.
    /// </summary>
    /// <param name="value">The value.</param>
    /// <param name="meanHeight">Height of the mean.</param>
    /// <param name="amplitude">The amplitude.</param>
    /// <returns></returns>
    // [InRange(-.5f, .5f)] && [InRange(0, 1)]
    public static float GetBoundedNoise(float value, float meanHeight, float amplitude)
    {
        return Mathf.Clamp01(ConvertRange(0, 1, -amplitude, amplitude, ConvertRange(-1, 1, 0, 1, value)) + (meanHeight + .5f));
    }
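For readers following the math, this is the same remapping as a language-agnostic sketch (Python purely for illustration; the snake_case names are mine, not from the project): a linear remap between ranges, then a shift by the biome's mean height and a clamp to [0, 1].

```python
def convert_range(orig_start, orig_end, new_start, new_end, value):
    """Linearly remap `value` from [orig_start, orig_end] to [new_start, new_end]."""
    scale = (new_end - new_start) / (orig_end - orig_start)
    return new_start + (value - orig_start) * scale

def get_bounded_noise(value, mean_height, amplitude):
    """Noise in [-1, 1] -> [0, 1] -> [-amplitude, amplitude], then shifted by
    the biome's mean height (mean_height is in [-.5, .5], hence the +.5)."""
    remapped = convert_range(-1, 1, 0, 1, value)
    scaled = convert_range(0, 1, -amplitude, amplitude, remapped)
    return min(max(scaled + (mean_height + 0.5), 0.0), 1.0)
```

With `mean_height = 0` and `amplitude = 0.5`, raw noise of -1, 0, 1 maps to heights 0, 0.5, 1 respectively.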

Check this image to understand what mean height and amplitude are:

Note: The noise value is given by the FastNoise library. (You can find it on GitHub.)

The problem is that there is a height mismatch at each region border:

Normal region:


Noised region:


A black pixel corresponds to y = 0, and a white pixel to y = 1. (You can ignore the yellow pixels.)

But as you can see, there are different biomes with different amplitudes and mean heights (Water, Grass, Grass, DryGrass, Asphalt).

I have tried Gaussian convolution, but there is a problem: too many iterations for the CPU (it would ideally be executed on the GPU).

Why? Well, I apply a Gaussian convolution at each region border pixel (I have an optimized method to get those pixels). Imagine we get 810k points. Applying a convolution at one pixel takes 81 iterations (to average the heights of that portion). But that is only for one pixel; we then have to repeat the averaging for every pixel in the surrounding grid, with a kernel of 81 taps (9×9), 25 taps (5×5), or whatever size we choose.

There are (in the best case) 1,640,250,000 iterations to do, just to get a very small smoothed strip around each region border.
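To make the cost concrete, here is the arithmetic behind that figure as I read it (810k border pixels, a 5×5 kernel, and the 9×9 = 81 inner grid pass), along with the classic mitigation: a Gaussian kernel is separable, so one k×k 2D pass can be replaced by two k-tap 1D passes. This is a back-of-the-envelope sketch, not a measurement:

```python
border_pixels = 810_000
kernel_taps_2d = 5 * 5   # 5x5 kernel, the "best case" in the question
grid_pass = 9 * 9        # the 81-iteration inner loop per kernel tap

# Brute-force cost: every border pixel x every kernel tap x the grid pass.
brute_force = border_pixels * kernel_taps_2d * grid_pass  # 1_640_250_000

# A Gaussian is separable: a horizontal 5-tap pass followed by a vertical
# 5-tap pass gives the same blur with 5 + 5 taps instead of 5 * 5.
separable = border_pixels * (5 + 5) * grid_pass  # 656_100_000, 2.5x fewer
```

The same separability argument applies to the 9×9 kernel, where the saving grows to 81/18 = 4.5×.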

You can see my old code for this:

    // Smoothing segments

    //var kernel = Kernels.GAUSSIAN_KERNEL_9;

    //int kernelSize = kernel.GetLength(0);

    //if (pol != null && !pol.Segments.IsNullOrEmpty())
    //    foreach (Segment segment in pol.Segments)
    //    {
    //        int d = segment.Distance;

    //        for (int i = 0; i <= d; ++i)
    //        {
    //            Point p = (Vector2)segment.start + segment.Normal * d;

    //            //if (d % kernelSize == 0) // I tried to reduce iterations by checking whether the current d modulo kernelSize was 0. But no luck.

    //            Filters<Color32>.ConvolutionAtPoint(mapWidth, mapHeight, p.x, p.y, target, kernel, 1, pol.Center.x, pol.Center.y, true);
    //        }
    //    }
    //else
    //{
    //    if (pol == null)
    //        ++nullPols;
    //    else if (pol != null && pol.Segments.IsNullOrEmpty())
    //        ++nullSegments;
    //}

The ++ variables are debug counters; ignore them.

And ConvolutionAtPoint does the following:

    private static void ConvolutionAtPointFunc(int width, int height, T[] source, params object[] parameters)
    {
        float[][] kernel = (float[][])parameters[0];
        int kernelSize = kernel.Length;
        int iteration = (int)parameters[1];

        int _x = (int)parameters[2];
        int _y = (int)parameters[3];
        int xOffset = (int)parameters[4];
        int yOffset = (int)parameters[5];
        bool withGrid = (bool)parameters[6];

        for (int ite = 0; ite < iteration; ++ite)
        {
            Color c = new Color(0f, 0f, 0f, 0f);

            for (int y = 0; y < kernelSize; ++y)
            {
                int ky = y - kernelSize / 2;
                for (int x = 0; x < kernelSize; ++x)
                {
                    int kx = x - kernelSize / 2;

                    try
                    {
                        if (!withGrid)
                        {
                            c += ((Color)(dynamic)source[F.P(_x + kx + xOffset, _y + ky + yOffset, width, height)]) * kernel[x][y];
                            ++FiltersDebug.convolutionIterations;
                        }
                        else
                        {
                            for (int i = 0; i < 81; ++i)
                            {
                                int __x = i % 9,
                                    __y = i / 9;

                                c += ((Color)(dynamic)source[F.P(_x + __x + kx + xOffset, _y + __y + ky + yOffset, width, height)]) * kernel[x][y];
                                ++FiltersDebug.convolutionIterations;
                            }

                            source[F.P(_x + kx + xOffset, _y + ky + yOffset, width, height)] = (dynamic)c;
                        }
                    }
                    catch
                    {
                        ++FiltersDebug.outOfBoundsExceptionsIn;
                    }
                }
            }

            if (!withGrid)
                try
                {
                    source[F.P(_x + xOffset, _y + yOffset, width, height)] = (dynamic)c;
                }
                catch
                {
                    ++FiltersDebug.outOfBoundsExceptions;
                }
        }
    }

As you can see, it's very unoptimized. The code is based on this:

I can’t imagine any other way to do this. The best approach I could come up with is to draw a line perpendicular to the current segment, assigning a mean noise value to each point along that line. (I have a utility to get the edges of a polygon; a segment is formed by a start point and an end point, where the segment start is the current edge and the segment end is the previous edge.) But there is also a problem:


There is a gap (marked in yellow) between segments with an obtuse projection, and an overlapping noise gradient on segments with a sharp (acute) projection.
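One standard way around both the gap and the overlap (a suggestion on my part, not code from the project) is to drop the per-segment perpendiculars and instead blend by each pixel's distance to the *nearest* border point. The distance field is continuous over the whole map, so obtuse corners get no gaps and acute corners get no double coverage. A minimal sketch, where `border_points` stands in for whatever border-pixel list the optimized method produces:

```python
import math

def blend_weight(px, py, border_points, falloff=8.0):
    """Returns 0 at the border, rising smoothly to 1 at `falloff` pixels away.
    Use it to lerp between the smoothed border height and the biome's own height."""
    nearest = min(math.hypot(px - bx, py - by) for (bx, by) in border_points)
    t = min(nearest / falloff, 1.0)
    return t * t * (3.0 - 2.0 * t)  # smoothstep: soft ease-in/ease-out
```

A brute-force nearest-point search is O(n) per pixel; in practice you would precompute the whole distance field once with a distance transform (two passes over the map) instead of querying per pixel.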

Another approach I came up with is to compute a gradient contour from all the region borders that need it.

Something like that:


I also looked at the Cuberite implementation (

But I don’t understand this part, or whether I can extract something from it:

If we take the area of 9×9 biomes centered around the queried column, generate height for each of the biomes therein, sum them up and divide by 81 (the number of biomes summed), we will be effectively making a 9-long running average over the terrain, and all the borders will suddenly become smooth. The following image shows the situation from the previous paragraph after applying the averaging process.
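My reading of that paragraph, as a sketch (Python for illustration; `biome_at` and `biome_height` are hypothetical stand-ins for a biome lookup and a per-biome height function, not Cuberite's actual code): for every terrain column, look up the biome of each of the 9×9 surrounding positions, compute the height each of those biomes *would* produce at the queried column, and average the 81 results. The key point is that every neighbouring biome is evaluated at the queried position, so the average changes gradually as the window slides across a border.

```python
def smoothed_height(x, z, biome_at, biome_height, radius=4):
    """Average of the heights the (2*radius+1)^2 surrounding biomes
    would each generate at column (x, z); radius=4 gives the 9x9 window."""
    total = 0.0
    count = 0
    for dz in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            biome = biome_at(x + dx, z + dz)
            # Evaluate at the *queried* column, not the neighbour's position.
            total += biome_height(biome, x, z)
            count += 1
    return total / count
```

For example, with a flat height-0 biome on the left of x = 0 and a flat height-1 biome on the right, the averaged height ramps from 0 to 1 across 9 columns instead of jumping, which is exactly the border smoothing described in the quote.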

Note: I already created a distorted Voronoi function to get the current biome at a terrain point (following this guide), but I don’t fully understand what to do next, because I don’t understand this approach and I can’t see any code corresponding to that text.

But I don’t know where to start, nor how to solve this problem with an optimized algorithm. I also don’t know what to research, so any help is welcome.