Matrix math in cascade shadow mapping

I am implementing the cascade shadow mapping algorithm and am currently stuck on matrix transformations: my AABBs, when projected into light space, point in the direction opposite to the light:

AABBs in light projection and view space


I was following the logic described in the Oreon engine video on YouTube and the NVIDIA docs.

The algorithm in my understanding looks like this:

  1. "cut" the camera frustum into several slices
  2. calculate the coordinates of each frustum slice's corners in world space
  3. calculate the axis-aligned bounding box of each slice in world space (using the vertices from step 2)
  4. create an orthographic projection from each calculated AABB
  5. using the orthographic projections from step 4 and the light view matrix, render the shadow maps (as in: render the scene to a depth buffer for each of the projections)
  6. use the shadow maps to calculate the shadow component of each fragment's color, comparing fragmentPosition.z against the camera frustum's split distances to figure out which shadow map to sample
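For step 1, the hand-picked split fractions I use later (0.05, 0.2, 0.5) can also be computed with the "practical split scheme" described in the NVIDIA parallel-split shadow maps material, which blends a logarithmic and a uniform distribution of split distances. A minimal sketch — the function name and the `lambda` blend factor are my own:

```cpp
#include <cmath>
#include <vector>

// "Practical split scheme": split_i = lambda * log_i + (1 - lambda) * uniform_i,
// where log_i = zNear * (zFar / zNear)^(i / N) and uniform_i is linear in i.
std::vector<float> computeSplitDistances(float zNear, float zFar, int numSlices, float lambda)
{
    std::vector<float> splits(numSlices + 1);

    for (int i = 0; i <= numSlices; ++i)
    {
        float f = static_cast<float>(i) / numSlices;

        float logSplit = zNear * std::pow(zFar / zNear, f);
        float uniformSplit = zNear + (zFar - zNear) * f;

        splits[i] = lambda * logSplit + (1.0f - lambda) * uniformSplit;
    }

    return splits;
}
```

The returned values are view-space distances (from zNear to zFar, monotonically increasing), so they would still need to be normalized to [0, 1] fractions to be used like the hard-coded split list below.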

I am able to correctly figure out the camera frustum's vertices in world space:

camera frustum, sliced

The frustum extends further, but the camera clipping distance… well, clips the farther slices.

For this, I multiply a cube in normalized device coordinates by the inverse of the product of the camera projection and camera view matrices:

std::array<glm::vec3, 8> _cameraFrustumSliceCornerVertices{
    {
        { -1.0f, -1.0f, -1.0f }, { 1.0f, -1.0f, -1.0f }, { 1.0f, 1.0f, -1.0f }, { -1.0f, 1.0f, -1.0f },
        { -1.0f, -1.0f, 1.0f }, { 1.0f, -1.0f, 1.0f }, { 1.0f, 1.0f, 1.0f }, { -1.0f, 1.0f, 1.0f },
    }
};

I then multiply each vertex $p$ by the inverse of the product $P_{camera} \times V_{camera}$ and divide by the resulting $w$ component.

This gives me the vertices of the camera frustum in world space.

To generate slices, I tried applying the same logic, but using perspective projections with different near and far distances, with little luck.

I then used vector math instead: taking the entire camera frustum's vertices in world space, I calculated a vector along each edge of the frustum: $v_i = v_i^{far} - v_i^{near}$.

Then I simply scale each of these vectors by the depth of the NDC cube and by the corresponding split fraction, and add the result to the near-plane vertices of the entire camera frustum to get the planes of each slice: $v_i^{near} + v_i \cdot d \cdot s_j$, where $d = z_{far} - z_{near} = 2$ in normalized device coordinates and $s_j$ is the split fraction.

std::vector<float> splits{ { 0.0f, 0.05f, 0.2f, 0.5f, 1.0f } };

const float _depth = 2.0f; // 1.0f - (-1.0f); depth of the NDC cube, zFar - zNear

auto proj = glm::inverse(initialCameraProjection * initialCameraView);

std::array<glm::vec3, 8> _cameraFrustumSliceCornerVertices{
    {
        { -1.0f, -1.0f, -1.0f }, { 1.0f, -1.0f, -1.0f }, { 1.0f, 1.0f, -1.0f }, { -1.0f, 1.0f, -1.0f },
        { -1.0f, -1.0f, 1.0f }, { 1.0f, -1.0f, 1.0f }, { 1.0f, 1.0f, 1.0f }, { -1.0f, 1.0f, 1.0f },
    }
};

std::array<glm::vec3, 8> _totalFrustumVertices;

std::transform(
    _cameraFrustumSliceCornerVertices.begin(),
    _cameraFrustumSliceCornerVertices.end(),
    _totalFrustumVertices.begin(),
    [&](glm::vec3 p) {
        auto v = proj * glm::vec4(p, 1.0f);
        return glm::vec3(v) / v.w;
    }
);

std::array<glm::vec3, 4> _frustumVectors{
    {
        _totalFrustumVertices[4] - _totalFrustumVertices[0],
        _totalFrustumVertices[5] - _totalFrustumVertices[1],
        _totalFrustumVertices[6] - _totalFrustumVertices[2],
        _totalFrustumVertices[7] - _totalFrustumVertices[3],
    }
};

for (std::size_t i = 1; i < splits.size(); ++i)
{
    std::array<glm::vec3, 8> _frustumSliceVertices{
        {
            _totalFrustumVertices[0] + (_frustumVectors[0] * _depth * splits[i - 1]),
            _totalFrustumVertices[1] + (_frustumVectors[1] * _depth * splits[i - 1]),
            _totalFrustumVertices[2] + (_frustumVectors[2] * _depth * splits[i - 1]),
            _totalFrustumVertices[3] + (_frustumVectors[3] * _depth * splits[i - 1]),

            _totalFrustumVertices[0] + (_frustumVectors[0] * _depth * splits[i]),
            _totalFrustumVertices[1] + (_frustumVectors[1] * _depth * splits[i]),
            _totalFrustumVertices[2] + (_frustumVectors[2] * _depth * splits[i]),
            _totalFrustumVertices[3] + (_frustumVectors[3] * _depth * splits[i]),
        }
    };

    // render the thing
}

According to the algorithm, the next part is finding the axis-aligned bounding box (AABB) of each camera frustum slice and projecting it into light view space.

I am able to correctly calculate the AABB of each camera frustum slice in world space:

camera frustum slices' AABBs

This is a rather trivial algorithm that iterates over all the vertices from the previous step and finds the minimal and maximal x, y and z coordinates over the vertices of a camera frustum slice in world space.

float minX = 0.0f, maxX = 0.0f;
float minY = 0.0f, maxY = 0.0f;
float minZ = 0.0f, maxZ = 0.0f;

for (std::size_t i = 0; i < _frustumSliceVertices.size(); ++i)
{
    auto p = _frustumSliceVertices[i];

    if (i == 0)
    {
        minX = maxX = p.x;
        minY = maxY = p.y;
        minZ = maxZ = p.z;
    }
    else
    {
        minX = std::fmin(minX, p.x);
        minY = std::fmin(minY, p.y);
        minZ = std::fmin(minZ, p.z);

        maxX = std::fmax(maxX, p.x);
        maxY = std::fmax(maxY, p.y);
        maxZ = std::fmax(maxZ, p.z);
    }
}

auto _ortho = glm::ortho(minX, maxX, minY, maxY, minZ, maxZ);

std::array<glm::vec3, 8> _aabbVertices{
    {
        { minX, minY, minZ }, { maxX, minY, minZ }, { maxX, maxY, minZ }, { minX, maxY, minZ },
        { minX, minY, maxZ }, { maxX, minY, maxZ }, { maxX, maxY, maxZ }, { minX, maxY, maxZ },
    }
};

std::array<glm::vec3, 8> _frustumSliceAlignedAABBVertices;

std::transform(
    _aabbVertices.begin(),
    _aabbVertices.end(),
    _frustumSliceAlignedAABBVertices.begin(),
    [&](glm::vec3 p) {
        auto v = lightProjection * lightView * glm::vec4(p, 1.0f);
        return glm::vec3(v) / v.w;
    }
);

I then construct an orthographic projection from that data. As per the algorithm, these projections, one per camera frustum slice, will later be used to render the shadow maps, i.e. render to depth textures.

auto _ortho = glm::ortho(minX, maxX, minY, maxY, minZ, maxZ); 

To visualize these AABBs, I tried rendering the view cube, like with the camera frustum, but got some dubious results:

rendering view cube in orthographic projections

Both the position and the size of the AABBs were wrong.

I tried making the AABBs "uniform", e.g. left = -((maxX - minX) / 2) and right = +((maxX - minX) / 2), which only centered the AABBs around the origin (0, 0, 0):

const auto _width = (maxX - minX) / 2.0f;
const auto _height = (maxY - minY) / 2.0f;
const auto _depth = (maxZ - minZ) / 2.0f;

auto _ortho = glm::ortho(-_width, _width, -_height, _height, -_depth, _depth);

AABBs with unified params

I then used the min/max values of each coordinate instead of ±1 for the view-cube corners to get the correct results:

std::array<glm::vec3, 8> _aabbVertices{
    {
        { minX, minY, minZ }, { maxX, minY, minZ }, { maxX, maxY, minZ }, { minX, maxY, minZ },
        { minX, minY, maxZ }, { maxX, minY, maxZ }, { maxX, maxY, maxZ }, { minX, maxY, maxZ },
    }
};

camera frustum slices' AABBs

The last step of the algorithm, though, is not willing to cooperate: I thought that by multiplying each AABB vertex by the light's view matrix I would align the AABB with the light direction, but all I got was misaligned AABBs:

std::array<glm::vec3, 8> _frustumSliceAlignedAABBVertices;

std::transform(
    _aabbVertices.begin(),
    _aabbVertices.end(),
    _frustumSliceAlignedAABBVertices.begin(),
    [&](glm::vec3 p) {
        auto v = lightView * glm::vec4(p, 1.0f);
        return glm::vec3(v) / v.w;
    }
);

multiplying camera slices AABBs by light view matrix

Only when I multiply by both the light projection matrix and the light view matrix do I get something resembling alignment:

std::array<glm::vec3, 8> _frustumSliceAlignedAABBVertices;

std::transform(
    _aabbVertices.begin(),
    _aabbVertices.end(),
    _frustumSliceAlignedAABBVertices.begin(),
    [&](glm::vec3 p) {
        auto v = lightProjection * lightView * glm::vec4(p, 1.0f);
        return glm::vec3(v) / v.w;
    }
);

AABBs in light projection and view space

Ironically, the direction seems to be opposite to the light's direction.

Despite my light pointing at the origin (0, 0, 0), the AABBs seem to be projected in reverse order.

I am not entirely sure how to resolve this issue, or even why it is happening…
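For comparison, here is a sketch of the ordering I believe the docs intend: transform the slice corners into light *view* space first, take the min/max there, and only then build the orthographic extents from those light-space values. A hand-rolled look-at basis stands in for glm::lookAt / lightView so the sketch is self-contained; the light position and target in the usage are illustrative:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x }; }
static Vec3 normalize(Vec3 a) { float l = std::sqrt(dot(a, a)); return { a.x / l, a.y / l, a.z / l }; }

struct OrthoExtents { float left, right, bottom, top, zNear, zFar; };

// Min/max of the slice corners expressed in the light's own axes, suitable for
// glm::ortho(left, right, bottom, top, zNear, zFar).
OrthoExtents lightSpaceExtents(const std::array<Vec3, 8>& worldCorners, Vec3 lightPos, Vec3 target)
{
    // Same basis glm::lookAt builds: forward f, side s, up u.
    Vec3 f = normalize(sub(target, lightPos));
    Vec3 s = normalize(cross(f, Vec3{ 0.0f, 1.0f, 0.0f }));
    Vec3 u = cross(s, f);

    float minX = 0, maxX = 0, minY = 0, maxY = 0, minZ = 0, maxZ = 0;
    bool first = true;

    for (const auto& p : worldCorners)
    {
        Vec3 d = sub(p, lightPos);

        // View space looks down -Z, hence the negated forward component.
        float vx = dot(s, d), vy = dot(u, d), vz = -dot(f, d);

        if (first) { minX = maxX = vx; minY = maxY = vy; minZ = maxZ = vz; first = false; }

        minX = std::min(minX, vx); maxX = std::max(maxX, vx);
        minY = std::min(minY, vy); maxY = std::max(maxY, vy);
        minZ = std::min(minZ, vz); maxZ = std::max(maxZ, vz);
    }

    // Near/far are distances along -Z, so they come from the negated z range.
    return { minX, maxX, minY, maxY, -maxZ, -minZ };
}
```

With this ordering the ortho volume is expressed along the light's own axes, whereas passing a world-space minZ/maxZ straight into glm::ortho bears no relation to the light orientation, which may be why the result appears flipped.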

Spatial mapping with real-world texturing on HoloLens 2

For a project I’m trying to texture the spatial mapping of the HoloLens 2 with photographs.

I found a pretty good project which does exactly what I want; the problem is that it is a Unity 5.6 / HoloToolkit project, and I use Unity 2018.4 / MRTK 2.6.

So I’m trying to port the project.

This is the original work

And this is what I’ve done so far

For now, my problem is that the textures seem to merge each time I take a photo. I don’t know whether the problem comes from the shader, the code which passes the Texture2DArray to the shader, or the merge step when I take the photo.

Mathematica shorthand for mapping function over two lists simultaneously

I am still relatively new to Mathematica, so please bear with my question. I have the following statement that works perfectly well:

Table[findcvec[phif, phit, v1[[i]], v2[[i]], +1], {i, Length[v1]}] 

Now, I would like to make a Mathematica shorthand version of this statement using #, &, etc. But this attempt fails:

MapThread[findcvec[phif, phit, #1, #2, +1]] & {v1, v2} 

Could someone please tell me the correct way of doing this?

This might be a duplicate, but I couldn’t find any previous question that addresses this specifically. Thank you!

mapping a digraph across sectors of countries

Given a (9,9) matrix representing three countries (for example, the US, China, and Russia), each of which has three sectors (vertices/sectors {1, 2, 3} for the US, {4, 5, 6} for China, and {7, 8, 9} for Russia), I would like to create a map of the directed linkages between all 9 sectors on the actual world map.

SeedRandom[01];
g = {{1.09738, 0.0440055, 0.113012, 0.0436654, 0.0550311, 0.0365684, 0.0990232, 0.0550859, 0.0629618},
     {0.0850364, 1.05189, 0.0803699, 0.0999457, 0.0846171, 0.115742, 0.078153, 0.111992, 0.0828957},
     {0.0982489, 0.0712597, 1.07401, 0.0723417, 0.0431498, 0.0824737, 0.126777, 0.0569532, 0.0808742},
     {0.089248, 0.114673, 0.135009, 1.10033, 0.0743107, 0.107282, 0.133689, 0.0850109, 0.0467125},
     {0.0921911, 0.0582554, 0.0937256, 0.0535134, 1.10261, 0.0882558, 0.0366383, 0.154662, 0.0893078},
     {0.0835567, 0.0541454, 0.0971447, 0.0458107, 0.132431, 1.08961, 0.0726788, 0.108789, 0.118664},
     {0.0747554, 0.150188, 0.139565, 0.0936757, 0.132907, 0.140158, 1.05936, 0.0964287, 0.116732},
     {0.0667764, 0.0489254, 0.137437, 0.0962666, 0.0882702, 0.0704283, 0.0807027, 1.05768, 0.101375},
     {0.0824603, 0.11258, 0.135069, 0.110204, 0.102288, 0.103722, 0.0453945, 0.0473116, 1.10549}};
n = 9;
d = 0.12;
G = RandomGraph[{Round[n], Round[n*(n - 1)*d]}, DirectedEdges -> True];
Ga = AdjacencyMatrix[G]*g;
sa = SparseArray[Ga];
weightedG =
  Graph[sa["NonzeroPositions"], EdgeWeight -> sa["NonzeroValues"],
   DirectedEdges -> True, VertexCapacity -> {i_ :> i},
   VertexSize -> .3];
SetProperty[weightedG,
  VertexLabels -> {i_ :>
     Placed[PropertyValue[{weightedG, i}, VertexCapacity], Center]}]

What is the history of the offline networked mapping tool called Gametable?

Gametable was designed as a networked virtual tabletop for role-playing use. It was comparable to tools such as MapTool, Roll20, Battlegrounds, and others. Its popularity lay in its utter simplicity of use, virtually nonexistent learning curve, and a decent if simple tool set.

But after the early 2010s it fell out of popularity and practically disappeared from search results and reviews.


Thus, what is the history of the offline networked mapping tool called Gametable, as described in the Gametable tutorial link below?

https://www.roleplayingtips.com/articles/gametable_mapping_tutorial.html

Security loopholes while mapping .kube/config file on to docker as volume

I have a scenario where I have to install Kubernetes on a public cloud and access it via kubectl from a VM on my laptop.

Kubectl accesses .kube/config to connect to the K8s API server to perform the required operations.

There is an application running as a Docker container inside the VM that connects to K8s using the .kube/config that is mapped as a volume, i.e. -v $HOME/.kube:/home/test/.kube

Are there any security loopholes I should be aware of?

Cache Blocks Direct Mapping

The main memory address has 18 bits (7 for tag, 7 for line and 4 for word) and each word is 8 bits. I found that the main memory capacity is 256 KB, the total number of cache lines is 128, and the total number of cache words is 128 × 16 (16 words per block/line) = 2048 words. Then what will be the size of the cache in words? I am very confused by this: I can’t grasp the definition of "cache words". Can anyone tell me what cache words are? Thank you!
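The figures above can be checked mechanically from the field widths alone; a quick sketch (variable names are mine, assuming the stated 7/7/4 split and 8-bit words):

```cpp
#include <cassert>

// An 18-bit word address split as 7 tag / 7 line / 4 word bits, 8-bit words.
constexpr int tagBits = 7, lineBits = 7, wordBits = 4;
constexpr int addressBits = tagBits + lineBits + wordBits;   // 18-bit address
constexpr long mainMemoryWords = 1L << addressBits;          // 2^18 = 262144 addressable words
constexpr long mainMemoryBytes = mainMemoryWords;            // 8-bit words -> 262144 B = 256 KB
constexpr int cacheLines = 1 << lineBits;                    // 2^7 = 128 lines
constexpr int wordsPerLine = 1 << wordBits;                  // 2^4 = 16 words per line
constexpr int cacheDataWords = cacheLines * wordsPerLine;    // 128 * 16 = 2048 words of data
```

This only counts the data portion of the cache; tag storage would be extra.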