I am implementing the cascade shadow mapping algorithm and am currently stuck with matrix transformations: my AABBs, when projected into light space, point in the direction opposite to the light:

I was following the logic described in the Oreon engine video on YouTube and the NVIDIA docs.

The algorithm, as I understand it, looks like this:

1. "cut" camera frustum into several slices
2. calculate the coordinates of each frustum slice's corners in world space
3. calculate the axis-aligned bounding box of each slice in world space (using the vertices from step 2)
4. create an orthographic projection from the calculated AABBs
5. using the orthographic projections from step 4 and light view matrix, calculate the shadow maps (as in: render the scene to the depth buffer for each of the projections)
6. use the shadow maps to calculate the shadow component of each fragment's color, using fragmentPosition.z and comparing it against the camera frustum's slice boundaries to figure out which shadow map to sample
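For context on step 1, the split fractions themselves are commonly derived by blending uniform and logarithmic distributions over the camera's near/far range, as in the NVIDIA cascaded shadow maps write-up. A minimal sketch (the function name and the `lambda` blend factor are my own):

```cpp
#include <cmath>
#include <vector>

// "Practical" split scheme: blend a logarithmic and a uniform distribution
// of split distances between zNear and zFar. lambda = 1 gives purely
// logarithmic splits, lambda = 0 purely uniform ones.
std::vector<float> computeSplitDistances(float zNear, float zFar,
                                         int cascadeCount, float lambda = 0.75f)
{
    std::vector<float> splits(cascadeCount + 1);

    for (int i = 0; i <= cascadeCount; ++i)
    {
        float p = static_cast<float>(i) / cascadeCount;
        float logSplit = zNear * std::pow(zFar / zNear, p);
        float uniformSplit = zNear + (zFar - zNear) * p;
        splits[i] = lambda * logSplit + (1.0f - lambda) * uniformSplit;
    }

    return splits;
}
```

The hardcoded fractions { 0.0, 0.05, 0.2, 0.5, 1.0 } used below are a reasonable hand-tuned stand-in for the same idea.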

I am able to correctly figure out the camera frustum's vertices in world space:

The frustum extends further, but the camera clipping distance… well, clips the farther slices.

For this, I multiply the corners of a cube in normalized device coordinates by the inverse of the product of the camera projection and camera view matrices:

```cpp
std::array<glm::vec3, 8> _cameraFrustumSliceCornerVertices{
    {
        { -1.0f, -1.0f, -1.0f }, { 1.0f, -1.0f, -1.0f }, { 1.0f, 1.0f, -1.0f }, { -1.0f, 1.0f, -1.0f },
        { -1.0f, -1.0f, 1.0f }, { 1.0f, -1.0f, 1.0f }, { 1.0f, 1.0f, 1.0f }, { -1.0f, 1.0f, 1.0f },
    }
};
```

I then multiply each vertex $$p$$ by the inverse of the product $$P_{camera} \times V_{camera}$$ and divide by the resulting $$w$$ component (the perspective divide).

This gives me the vertices of the camera frustum in world space.

To generate the slices, I first tried applying the same logic but with perspective projections using different near and far distances, with little luck.

I then used vector math to calculate each camera frustum slice: taking the vertices of the entire camera frustum in world space, I calculate the vector along each edge of the frustum: $$v_i = v_i^{far} - v_i^{near}$$.

Then I simply scale these vectors by the corresponding slice fraction and add them to the near-plane vertices of the entire camera frustum: $$v_i^{near} + v_i \cdot d_i$$, where $$d_i$$ is the split fraction. Since each $$v_i$$ already spans the whole frustum edge, no extra length factor is needed; $$d_i = 1$$ lands exactly on the far plane.

```cpp
std::vector<float> splits{ { 0.0f, 0.05f, 0.2f, 0.5f, 1.0f } };

// maps normalized device coordinates back into world space
auto inverseViewProjection = glm::inverse(initialCameraProjection * initialCameraView);

// corners of the NDC cube
std::array<glm::vec3, 8> _cameraFrustumSliceCornerVertices{
    {
        { -1.0f, -1.0f, -1.0f }, { 1.0f, -1.0f, -1.0f }, { 1.0f, 1.0f, -1.0f }, { -1.0f, 1.0f, -1.0f },
        { -1.0f, -1.0f, 1.0f }, { 1.0f, -1.0f, 1.0f }, { 1.0f, 1.0f, 1.0f }, { -1.0f, 1.0f, 1.0f },
    }
};

std::array<glm::vec3, 8> _totalFrustumVertices;

std::transform(
    _cameraFrustumSliceCornerVertices.begin(),
    _cameraFrustumSliceCornerVertices.end(),
    _totalFrustumVertices.begin(),
    [&](glm::vec3 p) {
        auto v = inverseViewProjection * glm::vec4(p, 1.0f);
        return glm::vec3(v) / v.w; // perspective divide
    }
);

// world-space vectors from each near-plane corner to the matching far-plane corner
std::array<glm::vec3, 4> _frustumVectors{
    {
        _totalFrustumVertices[4] - _totalFrustumVertices[0],
        _totalFrustumVertices[5] - _totalFrustumVertices[1],
        _totalFrustumVertices[6] - _totalFrustumVertices[2],
        _totalFrustumVertices[7] - _totalFrustumVertices[3],
    }
};

for (std::size_t i = 1; i < splits.size(); ++i)
{
    // each edge vector already spans the full frustum, so scaling it by the
    // split fraction alone is enough (no extra NDC-depth factor)
    std::array<glm::vec3, 8> _frustumSliceVertices{
        {
            _totalFrustumVertices[0] + (_frustumVectors[0] * splits[i - 1]),
            _totalFrustumVertices[1] + (_frustumVectors[1] * splits[i - 1]),
            _totalFrustumVertices[2] + (_frustumVectors[2] * splits[i - 1]),
            _totalFrustumVertices[3] + (_frustumVectors[3] * splits[i - 1]),

            _totalFrustumVertices[0] + (_frustumVectors[0] * splits[i]),
            _totalFrustumVertices[1] + (_frustumVectors[1] * splits[i]),
            _totalFrustumVertices[2] + (_frustumVectors[2] * splits[i]),
            _totalFrustumVertices[3] + (_frustumVectors[3] * splits[i]),
        }
    };

    // render the thing
}
```

According to the algorithm, the next part is finding the axis-aligned bounding box (AABB) of each camera frustum slice and projecting it into light view space.

I am able to correctly calculate the AABB of each camera frustum slice in world space:

This is a rather trivial algorithm that iterates over all the vertices from the previous step and finds the minimal and maximal x, y and z coordinates across the vertices of a camera frustum slice in world space.

```cpp
float minX = 0.0f, maxX = 0.0f;
float minY = 0.0f, maxY = 0.0f;
float minZ = 0.0f, maxZ = 0.0f;

for (std::size_t i = 0; i < _frustumSliceVertices.size(); ++i)
{
    auto p = _frustumSliceVertices[i];

    if (i == 0)
    {
        minX = maxX = p.x;
        minY = maxY = p.y;
        minZ = maxZ = p.z;
    }
    else
    {
        minX = std::fmin(minX, p.x);
        minY = std::fmin(minY, p.y);
        minZ = std::fmin(minZ, p.z);

        maxX = std::fmax(maxX, p.x);
        maxY = std::fmax(maxY, p.y);
        maxZ = std::fmax(maxZ, p.z);
    }
}

auto _ortho = glm::ortho(minX, maxX, minY, maxY, minZ, maxZ);

std::array<glm::vec3, 8> _aabbVertices{
    {
        { minX, minY, minZ }, { maxX, minY, minZ }, { maxX, maxY, minZ }, { minX, maxY, minZ },
        { minX, minY, maxZ }, { maxX, minY, maxZ }, { maxX, maxY, maxZ }, { minX, maxY, maxZ },
    }
};

std::array<glm::vec3, 8> _frustumSliceAlignedAABBVertices;

std::transform(
    _aabbVertices.begin(),
    _aabbVertices.end(),
    _frustumSliceAlignedAABBVertices.begin(),
    [&](glm::vec3 p) {
        auto v = lightProjection * lightView * glm::vec4(p, 1.0f);
        return glm::vec3(v) / v.w;
    }
);
```

I then construct an orthographic projection from that data. As per the algorithm, these projections, one per camera frustum slice, will later be used to calculate the shadow maps, i.e. to render the scene to depth textures.

```cpp
auto _ortho = glm::ortho(minX, maxX, minY, maxY, minZ, maxZ);
```

To render these AABBs, I tried rendering the view cube, like with the camera frustum, but got some dubious results:

Both the position and the size of the AABBs were wrong.

I tried making the AABBs "uniform", i.e. left = -((maxX - minX) / 2) and right = +((maxX - minX) / 2), which only centered the AABBs around the same origin point (0, 0, 0):

```cpp
const auto _width = (maxX - minX) / 2.0f;
const auto _height = (maxY - minY) / 2.0f;
const auto _depth = (maxZ - minZ) / 2.0f;

auto _ortho = glm::ortho(-_width, _width, -_height, _height, -_depth, _depth);
```

I then used the min/max values of each corresponding coordinate instead of ±1 in the view cube to get the correct results:

```cpp
std::array<glm::vec3, 8> _aabbVertices{
    {
        { minX, minY, minZ }, { maxX, minY, minZ }, { maxX, maxY, minZ }, { minX, maxY, minZ },
        { minX, minY, maxZ }, { maxX, minY, maxZ }, { maxX, maxY, maxZ }, { minX, maxY, maxZ },
    }
};
```

The last step of the algorithm, though, is not willing to cooperate: I thought that by multiplying each of the AABB vertices by the light's view matrix I would align the AABBs with the light direction, but all I got was misaligned AABBs:

```cpp
std::array<glm::vec3, 8> _frustumSliceAlignedAABBVertices;

std::transform(
    _aabbVertices.begin(),
    _aabbVertices.end(),
    _frustumSliceAlignedAABBVertices.begin(),
    [&](glm::vec3 p) {
        auto v = lightView * glm::vec4(p, 1.0f);
        return glm::vec3(v) / v.w; // w stays 1 under a pure view transform
    }
);
```

Only when I multiply by both the light projection matrix and the light view matrix do I get something resembling alignment:

```cpp
std::array<glm::vec3, 8> _frustumSliceAlignedAABBVertices;

std::transform(
    _aabbVertices.begin(),
    _aabbVertices.end(),
    _frustumSliceAlignedAABBVertices.begin(),
    [&](glm::vec3 p) {
        auto v = lightProjection * lightView * glm::vec4(p, 1.0f);
        return glm::vec3(v) / v.w;
    }
);
```

Ironically, the direction seems to be opposite to the light's direction.

Despite my light being pointed at the origin (0, 0, 0), the AABBs seem to be projected in the reverse direction.

I am not entirely sure how to resolve this issue, or even why it is happening…