A few years ago (at least 10, I believe), I read a magazine article about a “revolutionary” camera that could take pictures from which you could later, during post-processing, choose the point you wanted to focus on.
I do not remember the name of that camera; I only recall that it was shaped like a cuboid (a rather long one) and (very vague memory) that it was black or red.
I could not find anything online, but I would be interested to see what became of it (and to read about the technology, if it was not snake oil).
I tried to implement the logarithmic depth buffer solution, but then realised that my camera is orthographic, so the math should be different from the math used for a perspective camera.
From Outerra’s article, the DirectX formula is z = log(C * w + 1) / log(C * Far + 1) * w.
Does anyone know how to modify it for use with an orthographic camera?
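For concreteness, here is the quoted formula as a small numeric helper (Python, purely to illustrate; `c` and `far` stand for the C and Far constants above, and `w` is assumed to be the clip-space w coordinate):

```python
import math

def logarithmic_depth(w, far, c=1.0):
    """DirectX-style formula from the Outerra article:
    z = log(C*w + 1) / log(C*Far + 1) * w.
    The trailing *w cancels the later perspective divide, so the
    depth actually stored (z/w) is the logarithmic term itself."""
    return math.log(c * w + 1.0) / math.log(c * far + 1.0) * w

# Note: with an orthographic projection, w is constant (1) for every
# vertex, which is why this formula cannot be used as-is there.
```

At w equal to Far the log ratio is 1, so the function returns Far, i.e. the remapped range still ends at the far plane.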
Take the following mockup as our example case (an asterisk marks a required field):
Field 1*: [ text input ]
Field 2 : [ text input ]
Field 3*: [ text input ]
We have noticed that when a screen reader (using default settings) first reads through a page like the one above, it reads all the text content correctly, but it does not announce whether a given field is required on this first read-through. When tabbing to the fields afterwards, it does state that a field is required on first focus. Is this acceptable from an accessibility standpoint?