The f-number is a ratio calculated purely from the focal length of the lens and the diameter of the aperture. Any light meter talks in terms of f-stop, shutter, and ISO.
Different glass, or even just a different number of elements, will change the number of photons that get through the lens; we compensate for ND filters by adjusting the f-stop. What am I missing? Why aren't T-stops part of exposure calculations?
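To make the distinction concrete, here is a minimal sketch of the relationship between the two numbers. The 85% transmittance figure is just an illustrative assumption, not a measurement of any particular lens:

```python
import math

def t_number(f_number, transmittance):
    """T-stop: the f-number corrected for the fraction of light the
    glass actually transmits (transmittance between 0 and 1)."""
    return f_number / math.sqrt(transmittance)

# An f/2.8 lens that transmits ~85% of the light behaves like roughly T3.0:
print(round(t_number(2.8, 0.85), 2))  # -> 3.04
```

The square root appears because exposure scales with the area of the aperture, while the f-number is a ratio of lengths.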
Using a long (10 second) exposure I found I was getting an odd “reflection” in the picture, where the brightest object leaves a residue in the opposite part of the frame.
You can see it here:
I was still getting it in 4/5 second exposures as well. NB: using a Canon 550D with an 11-16mm f/2.8 Tokina lens, at 16mm.
I assume it is something physical, like a reflection off an internal mirror or a lens element, but does anyone know for sure?
I am photographing a scene with white and black elements in it. Starting at the highest f-stop, I decrease the f-stop one stop and decrease the exposure time by a factor of 2, take a picture, and keep doing this for all the f-stops on the lens. My expectation is that raw counts should stay the same inside a white region or a black region since halving the exposure time compensates for decreasing the f-stop. But when I select a white region and average its pixel raw counts for each image, there is variability between the images (the standard deviation of the raw counts is ~5% of the mean). Same thing if I select and average a black region. I am not changing anything else (illumination, camera position). What could be causing this variation: noise, or something more systematic?
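The expectation above rests on exposure being proportional to t/N². A quick sanity check of that reciprocity, using f/8 and a one-stop-wider aperture as example values:

```python
def relative_exposure(shutter_s, f_number):
    # Light reaching the sensor is proportional to time / (f-number squared)
    return shutter_s / f_number**2

base = relative_exposure(1/30, 8)            # 1/30 s at f/8
wider = relative_exposure(1/60, 8 / 2**0.5)  # half the time, one stop wider
assert abs(base - wider) < 1e-12             # identical exposure
```

Note that marked f-stops are rounded values (the stop below f/8 is engraved "5.6" but the exact value is 8/√2 ≈ 5.66), and real aperture mechanisms don't land perfectly on the nominal diameter, so some of the variation you see could be systematic rather than pure noise.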
I recently got a new off-camera flash (Nikon SB-700), and I’m having trouble thinking about the variables that go into proper exposure.
For example, without flash, I have a little mental decision tree that went something like this:
- If shooting very long exposure then use manual mode with camera on tripod. Choose aperture to suit desired DOF and/or choose shutter speed to suit desired exposure time. Try to use ISO 640 or lower. Do not use exposure compensation (because in manual mode it’s pointless).
- Else if shooting fast moving subject, use shutter priority and ISO auto. Tweak exposure compensation to prevent blown highlights or blocked out shadows.
- Else use aperture priority, and choose suitable DOF. Make sure that shutter speed is no slower than 1 / focal length. Compensate for slow shutter speeds by 1) Raising the ISO, or 2) Using a tripod, or 3) Bracing the camera or yourself against something. Tweak exposure compensation to prevent blown highlights or blocked out shadows.
For an amateur like me, the above algorithm covers just about everything I do. I could probably even make a flowchart out of it.
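The decision tree above really is simple enough to write down as code. A sketch (the function name, inputs, and return fields are my own labels, not camera terminology):

```python
def pick_settings(long_exposure, fast_subject, focal_length_mm):
    """Sketch of the no-flash decision tree described above."""
    if long_exposure:
        # Manual mode on a tripod; exposure compensation is moot in M
        return {"mode": "M", "support": "tripod", "iso": "<= 640"}
    if fast_subject:
        # Shutter priority, auto ISO, tweak exposure compensation
        return {"mode": "S", "iso": "auto"}
    # Default: aperture priority, shutter no slower than 1/focal length
    return {"mode": "A", "min_shutter": f"1/{focal_length_mm} s"}

print(pick_settings(False, False, 50))  # aperture priority, 1/50 s floor
```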
Now that I am trying to learn about flash photography, things are suddenly very, very complex, and I feel lost.
My question is: is there a similar mental flowchart or algorithm that I can use as a guide for flash?
I'm looking into a first film camera, and I've always heard good things about Leica M cameras. I am familiar with the Nikon FM and its successors, but those cameras all contain an exposure meter. Has Nikon made a camera like the FM that doesn't include one?
I'm not sure why, but I'm sometimes getting images like the one below while doing long exposures with a 15-stop "big stopper" ND filter.
I had this problem today for the first time, and not with all pictures. What could it be?
Note: I’m using a full-frame mirrorless camera.
Hi there. I'm getting lens reflections when I take a long exposure with a filter. I'm assuming it's because light gets in through the gap between the lens and the filter. See the images I've attached. Is there a way I can stop this from happening? Thanks, Phil
I like using Ilford film for street photography, and I’m starting now to use it for long exposure (mostly with large format).
Ideally, I would like to obtain fairly flat images (for post-processing in photoshop). The issue is that contrast increases with development time.
I'd like to hear from those who have used Ilford before about their recipes for reducing contrast (development time, speed rating, etc.).
To add more information, lighting conditions are usually overcast/cloudy days, mid afternoon.
I’ve narrowed my choice down to (CPU clock speed in parens):
- HTC One A9 (10.8GHz total)
- Samsung S6 (14.4GHz total)
- LG G4 (9.2GHz total)
- Huawei P9 (17.2GHz total)
- Huawei Nexus 6P (14.2GHz total)
My primary concern is that when I use an exposure-bracketing app such as Camera FV-5 and shoot RAW, it will take too long to store each exposure, and the clouds will have moved too much in the meantime.
To avoid that problem, I’m thinking I should pick the one with the highest CPU clock speed. Or maybe I should pick the one with the fewest megapixels?
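To put rough numbers on the megapixel idea: an uncompressed RAW's size scales with pixel count, so a back-of-envelope estimate of the per-frame save time looks like this. Both the bit depth and the 30 MB/s write speed are assumptions, not specs for any of these phones, and real DNGs are often 10/12-bit and compressed, so treat this as an upper bound:

```python
def raw_save_time_s(megapixels, bits_per_pixel=16, write_mb_per_s=30.0):
    """Back-of-envelope: uncompressed RAW size divided by write speed."""
    size_mb = megapixels * bits_per_pixel / 8  # 1 MP at 16 bits = 2 MB
    return size_mb / write_mb_per_s

# A 16 MP frame at these assumptions is ~32 MB, or about 1.1 s to store:
print(round(raw_save_time_s(16), 2))
```

On this model, halving the megapixels halves the save time, while CPU clock speed barely enters: storage throughput, not the CPU, is usually the bottleneck for writing RAW.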
Note that, eyeing gsmarena.com sample low-light photos, I prefer the two Huawei phones to the other three.
If all four phones would take a really long time (like 5 seconds) per RAW exposure, then maybe I'll drop the RAW requirement and look for phones that support a wider bracketing range. For example, my current Samsung Galaxy J7 (2017) only covers -2 EV to +2 EV, which is too little. But that's going to be another question.
So I have .DNG files that I have taken from an iPhone, and I'm trying to figure out how to stack them and then output the stacked photo as a .DNG file as well (or any RAW format, tbh).
I know how to code fairly well, so I am able to stack them using Python and the rawpy module. The problem is that rawpy currently has no way to output RAW files, so when I stack them I can output as JPG, but that's not what I want.
Going through astrophotography forums, I've seen threads saying that Deep Sky Stacker can be used to read and stack RAW files, but I couldn't find any guide on how to export the stacked image as a RAW file.
Does DSS output stacked photos as RAW? (The format doesn't matter, but DNG would be preferred.) Is there a way to do this through Lightroom, maybe? Is there any way to do it programmatically? I'm fairly new to all this, so any help would be appreciated.
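A minimal sketch of the stacking step described above, assuming the frames load as same-shape uint16 arrays. The rawpy calls in the comments are the usual read path; since rawpy has no RAW writer, a 16-bit TIFF of the raw mosaic is one possible fallback, not the DNG output asked for:

```python
import numpy as np

def stack_mean(frames):
    """Average same-shape uint16 Bayer frames in float to avoid
    overflow, then return a uint16 result suitable for writing out."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for f in frames:
        acc += f
    acc /= len(frames)
    return np.clip(np.rint(acc), 0, 65535).astype(np.uint16)

# With rawpy (reading works; writing RAW does not):
#   import rawpy
#   frames = [rawpy.imread(p).raw_image_visible.copy() for p in paths]
#   stacked = stack_mean(frames)
# One fallback for output, via the tifffile package:
#   import tifffile
#   tifffile.imwrite("stacked.tiff", stacked)
```

Averaging in float64 matters: summing many uint16 frames directly would wrap around at 65535 and silently corrupt the highlights.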