A camera’s image sensor has one job – to record light. However, sensors generally can only capture a limited range of light from shadows to highlights. When the actual range exceeds the sensor’s ability, that’s “high dynamic range” or HDR.
Here are two recent examples where the range of light exceeded my camera sensor’s ability. The first is a sunset. No surprise – the highlights are super bright. The second example is less obvious – the surface of a lake reflects blue sky in some areas and elsewhere the light simply falls off to black.
The solution is the same: capture multiple exposures and combine them in post-processing. Many cameras offer this processing as a built-in option; even my smartphone camera includes the feature. The results, however, may be disappointing. My own experience with in-camera HDR processing is 50/50 at best, and the end result is disappointing often enough that I routinely don’t trust the camera to do it. Instead, I do HDR post-processing using software on a desktop computer.
This technique generally requires that the camera not move between the separate exposures, so the composition of the captures is exactly the same. If the camera moves slightly, that is commonly not a problem because the exposures can be aligned during post.
This technique doesn’t work with video. When shooting video, the camera angle usually changes during the shoot; shooting the scene a second time will result in a different composition, and the two captures will never align. For video, the solution to HDR is different – capture the shot just once, but use a special low-contrast camera mode, often referred to as a log profile (DJI calls theirs D-Log). Straight out of the camera, that footage looks truly awful. It must be post-processed, expanding the contrast range to something that appears correct.
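The idea of expanding a flat log capture can be illustrated with a minimal Python sketch. This is a simple linear stretch; real log-to-display conversions use a tone curve or LUT, and the black/white points chosen here are arbitrary:

```python
import numpy as np

def expand_log_capture(img, black, white):
    """Linearly stretch a flat, low-contrast capture so that `black`
    maps to 0.0 and `white` maps to 1.0.  `img` is a float array
    with values in [0, 1]."""
    return np.clip((img - black) / (white - black), 0.0, 1.0)

# A flat log frame occupying only [0.2, 0.8] expands to full range
flat = np.array([0.2, 0.5, 0.8])
expanded = expand_log_capture(flat, 0.2, 0.8)  # ≈ [0.0, 0.5, 1.0]
```

A proper log conversion is nonlinear, but the principle is the same: the camera records a compressed range, and post-processing maps it back out to normal contrast.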
Some cameras today use memory cards that did not exist ten years ago. The last time I wrote about memory cards was 2014, so it is time to again survey the state of memory cards.
While many online comments assume Compact Flash (CF) memory cards are antiquated simply because they are larger than SD cards, that’s not the reason. CF might be considered antiquated because of its limited data-transfer speed – how fast data can be written to the card. CF cards, like older SD cards (UHS-I), may be “slow” compared to newer card technologies.
For the past ten years, SD (secure digital) cards have dominated the market for cameras and other electronics. Unfortunately, labelling on SD cards can be quite cryptic. A single card may state: 250MB/s, UHS-II, U3, Class 10, V60.
“C” is the original speed class: C2 (2 MB/sec), C4 (4 MB/sec), C6 (6 MB/sec), and C10 (10 MB/sec).
“U” is the UHS speed class: U1 (10 MB/sec) and U3 (30 MB/sec).
“V” is the video speed class: V6, V10, V30, V60 and V90, where the number is the minimum write speed in MB/sec.
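As a quick decoder for those cryptic markings: each class guarantees a minimum sequential write speed. A small Python sketch, with the table reflecting the SD Association’s published minimums:

```python
# Minimum sequential write speeds (MB/s) implied by each SD label
# marking, per the SD Association's speed-class definitions.
SPEED_CLASS_MB_S = {
    "C2": 2, "C4": 4, "C6": 6, "C10": 10,   # original speed class
    "U1": 10, "U3": 30,                      # UHS speed class
    "V6": 6, "V10": 10, "V30": 30,
    "V60": 60, "V90": 90,                    # video speed class
}

def min_write_speed(markings):
    """Given the markings printed on a card, return the best
    guaranteed minimum sequential write speed in MB/s."""
    return max(SPEED_CLASS_MB_S.get(m, 0) for m in markings)
```

So a card stamped C10, U3, and V60 guarantees 60 MB/sec sustained writes – the V60 marking dominates the older, slower classes.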
Memory cards are a form of NVRAM (Non-Volatile Random Access Memory). That implies two things: when the card is removed from electrical power, the stored data does not disappear; and that data can be accessed randomly – reading and writing is not limited to serial or linear order.
Faster is better … maybe
There are two reasons why faster can matter, but possibly neither is important to you.
Capturing video
When capturing video, the data rate out to your memory card will vary depending upon the codec and the configurable parameters available with that codec. Let’s vaguely consider two examples, assuming Ultra-High Definition resolution (a.k.a. UHD or 4K) at 30 frames per second:
H.264 is maybe 4 MB/sec (32 Mb/s) write to your memory card
Apple Pro Res 422 can be more than 60 MB/sec (480 Mb/sec) write to your memory card
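The arithmetic behind those two numbers is just bits-to-bytes conversion (8 bits per byte); a small sketch using the bitrates above:

```python
# Rough storage math for video capture. Camera bitrates are quoted
# in megabits per second (Mb/s); card speeds in megabytes (MB/s).
def mbps_to_MBps(mbps):
    return mbps / 8  # 8 bits per byte

def storage_MB(mbps, seconds):
    """Approximate MB written to the card for a clip of the
    given duration."""
    return mbps_to_MBps(mbps) * seconds

# H.264 at 32 Mb/s vs ProRes 422 at 480 Mb/s, one minute each
h264_MB = storage_MB(32, 60)     # 240 MB
prores_MB = storage_MB(480, 60)  # 3600 MB
```

One minute of that ProRes stream writes fifteen times the data of the H.264 stream – which is why the codec choice, not just resolution, dictates the card you need.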
Capturing bursts of high-resolution photos
If a camera is going to produce RAW images of file size 30 MB each and you hold down the shutter release, capturing ten frames per second, that’s 300 MB/sec. The camera buffers the images internally until they can be saved to the card. The question is then: how much time before that writing is complete and you can press the shutter release again?
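Assuming a hypothetical card write speed, that buffer math can be sketched as follows (all numbers illustrative):

```python
# Sketch of burst-shooting buffer math: 30 MB RAW files at 10 fps
# against a hypothetical 150 MB/s card write speed.
def buffer_backlog_MB(file_MB, fps, card_MBps, burst_seconds):
    """MB still waiting in the buffer when you release the shutter."""
    produced = file_MB * fps * burst_seconds
    written = card_MBps * burst_seconds
    return max(0.0, produced - written)

def drain_seconds(backlog_MB, card_MBps):
    """How long until the card catches up and the camera is ready."""
    return backlog_MB / card_MBps

backlog = buffer_backlog_MB(30, 10, 150, 2.0)  # 2-second burst
wait = drain_seconds(backlog, 150)
```

With these numbers, a two-second burst leaves 300 MB in the buffer and a two-second wait before the camera is fully ready again; a faster card shrinks that wait directly.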
Under the hood
The most important difference between memory card technologies is what you can’t see.
The foundation of SDXC is the UHS bus interface.
The foundation of CF is Parallel ATA (PATA) bus interface.
The foundation of CFast is SATA III bus interface.
The foundation of XQD is PCIe.
The foundation of CFexpress is PCIe.
Next Generation is here
CFast is quickly fading away in our rear-view mirror. Some contemporary cameras do still employ these cards, including Blackmagic URSA and the Canon EOS C700.
Second generation XQD 2.0 debuted in 2012. Jointly developed by SanDisk, Sony and Nikon, XQD apparently defeated CFast but has not gained wide adoption. While XQD has been employed in a handful of Nikon cameras, it surprisingly has not appeared in Sony cameras. Perhaps the only non-Nikon camera to use XQD was the XF IQ4 by Phase One.
CFexpress was developed by a broad consortium of companies and, unlike XQD, does not incur licensing fees paid to Sony. Second generation CFexpress type B has the same physical size as XQD but can transfer data faster. Cameras currently supporting CFexpress cards include Canon EOS R5, Nikon D6, Nikon Z9 and Sony α7S III.
XQD and CFexpress can support 6K video and 8K video recording. CFexpress and XQD share the same physical size and durable packaging. Some Nikon Z-series cameras support either in the same card slot.
Consumer cameras will likely continue to use SDHC/SDXC/SDUC cards for several reasons.
Average consumers do not require the durability/ruggedness of XQD and CFexpress.
Average consumers are not shooting 6K or 8K video.
SD UHS-II cards are far less expensive than XQD and CFexpress cards.
The very brief list
SDHC (SD High Capacity): between 4 and 32 GB;
SDXC (SD eXtended Capacity): up to 2 TB;
SDUC (SD Ultra Capacity): up to 128 TB.
Data Speed
SDHC/SDXC/SDUC UHS-I: 104 MB/sec
SDHC/SDXC/SDUC UHS-II: 312 MB/sec
SDHC/SDXC/SDUC UHS-III: 624 MB/sec (The only product I can find is Sony SF-G Series Tough SDXC, $188)
CF (Compact Flash): up to 155 MB/sec
CFast: up to 600 MB/sec
XQD: up to 1000 MB/sec
CFexpress type A: up to 1000 MB/sec
CFexpress type B: up to 2000 MB/sec (To date, the fastest card has max write 1600 MB/sec and max read 1700 MB/sec)
CFexpress type C: up to 4000 MB/sec
Physical size
SD card is 32.0 × 24.0 × 2.1 mm
XQD is 38.5 × 29.8 × 3.8 mm
CFexpress Type A is 20 × 28 × 2.8 mm
CFexpress Type B is the same as XQD
CFexpress Type C is 54 × 74 × 4.8 mm
Card Readers
Card readers that support both XQD and CFexpress are very rare. I found one that cost $150.
CFexpress type-A and CFexpress type-B are physically different. Card readers likely support one of these, not both.
Some card readers have multiple slots to accept different card formats. Such readers may only recognize one card at a time; if you insert two cards at the same time, it may only recognize the first card inserted.
To mention a few
The top two brands I have trusted are Lexar and SanDisk. Second tier: Transcend. Third tier: Kingston. While PNY probably deserves a spot in the top five, I’ve never actually owned a PNY card.
In 2017, Micron sold the Lexar brand. And, according to multiple reports online, a new brand, ProGrade Digital, was founded by some of the old Lexar leadership team.
With all digital cameras, my general practice is to capture RAW images instead of JPEG. Particularly when photographing with the DJI Mavic 3 aerial drone, I not only capture RAW but also frequently utilize exposure bracketing and HDR post-processing.
The image shown here is the result of post-processing with Adobe Photoshop.
Having used the original DJI Mavic Pro, the Mavic 2 Pro, and the Mavic 3, I have found that all exhibit similar difficulty holding fine details in the highlights. This commonly occurs with architectural details under full sun; highlight details are easily lost. My solution is to use exposure bracketing and HDR post-processing; the original capture includes the best exposure plus two others, one a bit brighter and one a bit darker. For example, with the photo here, I expected in advance that the highlights were at risk of being lost; in retrospect, that was true.
The original three exposures are shown below. The best exposure is in the middle. As has been typical of Mavic 1, 2, and 3, architectural details in the white buildings have not been fully captured. The second problem is that the green trees are too dark. That second problem can be remedied in post-processing without much difficulty. However, if details in the highlights are blown out, recovery can be difficult or impossible.
Initially, I perform basic adjustments in Adobe Lightroom and then open all three using “Open as layers in Photoshop”. That opens the three separate files as ProPhoto RGB (16-bit color depth) with the adjustments made in Lightroom. Once in Photoshop, select all three layers and choose “Auto-align layers” in case the drone moved slightly between exposures. I move the best exposure to the bottom layer. From the other two exposures, I select specific parts of the image and overlay them on the bottom layer, effectively replacing problem areas.
Because the darkest exposure has retained all details in the highlights, I select the brightest areas from that exposure. Typically, this can be accomplished with Photoshop’s built-in “Color Range” selection. Once that selection is made, I often need to tweak it a bit, manually deselecting areas I don’t want included. Then I feather the selection and convert it to a layer mask.
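For the curious, the effect of a feathered luminosity selection can be sketched in Python with numpy. This is a rough stand-in for the Photoshop steps above, and the threshold/feather values are arbitrary:

```python
import numpy as np

def luminosity_mask_blend(base, dark, threshold=0.8, feather=0.1):
    """Blend the darker exposure into the base wherever the base is
    bright -- a rough stand-in for a feathered Color Range selection
    turned into a layer mask.  Images are float arrays in [0, 1]
    with shape (H, W, 3)."""
    # Per-pixel luminance of the base exposure (Rec. 709 weights)
    lum = base @ np.array([0.2126, 0.7152, 0.0722])
    # Soft-edged mask: 0 below the threshold, ramping up over `feather`
    mask = np.clip((lum - threshold) / feather, 0.0, 1.0)[..., None]
    return base * (1.0 - mask) + dark * mask
```

Blown-out pixels take their values entirely from the darker exposure, midtones are left untouched, and the feather prevents a hard seam between the two.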
Using the brightest exposure, I similarly select the green trees. That proved more difficult, and I spent much time tweaking the selection. This selection is also converted to a layer mask, replacing the overly dark trees in the base layer with a brighter version. Of course, you might use tools such as brightness or a tone curve to lighten the trees in the base image, but the underexposed trees are more prone to luminance noise. Leveraging the trees from the brighter exposure avoids that noise, though it is a bit more work.
There are alternative methods. I sometimes use Raya Pro by Jimmy McIntyre.
The final image is a composite assembled from three separate exposures of the same scene. I save this layered file as TIFF, but you can also save it as PSD (Photoshop format). I may merge all layers and export a JPEG file, but I keep the layered file. Commonly, I do return to this file and make further adjustments. For example, I may decide later that one or more of the exposures has noticeable noise or is not sufficiently sharp. The layered file allows me to make adjustments to the individual exposures.
The standard hand-held remote controller for several DJI aerial camera drones is the RC-N1. (These drones include the DJI Mini 2, Mini 3, Mavic 3, Mavic Air 2, and Air 2S.) While this controller includes a small USB cable that hides when not in use, the cable fits any phone with a USB-C port – but commonly cannot be attached if the phone has a protective case.
Here are two products that solve that problem. One is an adapter and the other is a replacement cable.
I keep the adapter in my drone carry bag for when I might need it. As it is quick and easy to attach, there is no need to leave it attached at all times.
The replacement cable is a bit thicker and bulkier than the standard DJI cable; it does not store nicely in the folded controller; see the photo here. You can decide for yourself if this is acceptable.
This photo was carefully planned, for the time of year (trees are in bloom), the location, and a somewhat unusual downward angle. The human experience here (Boston Public Garden) includes sky and nearby skyscrapers. I chose to eliminate the sky and skyscrapers through use of a high camera position looking down. However, elevating the camera can be a difficult problem if there is nothing to stand upon.
One of my favorite photographic tools is a telescoping pole with a camera mount at the top. With a Wi-Fi-equipped camera, the camera can be raised up to 20 feet and operated from a mobile app on a smartphone.
A telescoping pole is often the best choice for a camera height of ten to twenty feet. To photograph from a height of forty feet or two hundred feet, I can use a small aerial drone. While a drone can be used at altitudes of fifteen or twenty feet, that could readily be a distraction and a nuisance to people who are trying to enjoy the park.
On multiple occasions my photographic intentions have been thwarted by the presence of utility wires strung upon poles. While I could have flown an aerial drone above the wires, I instead chose to use a telescopic pole and place the camera twelve to eighteen inches below the wires. Personally, I don’t want to fly a drone that close to wires. Unlike a drone camera, a pole-mounted camera can’t move suddenly and potentially collide with wires.
For comparison’s sake, I shot the same scene with the camera at eye-level. The location I chose for my photo was occupied by a nine-foot-tall shrub. The pole-mounted camera enabled shooting over the top of this shrub in the foreground.
Every year, I see some images shared online that viewers believe to be real but are digital creations that are not real. In many cases, the digital artist wasn’t trying to fool anyone but the image is shared without stating that it is digital art.
A friend showed me a “photo” that impressed him … reported to be a blue whale passing under a cable-stayed bridge. As the length of the whale was similar to the length of the bridge, I did not believe it and suggested this was not a real photo. My friend seemed offended and asked, “why would you question this photo?” Even the largest whale on earth simply isn’t that big. Looking up details online later, I found that an adult blue whale may grow to a length of 100 feet. The bridge in the photo is the Samuel De Champlain Bridge, and the section of the bridge in the image amounts to a length of approximately 1800 feet.
On several occasions, friends have shared photos online of a bright red owl, sometimes identified as a Madagascar Red Owl. Commonly, people believe they are sharing a real “photo” and are stunned by the beauty of the bird. The immediate problem is that owls are birds of prey and must not be highly visible to their prey; an owl should blend into its environment. To this point, at least seven years ago, I modified one of my own images and declared it to be an Aquitane Owl, with a blatant caption explaining that the coloring isn’t real and should never be misrepresented as real.
Some tropical birds are brightly colored; as a general rule, owls are not.
I have seen a few images that raised doubts, but a little research told me that the colors are real, though perhaps digitally amplified. For example, a black leopard with distinctive spots (not entirely black). And then there is a brown zorse (zebra-horse hybrid) – apparently completely real.
Although the Mavic 3 includes some groundbreaking new features, many reviewers will render their opinions about such things and I will not do so here. I am only analyzing photo quality from the Mavic 3 in comparison to its predecessor, the Mavic 2 Pro.
Mavic 3 includes two cameras. I am comparing the main camera to the camera of Mavic 2 Pro. The Mavic 3 main camera has a fixed-focal-length lens, 4/3 image sensor, and variable aperture.
The Mavic 3 supports capturing photos in either JPEG format or JPEG & RAW. While I almost always capture photos in RAW format and do not need a JPEG, the initial release of Mavic 3 will always save a JPEG. That could possibly change in a future firmware update.
See the end of this post for a link to my 2018 comparison of Mavic 2 Pro image quality, compared to original Mavic Pro.
The main camera of Mavic 3 uses a 4/3 image sensor; this has implications.
The image rectangle has an aspect ratio of 4:3, which is the same as the Mavic 2 Zoom but different than the Mavic 2 Pro and Mavic Air. For me personally, this implies that I must crop each image and discard some pixels to obtain a final image with a 3:2 aspect ratio.
Four-thirds and Micro Four Thirds (MFT) are established standards. The diagonal measure of a 4/3 sensor can vary but is typically around 22mm. Compare this to Mavic 2 Pro and Mavic Air 2S, which each have image sensors with diagonal measure around 16 mm.
A larger sensor can allow for either more pixels or larger pixels. The Mavic 3 pixel resolution is not significantly different from Mavic 2 Pro; likely the individual photosites (pixels) are larger. Potentially that might translate to a better ability to gather light, improving the signal-to-noise ratio and reducing noise. But that is theoretical. As the old saying goes, the proof is in the pudding.
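Back-of-the-envelope arithmetic supports the larger-pixels guess. This sketch assumes the nominal active-area widths of Four Thirds (17.3 mm) and 1-inch (13.2 mm) sensors; actual active areas may differ slightly:

```python
def pixel_pitch_um(sensor_width_mm, horizontal_pixels):
    """Approximate pixel pitch in micrometres, ignoring any
    inactive border around the sensor's active area."""
    return sensor_width_mm / horizontal_pixels * 1000

# Nominal sensor widths: 17.3 mm for Four Thirds (Mavic 3),
# 13.2 mm for the 1-inch sensor in Mavic 2 Pro / Air 2S.
mavic3_pitch = pixel_pitch_um(17.3, 5280)   # ≈ 3.3 µm
mavic2_pitch = pixel_pitch_um(13.2, 5464)   # ≈ 2.4 µm
```

By this estimate, each Mavic 3 photosite is roughly a third wider than a Mavic 2 Pro photosite, hence nearly double the light-gathering area per pixel.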
Some online articles suggest that the larger image sensor “gives Mavic 3 higher resolution and dynamic range,” but higher resolution is a dubious claim and higher dynamic range is theoretical.
DJI drones have historically employed Sony Exmor image sensors; DJI/Hasselblad cameras are no exception. I must guess that the Mavic 3 is using the Sony IMX472-AAJK, but I have not confirmed this. That sensor can capture all 20 megapixels at 120 frames per second. Notably, this sensor uses “stacked CMOS” technology and is the first stacked CMOS sensor in the 4/3 size. This sensor measures 21.77 mm diagonally.
The Mavic 3 user guide (available online) includes this disturbing note: “Before shooting important photos or videos, shoot a few images to test the camera is operating correctly.” I shudder to imagine what might have happened during initial product testing to warrant such a warning.
Pixel Resolution
If you want a final image to have a 3:2 aspect ratio, then any 4:3 image must be cropped, and that includes Mavic 3. Technically, you end up with fewer pixels than Mavic 2 Pro or Mavic Air 2S.
Mavic Air 2S @ 3:2 aspect ……………… 5472×3648 = 19.9 million pixels
Mavic 2 @ 3:2 aspect ……………………… 5464×3640 = 19.88 million pixels
Mavic 2 @ 4:3 aspect (crop from 3:2)… 4852×3640
Mavic 2 @ 16:9 aspect (crop)…………… 5464×3070
Mavic 3 @ 4:3 aspect …………………….. 5280×3956 = 20.88 million pixels
Mavic 3 @ 3:2 aspect (crop from 4:3)… 5280×3520 = 18.58 million pixels
Mavic 3 @ 16:9 aspect (crop)…………… 5280×2970
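The crop numbers above follow from simple proportion arithmetic; a small sketch:

```python
def crop_to_aspect(width, height, aw, ah):
    """Largest crop of a (width x height) pixel image having
    aspect ratio aw:ah. Returns the cropped (width, height)."""
    if width * ah <= height * aw:
        # Image is taller than the target aspect: keep full width
        return width, (width * ah) // aw
    # Image is wider than the target aspect: keep full height
    return (height * aw) // ah, height

# Mavic 3's native 5280x3956 cropped to 3:2 keeps the full width
mavic3_32 = crop_to_aspect(5280, 3956, 3, 2)  # (5280, 3520)
```

Manufacturers may round crop dimensions to slightly different values (as in the 16:9 rows above), but the principle is the same: one dimension is kept and the other is cut to fit.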
Color
Opening RAW images in Adobe Lightroom, the color is a bit green. That’s correctable but really annoying; I’m guessing this is because Lightroom/Photoshop/Camera Raw do not yet include a camera profile for the Mavic 3 (Hasselblad L2D-20c).
Looking at the JPEGs, the color looks good – not vibrant, but good.
Sharpness
Comparing images from Mavic 3 and Mavic 2 Pro at apertures f/3.5 and f/4.0, the two are equally sharp at the center of the lens. However, away from center, toward the edges of the image, Mavic 3 exhibits improved sharpness over Mavic 2 Pro.
Image noise
Considering all ISOs from 100 through 3200, Mavic 3 shows less luminance noise than Mavic 2 Pro. However, at any ISO, low-light situations can result in considerable chroma noise in both shadows and midtones. It is worst at ISO 800, 1600, and 3200. While it can usually be mitigated using noise reduction in post-processing, a 4/3 image sensor should not exhibit this problem.
As the camera saves both RAW and JPEG, I looked at the JPEGs. Luminance noise is reasonably mitigated through ISO 1600; mitigation can be dicey at 3200. Chroma noise is essentially eliminated. However, not surprisingly, this noise reduction comes at a price – loss of sharpness.
Chromatic aberration
In some situations with high-contrast fine detail, Mavic 3 can suffer from chromatic aberration similar to the first-generation Mavic Pro. Although Mavic 2 Pro significantly reduced chromatic aberration, Mavic 3 is a step backward. This is observed with the clear DJI lens cover; I haven’t tried it yet with the naked lens.
Shadow detail
Considering detail in the darkest shadow areas, Mavic 3 has a slight advantage, revealing details that Mavic 2 Pro cannot. The difference is quite small.
DJI has stated that the Mavic 3 main camera has 12.8 stops of dynamic range, which is not significantly greater than Mavic Air 2S or Mavic 2 Pro.
Highlight detail
Both the original Mavic Pro and the successor Mavic 2 Pro often failed to resolve subtle detail in highlights. This commonly manifests in architectural details that are white, such as clapboard siding and trim mouldings. Mavic 3 does show a slight improvement.
Images captured with Mavic 2 Pro – particularly images that include architecture – have commonly required a great deal of effort to safeguard highlight details. At the time of capture, exposure bracketing saves an additional exposure wherein the highlights are rendered with reduced brightness. In post-processing, that exposure is developed carefully and specifically for highlight details. Then those highlights are manually blended into the other exposure. Only time will tell if Mavic 3 eliminates the need for that extra work.
Remote control
Apart from the camera itself, I must mention the remote control. With the Mavic 2 Pro, I have very commonly used the camera control dial under the right index finger. With Mavic 3, the RC-N1 remote controller has no such control dial; exposure settings can only be controlled via touch-screen. The expensive RC Pro controller includes a dial for right index finger, which I vaguely believe controls camera zoom and I do not know if it can be used for exposure purposes. I did not spend the extra $1000 to get an RC Pro.
Here is my investigation of the Mavic 2 Pro, back when that was released in 2018:
On multiple occasions, my intended drone flight was defeated because my DJI drone refused to spin up the propellers. Although the flight was authorized by the FAA, the drone refused to launch. With proper planning, this problem is avoidable.
DJI drones include a safety feature known as geofencing, which is intended to prevent flying in areas that could be unsafe, particularly near airports. There are different systems for understanding the airspace, and the DJI system is entirely different than the system employed by the FAA.
Local airport facility grid
Anywhere around controlled airspace, maximum flight altitude is determined by a grid layered across a map; each grid square indicates a maximum altitude. As this is local to the facility/airport, it is commonly referred to as the facility grid.
A flight plan that does not exceed the stated maximum altitude can often be approved in seconds by a computer, without need for review by a person. This is made possible by a computerized system called LAANC (Low Altitude Authorization and Notification Capability). Submit your flight plan via a mobile app that supports LAANC. If you succeed in receiving authorization, you may need to export that information and then submit it to DJI to unlock your drone.
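Conceptually, the LAANC auto-approval decision is just a lookup against the facility grid. A toy sketch – the grid cells and ceilings here are invented for illustration, not real FAA data:

```python
# Hypothetical facility grid: each cell maps to a ceiling in
# feet AGL. These values are illustrative only.
FACILITY_GRID = {
    (0, 0): 0,    # e.g. directly on an approach path: no auto-approval
    (0, 1): 100,
    (1, 0): 200,
    (1, 1): 400,  # the Part 107 default ceiling
}

def auto_approvable(cell, planned_altitude_ft):
    """A LAANC-style request can be machine-approved only when the
    planned altitude stays at or below the grid cell's ceiling."""
    ceiling = FACILITY_GRID.get(cell, 400)
    return planned_altitude_ft <= ceiling
```

A plan above the cell's ceiling isn't automatically denied in the real system; it falls back to manual review by air traffic control, which can take much longer.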
For years, I used a LAANC-enabled app called AirMap. Recently, that failed (and I found some rumors online why that might be true). Ultimately, I was forced to switch to a different LAANC-enabled app and I now use Aloft (https://www.aloft.ai/), formerly known as Kittyhawk.
If your drone is locked and will not launch, flight authorization from the FAA – by itself – does not unlock a DJI drone.
DJI Geofencing
Based upon the current GPS location, the drone is automatically aware of local flight restrictions. Potentially, it can refuse to take off. In some cases you may be able to unlock it from your flight controller; this is called self-unlocking. In other cases, self-unlocking is not allowed and you must request unlocking via DJI Fly Safe (https://www.dji.com/flysafe). FAA flight authorization is a prerequisite.
In advance of your flight, always check the DJI Fly Safe geofencing map. If your flight is either fully or partly in a blue zone or red zone, you will need to manually unlock the drone. The DJI GEO system shows approach paths to airport runways, and it is these areas that are likely to be considered no-fly zones. DJI did not invent this system; it is based upon LATAS (Low Altitude Traffic and Airspace Safety), which I have read was pioneered by PrecisionHawk.
Do not wait until you arrive at your launch location before checking that your drone will be able to launch. Research in advance: weather, FAA controlled-airspace restrictions, NOTAMs, and DJI GEO restrictions.
In specific geographic locations, your drone controller may display “NFZ”, which means “No-Fly Zone”. When locked due to an NFZ, the drone can only be unlocked via the DJI Fly Safe website; it cannot be unlocked via self-unlocking.
(I remember a conversation with a police officer in Boston when he asked to check my flight authorization. When I told him the drone would not launch in a specific location, he suggested that the pilot can simply unlock it and I told him that is not always true. Clearly this guy does not have personal experience flying DJI drones within the class-B airspace of Boston Logan airport.)
On mobile devices (e.g. a smartphone), the Fly Safe website reports that unlocking is not supported on the mobile website. For custom unlocking via the DJI Fly Safe web page, you probably need a full-screen computer. Do this at home before you drive off to your launch site.
Upon submitting your request, two things happen: you will receive an email stating “Unlock application is created,” and the web site shows the request as “Pending review.”
If all goes well, within about 10 minutes you will receive a subsequent email stating “Unlock application is accepted,” and the website shows the request as “Accepted.”
Your login username must match between the Fly Safe web site and the mobile app.
Import the unlock certificate to the aircraft
What happens next is not entirely obvious and requires a bit of care. You launch the app for piloting (e.g. DJI Go 4 or DJI Fly) and find the menu item “Unlocking License” (DJI Go 4 app) or “Unlock GEO Zone” (DJI Fly app). The app retrieves any unlocking authorizations via the Internet. This requires two things. You must be connected to Internet data (e.g. Wi-Fi or a cellular data network). And whenever you launch the app, you must be logged in with a username matching the one used when requesting the unlock. (I once stumbled because I had inadvertently used a different login; the unlocking license could not be found, and my intended flight did not happen until after I solved the mystery.)
If all goes well, your unlocking authorization will be listed. You’re not quite done yet; there are two more steps to unlock the drone. Though the license is recognized by the remote controller, it must be copied to the drone. Look for “Import to Aircraft”. Do that, and the app will show that the drone has the license, which appears with an on-screen enable/disable switch. As the default is “disabled”, you must slide it to “enable” before the drone will finally unlock the NFZ. The display on the remote controller will change from “NFZ” to “Ready”.
NOTAMs (Notices to Airmen)
Mobile apps that support LAANC will show you both the boundaries of controlled airspace (class-B, class-C, class-D, class-E ground level) and all local facility grids. However, most of these apps do not show active NOTAMs. To see active NOTAMs, simply look at Skyvector.com. Active NOTAMs appear as red circles.
To stabilize a camera for video filming, we have seen several types of stabilization:
(1) Large Steadicam body-mounted on a vest (invented around 1975)
(2) Hand-held stabilizers that rely upon counter-balance weights
(3) Computerized gimbals operated with two hands, supporting cinema cameras such as RED, Sony, Canon, etc.
(4) Computerized gimbals that can be held with just one hand, supporting smaller cameras
(5) Very small devices including camera and gimbal, with a total weight of 16 oz or less
(6) In-camera mechanical stabilization, either lens-shift or sensor-shift
(7) Digital image stabilization
Computerized gimbals have been a game changer, invented around 2012. The larger 2-handed category has been dominated by Freefly MoVI series and DJI Ronin series of products. These systems do not include camera, video monitoring, or follow-focus.
Today, DJI introduced another game-changer – the new DJI Ronin 4D. This is no longer just a stabilization device, rather it is a complete system, including the camera. Surely many film-makers will not readily abandon their trusted cameras and lenses, but at first glance, Ronin 4D does seem to be a game-changer.
Cost: the complete system is less than $10K. Compare this to assembling a comparable system from separate components: either a RED Komodo or Canon C300 Mark III will set you back more than $8K, and that does not even include a lens. The built-in ND filters are a pretty big deal and can potentially eliminate the need for a bulky matte box. The LIDAR system looks truly amazing.
The Ronin 4D Cinematic Imaging System includes:
Cinema camera: 6K @ 120 fps or 8K @ 75 fps
10-bit Apple ProRes
Six built-in ND filters
Computerized 6-Axis stabilization gimbal
7″ touchscreen video monitor, detachable and wireless
LIDAR focusing system
Long-range wireless 1080p video transmission (with encryption and frequency hopping)
In addition to visiting Gloucester this weekend, I also ported all my photography and tools to a new computer. As I imported new images from a camera drone, I took the new computer on a test drive to verify that my tools were all in good order.
This scene had both very bright highlights and very dark shadows; I doubted that a single exposure could contain both the highlights and shadows. As you likely know, such situations are known as high dynamic range (HDR). I captured a bracket of three exposures. In retrospect, it was a wise choice. The middle exposure was spot on, however the foreground was nearly black and some background highlights were blown out – white boats and white buildings. The darker exposure provided correction for the blown-out highlights. The lightest exposure was used to replace the black foreground water with dark-blue water.
Initially, I processed each of the three in Lightroom and then combined them using Photoshop. From Lightroom, open the three images using “Edit In” -> “Open As Layers In Photoshop”. Once opened in Photoshop, select all three layers and choose “Edit” -> “Auto-Align Layers”. Here, there are six Projection options; I chose “Reposition” because the three images were identical compositions that varied only by exposure.
A selection of the highlights was applied as a layer mask on the darkest layer, such that only the highlights are used from that layer. A selection of the foreground dark water was applied as a layer mask to the brightest layer such that the foreground is lightened. The resulting image is shown here on the right.
From there, I applied three image filters by Alien Skin. First was Bokeh, to blur the image – except for the schooner. Then I used two different variations of Snap Art. All this was done within Photoshop. Upon saving all of this as a TIFF file, I was back in Lightroom, where judicious use of brightness, clarity, and color saturation enhanced the simulated brush strokes. The end result is shown here on the left.