Amazon has the Canon EOS 80D 24MP Digital SLR Camera with the 18-135mm f/3.5-5.6 Image Stabilization USM Lens for $1169 with free shipping. This kit normally goes for $1500+.
via Ben’s Bargains – Most Recent Cameras Offers http://ift.tt/2lQv7k2
Two recurring questions that we often see from photographers are: “I have color management properly set up on my computer; why is it that the color is different between an out-of-camera JPEG and, say, Lightroom (substitute with your favorite 3rd-party converter)?” and “Why is it that the particular color on a photo is different from the actual color?”. In this article, we will go over why color from images is reproduced differently on camera LCD screens and monitors, and the steps you can take to achieve more accurate colors.
Though these two questions seem to be different, they still have much in common. That’s the part where we need to consider the stages it takes to get from captured data to color, as well as the limitations of color models and output media (monitor and print, mostly) when it comes to color reproduction.
The goals of this article are twofold: the first is to demonstrate that out-of-camera JPEGs, including in-camera previews, can't be used uncritically to evaluate color (as we already know, the in-camera histogram is misleading, too). The second is to show that even the converter recommended by the camera manufacturer isn't necessarily tuned to match the out-of-camera JPEG.
Let’s start with an example.
Recently, I got an email from a photographer asking essentially the same question we quoted in the beginning: how is it that the color on an out-of-camera JPEG is nothing like the color on the original subject of the shot? The photographer was taking shots of blue herons and hummingbirds, relying on the previews to evaluate the shots, and was rather confused: the camera was displaying strongly distorted blues in the sky and on the birds. One can say that the camera's LCD and EVF are “calibrated” to an unknown specification, so this “calibration” and viewing conditions might be what causes the color issue. However, the color on a computer monitor also looked wrong. Naturally, the photographer decided to dig deeper, and to take a picture of something blue to check the accuracy of the color. The test subject was a piece of stained glass, and …(drumroll, please)… the out-of-camera JPEGs looked off not just on the camera display, but (as expected from examining the shots of birds and sky) on a computer monitor as well.
Here is the out-of-camera JPEG (the camera was set to sRGB, and the photographer told me that setting it to Adobe RGB didn’t really make much of a difference). The region of interest was the pane of glass in the middle, the one that looks cyan-ish-colored.
The photographer said it was painted in a much deeper blue. Obviously, I asked for details and got a raw file. I looked into the metadata (EXIF/Makernotes) and ruled out any issues with the camera settings – they were all standard. Opening the raw file in Adobe Camera Raw with “as shot” white balance, I got a much more reasonable blue, and the photographer confirmed that it looked much closer to the real thing, maybe lacking a tad of depth in the blue, as if from a slightly different palette. So, this is not a problem with white balance. Moreover, the default conversion in ACR proved that the color could be rendered better than on the out-of-camera JPEG, even with a 3rd-party converter.
The shot was taken with a SONY a6500, so my natural impulse was to recommend to the photographer to use the “SONY-recommended” converter, which happens to be Capture One (Phase One).
One thing to keep in mind as you’re reading this: this is in no way an attack on any specific product. You can check for yourself and find if this effect occurs with your camera and preferred RAW converter. The reason we’re using this as an example is just because it happened to fall into our lap. That said, we certainly wouldn’t mind if SONY and Phase One fixed this issue.
Back to our image. Here comes the unpleasant part.
The first thing I see upon opening the raw file in Capture One is the blue pane of glass covered with an “overexposure” overlay. That's easily fixed: changing the curve in Capture One from “normal film simulation” to linear makes the overexposure indication go away. Next, I move the exposure slider to +0.66 EV: overexposure doesn't kick in (it takes +0.75 EV for faint overexposure overlay spots to appear). Here is the result; it's distinctly different in color from what we have in the embedded JPEG, in spite of the fact that white balance was left at “As Shot”, but it's still wrong, shifted toward purple:
Let’s have a closer look at the region of interest:
So, let’s reiterate the two points we made at the beginning:
When we say “render things differently”, we mean not just the obvious things like different color profiles (color transforms: ICC, DCP, or others) used in different converters, different contrast (tone) curves, or (minor) differences in sharpening and noise reduction; we also mean how white balance is applied.
Somehow it is often forgotten that the methods of white balance calculation and application are also different between various converters, leading to differences in color renditions.
In a lot of cases we see that discussions of white balance operate with color temperature and tint values. However, white balance is not measured as color temperature and tint – it is only reported this way. This report is an approximation only, and there are multiple ways to derive such a report from white balance measurement data. If you compare color temperature and tint readings from the same raw file in different converters, most probably you will find the readings are different. That’s because the methods of calculating color temperature (actually, correlated color temperature, CCT) and tint vary, and there is no exact or standard way to calculate those parameters from the primary data recorded in the camera (essentially, this primary data is ratios of red to green and blue to green for a neutral area in the scene, or for the whole scene, averaged, see “Gray World” method and its variations).
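The "Gray World" idea mentioned above is simple enough to sketch in a few lines of Python (a toy illustration; the function name and the sample pixel values are hypothetical, and real converters work on demosaiced or per-CFA-channel data with many refinements):

```python
def gray_world_wb(pixels):
    """Estimate white balance from (R, G, B) raw triples.

    Gray-world assumption: the scene averages out to neutral, so the
    per-channel means give the primary WB data -- the red-to-green and
    blue-to-green ratios -- from which multipliers are derived.
    """
    n = len(pixels)
    avg_r = sum(p[0] for p in pixels) / n
    avg_g = sum(p[1] for p in pixels) / n
    avg_b = sum(p[2] for p in pixels) / n
    # Primary data: ratios of red and blue to green for the "neutral" scene
    r_ratio, b_ratio = avg_r / avg_g, avg_b / avg_g
    # Multipliers a converter would apply to neutralize the color cast
    return 1.0 / r_ratio, 1.0 / b_ratio

# A warm-cast scene: red reads high relative to green, blue reads low
mults = gray_world_wb([(220, 180, 120), (180, 150, 100), (90, 75, 50)])
```

Note that the function returns the ratio-based multipliers directly; converting them into a color temperature and tint report is exactly the ambiguous step the paragraph above describes, and different converters do it differently.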
Consider the Canon EOS 5D Mark II: across different camera samples, the Flash color temperature preset in the EXIF data of unmodified raw files varies from 6089 K to 6129 K. Across a range of Canon camera models, the Flash preset varies from 6089 K on the Canon EOS 5D Mark II to 7030 K on the Canon EOS 60D; on the Canon EOS M3 it reaches 8652 K. Meanwhile, Adobe converters use 5500 K as the Flash preset for every camera. If you dig deeper, the variations in tint are also rather impressive.
Quite often the color temperature and tint readings differ between converters even when you establish white balance using the “click on gray” method.
Some converters calculate white balance back from the color temperature and tint, calculated using various methods; some (like Adobe) re-calculate color transform matrices based on color temperature and tint; while some apply the white balance coefficients (those ratios we mentioned above). Obviously, the neutrals will look nearly the same, but overall color changes depending on the method in use and the color transform matrices a converter has for the camera.
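The consequence of these different application methods can be sketched numerically. Below is a toy model (all matrices and multipliers are hypothetical, not taken from any real converter): path A applies per-channel WB multipliers and then a fixed camera-to-RGB matrix, while path B uses a single matrix re-derived for the estimated illuminant. Both are normalized so that a neutral gray comes out neutral, yet they disagree on saturated colors:

```python
def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

WB = (1.8, 1.0, 1.4)                    # hypothetical R/G/B multipliers
M_FIXED = [[ 1.6, -0.4, -0.2],          # hypothetical camera->RGB matrix;
           [-0.3,  1.4, -0.1],          # rows sum to 1 so neutrals stay neutral
           [ 0.0, -0.5,  1.5]]

def path_a(raw):
    """Apply WB coefficients first, then a fixed color matrix."""
    balanced = tuple(c * k for c, k in zip(raw, WB))
    return mat_vec(M_FIXED, balanced)

# Path B: a matrix tuned for this illuminant (WB folded in, with slightly
# different off-diagonal terms -- a stand-in for re-derived transforms)
M_ILLUM = [[ 1.6 * 1.8, -0.35, -0.25 * 1.4],
           [-0.3 * 1.8,  1.35, -0.05 * 1.4],
           [ 0.05 * 1.8, -0.55,  1.5 * 1.4]]

def path_b(raw):
    """Single combined matrix, re-derived for the illuminant."""
    return mat_vec(M_ILLUM, raw)
```

Feeding a neutral raw triple (G scaled by the inverse WB ratios) through either path yields the same gray; feeding a saturated raw color yields visibly different outputs, which is precisely the "neutrals look the same, overall color differs" effect described above.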
Of course, it is rather strange that Capture One indicates overexposure in the default mode. Opening the raw file in RawDigger or FastRawViewer makes it clear that the raw is not even close to overexposure; it's easily 1 1/3 EV below the saturation point [the maximum value in the shot is 6316 (red rectangle on the left histogram), while the camera can go as high as 16116: log2(16116/6316) = 1.35]. If the exposure compensation is raised to +1.5 EV, only 194 pixels in the green channels are clipped (red rectangle in the “OE+Corr” column of the Exposure Stat panel), as the statistics in FastRawViewer demonstrate.
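The headroom arithmetic above is simply a base-2 logarithm of the ratio between the sensor's saturation level and the brightest raw value in the shot; a one-line check (the function name is ours):

```python
import math

def headroom_ev(max_value_in_shot, saturation_level):
    """EV of headroom left before raw data clips."""
    return math.log2(saturation_level / max_value_in_shot)

# Values from the a6500 example above: 6316 max vs. 16116 saturation
print(round(headroom_ev(6316, 16116), 2))  # -> 1.35
```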
So, Capture One is indicating “overexposure” for no good reason, effectively cutting more than 1 stop of dynamic range from the highlights in the default film simulation mode, and about 1/3 EV in linear mode.
Now completely hooked, I downloaded a scene from Imaging Resource that was shot with the same SONY a6500 model, but of course a different camera sample was used. Let’s look at embedded vs. default Capture One, both sRGB, embedded first:
Now, to Capture One’s “all defaults” rendition:
I'm left completely mind-boggled: comparing side by side, it is easy to see not only the differences in the yellows, reds, deep blues, and purples, but also the different level of color “flatness”; for example, compare the color contrast and the amount of color detail in the threads:
For Figure 10, we are not suggesting you pick the best, or the sharpest, rendition; we are just pointing out how different the renditions are. Look, for example, at the crayon box. The SONY JPEG is a very cold yellow, nearly greenish, like Pantone 611 C, while Capture One rendered it a warm yellow, slightly reddish, like Pantone 117 C. The red stripes on the “Fiddler's” label of the bottle: JPEG – close to Pantone 180 C; Capture One rendition – close to Pantone 7418 C. The deep purple hunk (eighth from the right): JPEG – Pantone 161 C; Capture One rendition – Pantone 269 C. Another portion of the crayon box, the strip that is supposed to be green: in the JPEG it is a muted green, while Capture One rendered it into a more pure and saturated variety.
Finally, I took a scene from DPReview, put it through PatchTool, and came up with the following color differences report for the embedded JPEG vs. Capture One’s version (I used dE94 metric because I think there’s too much of a difference for dE00 to be applicable):
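For readers who want to reproduce such a report themselves, here is a minimal implementation of the dE94 metric (graphic-arts weights, kL = kC = kH = 1; a sketch of the standard formula, not the exact code PatchTool uses):

```python
from math import sqrt

def delta_e94(lab_ref, lab_sample):
    """CIE dE94 color difference between two CIELAB triples.

    Asymmetric by definition: the weighting functions SC and SH are
    computed from the chroma of the reference color.
    """
    L1, a1, b1 = lab_ref
    L2, a2, b2 = lab_sample
    dL = L1 - L2
    C1 = sqrt(a1 * a1 + b1 * b1)
    C2 = sqrt(a2 * a2 + b2 * b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    # dH^2 can dip slightly below zero from rounding; clamp at zero
    dH2 = max(da * da + db * db - dC * dC, 0.0)
    sC = 1.0 + 0.045 * C1   # K1 = 0.045 (graphic arts)
    sH = 1.0 + 0.015 * C1   # K2 = 0.015 (graphic arts)
    return sqrt(dL * dL + (dC / sC) ** 2 + dH2 / (sH * sH))
```

With this in hand, comparing the two renditions patch by patch is just a matter of converting each patch to CIELAB and tabulating the differences.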
The question we’re left with: how is it that the color is so different and so wrong?
How does it happen that different converters render the same color differently and incorrectly?
The real problem is a combination of:
Because of that non-uniformity the hue angle is not constant, and substituting an out-of-gamut color with a less saturated color of the same hue number (we need to decrease saturation in order to fit into the gamut) results in hue discontinuity. “Blue-turns-purple” and “purple-turns-blue” are quite common problems, caused by exactly this color model’s perceptual inaccuracies. Another hue twist causes a “red-turned-orange” effect (we suggested an example in the beginning of this article). With certain colors (often called “memory colors”), the problem really catches the eye. The problem also causes a perceived change in color with any brightness changes.
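The hue twist from naive gamut fitting can be demonstrated with a toy computation (a sketch only, not any converter's actual algorithm): clipping an out-of-range color channel by channel lands on a different hue than scaling the whole color uniformly, because clipping changes the ratios between channels.

```python
import colorsys

def hue(rgb):
    """Hue (0..1) of an RGB triple; ratios normalized to fit 0..1."""
    m = max(rgb)
    r, g, b = (c / m for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)[0]

# A deep blue that falls outside the destination gamut (B > 1.0)
out_of_gamut = (0.2, 0.1, 1.3)

clipped = tuple(min(c, 1.0) for c in out_of_gamut)          # naive per-channel clip
scaled = tuple(c / max(out_of_gamut) for c in out_of_gamut)  # uniform scale

# Uniform scaling preserves channel ratios and hence the hue angle;
# per-channel clipping shifts the hue -- the "blue-turns-purple" mechanism
```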
One of the things that we have come to expect from color management is consistent and stable color. That is, if the color management is properly set up, we expect the color to be the same on different monitors and printers, less the constraints of the color gamut of those output devices.
Color management maintains color in a consistent manner by mapping the color numbers in one color space to the color numbers in a different color space, taking corrective measures when the source color space and the destination color space have different gamuts (those measures depend on the colorimetric intent stated for the conversion, and on the particular implementation of that intent). In short, color management is guided by a strict and rather unambiguous set of rules.
Why is it that we do not enjoy the same color consistency and stability when converting RAW, even when the utmost care is used to apply the correct white balance?
If you have been reading carefully, you may be asking why we are not limiting this to cross-converter consistency. The answer is: a color model used in a converter may be very good in general, but not very suitable for some particular shooting conditions or particular colors. Worse, some types of lights, like the mercury vapor lamps used for street lighting, certain fluorescent bulbs, and some “white” LEDs, have such strong light spectrum deficiencies that color consistency is out of the question. Oddly enough, some not-so-good color models behave better when dealing with low-quality lights.
And while we are discussing consistency, there is another problem. The question “why do my consecutive indoor sports shots have different color/brightness” is also among the recurring ones. The reason for this is unrelated to RAW processing and equally affects both RAW and JPEGs: some light sources flicker. That is, for the same white balance and exposure set in the camera, the result depends on what part of the light cycle you are capturing. For example, ordinary fluorescent lights tend to flicker each half-period of the mains supply frequency. Because of that flicker, it is safe to shoot at shutter speeds of X/(2 × mains frequency), X being 1, 2, 3, …, n, as full bulb cycles are captured this way; with 60 Hz mains, safe speeds are 1/120, 1/60, 1/40 (if your camera has it), 1/30, …, while for 50 Hz they are 1/100, 1/50, … You can test lights for flicker by setting your camera to a fixed white balance, like fluorescent, and shooting at different shutter speeds, say, 1/200 and 1/30. If the color changes between the shots, it is flicker. Of course, nearly the same is true when shooting monitor screens and various LCDs: if the refresh rate is 60 Hz, for consistent results try shooting with a shutter speed of 2X/60, again with X being 1, 2, 3, … Some modern cameras help reduce this problem by synchronizing the start of the exposure with the light cycle. However, for lights with a non-smooth spectrum that changes during the cycle this is not a complete solution; the shutter speed still needs to be set according to the flicker frequency.
When we attempt to apply familiar color management logic to digital cameras, we need to realize that color management, somewhat tautologically, manages colors; and it can't be applied directly to RAW numbers – there is no color in raw image data to begin with. Raw image data is just a set of light measurements from a scene. Those measurements (except, for now, for Foveon sensors) are taken through color filters (a color filter array, CFA), but the regular in-camera filtration (and this includes Foveon sensors) does not result in something one can unambiguously map to a reference color space, hence such measurements do not constitute color. But again, color management deals with color, and for color management to kick in we first need to convert the measurements contained in raw image data to color.
Filtrations that result in color spaces that can be mapped to reference color spaces do exist, but currently they work acceptably well only with smooth, continuous, non-spiky spectra – that is, many light sources and many artificial color pigments will cause extreme metameric errors. On top of that, such filters have very low transmittance, demanding an increase in exposure that isn't acceptable for general-purpose cameras. However, a CFA is not the only possible filtration method, and 3-color filtration has alternatives.
So, what's the problem? Why can't we just … convert raw data numbers to color? Well, we can, but it is not an unambiguous conversion. This ambiguity is among the main reasons for the differences in output color.
Why is it ambiguous? Because we need to fit the measurements made by the camera into the color gamut of some “regular” color space: the profile connection space, the working color space, or the output color space. That's the bit of alchemy we need here; we're performing a transmutation between two different physical essences. To better understand the problem, we need to take a short excursion into some color science concepts.
The term “color gamut” is commonly abused; in many cases we hear chatter discussing “camera gamut” and such. Let’s try to address this misconception because it’s an important one for the topic at hand.
Color gamut is defined as the entire range of colors available at an output, be it a TV, a projector, a monitor, a printer, or a working color space. In other words, a color gamut pertains to working color spaces and devices that render color for output. Digital cameras, however, are input devices. They just measure light, and the concept of the color gamut is not relevant for such measurements: a gamut implies some limited range, a subset of something, but a sensor responds in some way to all visible colors present in a scene. Also, sensors are capable of responding to color at very low luminance levels, where our ability to discriminate colors is decreased or even absent. More than that: the range of wavelengths a sensor records is wider than the visible spectrum and not limited by the CIE chromaticity diagram; that's why UV and IR photography is possible even with a non-modified camera. As you can see, the term color gamut does not apply to RAW. Only the range of relative lightnesses of colors limits the sensor response, and that is a whole different matter – dynamic range.
Thus, a sensor doesn't have a gamut, and there is no single, standard, or even preferred set of rules defining how we map larger into smaller, raw data numbers to color numbers – nothing like what we have in color management. One needs to be creative here, making trade-offs to achieve agreeable, expected, and pleasing color most of the time.
– OK, and what happens when we set a camera to sRGB or Adobe RGB? Those do have gamuts!
– Well, nothing happens to the raw data; only a tag indicating the preferred rendering changes in the metadata, and the out-of-camera JPEGs, including the JPEG preview(s) embedded into the raw file and external JPEGs, are rendered accordingly. Whatever color space you set your camera to, only the JPEG data and, consequently, the in-camera histogram are affected. Here is a curveball: pseudo-raw files, like some small RAW variants (sRAW), which are in fact not raw but JPEGs, have white balance applied to them.
Color is a sensation, meaning color simply does not exist outside of our perception, and so we need to map measurements to sensation. In other words, we need a bridge between the compositions of wavelengths (together with their intensities) that our eye registers and the colors that we perceive. Such a bridge, or mapping, is called a color matching function (CMF), or an observer. It tries to emulate the way we humans process the information our eyes gather from a scene. In other words, observers model typical human perception, based on experimental data.
And here comes yet another source of ambiguity: the spectral response functions (SRFs) of the sensors we have in our cameras do not match typical human perception.
From Figure 12 it is pretty obvious that there is no simple transform that can convert camera RGB output into what we perceive. Moreover, the above graph is based on data at nearly the hottest exposure possible (white with faint texture is close to 92% of the maximum). When the exposure is decreased (say, the blue patch on the ColorChecker is about 4 stops darker than the white one), restoring the hue of a dark saturated blue becomes more problematic, because the red curve flattens a lot and small changes in the red response become comparable to noise – but we need that red to identify the correct hue of the blue. Now, suppose you are (mis-)led by the in-camera exposure meter, in-camera histogram, and / or “blinkies” into underexposing the scene by a stop; and there are surely darker blues in real life than that blue patch on the ColorChecker… That's how color becomes unstable, and that's how it comes to depend on exposure.
This difference between SRFs and LMS leads to what is known as metameric error: wavelength / intensity combinations that look the same to a human (that is, we recognize them as having the same color) are recorded by a camera as different, with different raw numbers. This is especially the case with colors at both ends of the lightness scale, dark colors and light colors, as well as with low-saturation, close-to-neutral pastel colors. The reverse also happens: colors that are recorded identically in raw data look different to a human. Metameric error can't be corrected through any math, as the spectral information characterizing the scene is absent by the time we deal with raw data. This makes exact, unambiguous color reproduction impossible.
What follows from here is that instead of talking about some vague “sensor color reproduction” deficiencies, we can operate with metameric error, comparing sensors over this defined parameter. Incidentally, this characteristic can be calculated independently of any raw converter, as a characteristic of the camera per se; but it can also be used to evaluate mappings produced by raw converters. However, measuring metameric error by shooting targets is a limited method. To quote the ISO 17321-1:2012 standard, the method based on shooting targets (the standard refers to it as method B) “can only provide accurate characterization data to the extent that the target spectral characteristics match those of the scene or original to be photographed”; that is, it is mostly suitable for in-studio reproduction work.
To reiterate: what immediately follows from sensors having no gamuts and their spectral response functions differing from what we have as our perception mechanism is this: raw data needs to be interpreted to fit the output or working color space gamut (sRGB, Adobe RGB, printer profile…), and some approximate transform between a sensor’s spectral response functions and the human observer needs to be applied.
There are multiple ways to perform such an approximate transform, depending on the constraints and assumptions involved. Some of those ways are better than others. By the way, “better” needs to be defined here. When it comes to “optimum reproduction”, it can be “appearance matching”, “colorimetric matching”, or something in between. That is, “better” is pretty subjective; it is a matter of interpretation, and quite often an aesthetic call on the part of the camera or raw converter manufacturer, especially if one is using default out-of-camera or out-of-converter color. It's actually the same as with film: accurate color reproduction was never a goal for most popular emulsions, but pleasing color was.
Earlier, we mentioned that there are two major reasons for output color differences. We discussed the ambiguity, and now let’s get to the second one, the procedure and the quality of the measurements that are used to calculate color transforms for the mapping of raw data to color data.
Imagine you are shooting one of those color targets we use for profiling, like a ColorChecker. What is the light that you are going to use for the shot? It seems logical to use the illuminant that matches the one the future profile will be based upon. However, standard color spaces are based on synthetic illuminants, mostly D50 and D65 (except for two: CIE RGB, based on the synthetic illuminant E, and NTSC, based on illuminant C, which can hardly be used for studio lighting – one needs a filter composed of two water-based solutions to produce it). It is rather problematic to directly obtain camera data for D-series illuminants simply because they are synthetic, and it is very hard, if possible at all, to come by studio lights matching, for example, the D65 spectrum accurately enough.
To compensate for the mismatch between the actual in-studio illuminant and the synthetic illuminant, profiling software needs to resort to one of the approximate transforms from studio lighting to standard illuminants. The accuracy of such a transform is very important, and the transform itself is often based not only on accuracy, but also on perceptual quality. Transforms work over rather narrow ranges; don't expect to shoot a target under some incandescent light and produce a good D65-based profile. This, of course, is not the only problem responsible for color differences while obtaining source data to calculate color transforms; others include problems with the light and camera setup, as well as the choice of targets and the accuracy of target reference files.
This is in no way to say that shooting a ColorChecker does not lead to usable results; we provide an example of its usefulness toward the end of this article. Yet another (but minor, compared to the two above) consideration is that color science is imperfect, especially when it comes to describing human perception of color (remember those observers we mentioned earlier?). Some manufacturers use more recent, more reliable models of human perception, while others may be stuck with older models and/or less precise methods of calculation.
To sum up, the interpretations differ depending on the manufacturer’s understanding of “likeable” color, processing speed limitations, the quality of the data that was used to calculate the necessary color transforms, the type of transforms (they can be anything starting from simple linear matrix to complex 3D-functions), the way white balance is calculated and applied, and even noise considerations (matrix transforms are usually smoother compared to transforms that employ complex look-up tables). All of these factors together form what is informally called the “color model”. Since the color models are different, the output color may be quite different between different converters, including the in-camera converter that produces out-of-camera JPEGs. By the way, you can see it is not always the case that an in-camera converter produces the most pleasant or accurate color.
And thus we feel that we have proved both statements that we made at the very beginning of this article:
So, we definitely know how we feel about it, but what can we do about it?
What can we do to ensure that our RAW converter renders the colors close to the colors we saw?
A custom camera profile can help with such issues. We calculated a camera profile for the SONY a6500, based on raw data extracted with RawDigger from the DPReview Studio Scene, and used this profile in our preferred RAW converter to open the source ARW. That's how we obtained the right-hand part of the figure below:
Here is the report of profile accuracy:
Looking at the profile accuracy report in Figure 14, one may notice that though the accuracy is pretty good, reds are generally reproduced with less error than blues, and the darker neutral patches E4 and D4 exhibit larger errors than the others. The main reason behind the irregularity in the reproduction of neutrals is likely that I was forced to use a generic ColorChecker reference, as DPReview does not offer a reference for the target they shoot. Profiling offers an approximation, a best fit, and it may be that the E4 and D4 patches on the target they use deviate from the generic reference rather significantly. The BabelColor website offers a very good comparison of target-to-target variation.
The imbalance between the error in reds and the error in blues can be attributed mainly to two factors: the first being the use of the generic reference we just mentioned, and the second being the sensitivity metamerism we discussed earlier in the article.
There are also some secondary factors to watch. It is difficult to make a good profile if the spectral power distribution of the studio lighting is not measured; flare and glare can reduce profile quality significantly, and so can light non-uniformity, be it in intensity alone or in different spectral composition of the lights at the sides of the target. However, the flat-field mode in RawDigger can help take care of light non-uniformity; please have a look at the “How to Compensate for Non-Uniform Target Illumination” chapter of our Obtaining Device Data for Color Profiling article. You can use RawDigger-generated CGATS files with the free ArgyllCMS, with MakeInputICC, our free GUI over ArgyllCMS (Windows version, OS X version), or with basICColor Input 5 (which includes a 14-day fully functional trial).
As we can see, there is certainly good value in a ColorChecker when it comes to creating camera profiles. Color becomes more stable and predictable, and overall color is improved. Even when the exact target measurements and light measurements are unknown, a ColorChecker still allows one to create a robust camera profile, taking care of most of the color twists and allowing for better color reproduction. Of course, you can use a ColorChecker SG or other dedicated camera targets, but due to their semi-gloss nature you may find them more difficult to shoot. So, before going to the next level with more advanced targets, sort out your shooting setup first so as to have as little flare as possible, and use a spectrophotometer to measure your ColorChecker target and your studio lights – this often proves more beneficial to color profile quality than jumping to a complex target.
via Photography Life http://ift.tt/2mhp09E
As photographers, we heavily rely on memory cards, because they store images captured by our cameras and we use them to transfer images to our computers / main storage. In some cases, photographers even rely on memory cards to be their secondary or tertiary backups when shooting in the field. The role of memory cards in a photography workflow should not be underestimated – a failed card may not only lead to many problems and frustrations, but can also create bigger problems, especially when dealing with commercial clients who could make the photographer liable for loss of their images. In this article, I will share some tips on how to properly use memory cards and how to take care of them based on my many years of experience, both as a photographer and as an IT professional.
In my opinion, nothing is worse than telling a newlywed couple that their entire wedding was lost due to a bad memory card. While a commercial photo shoot can be re-shot, even at great cost, it is nearly impossible to re-shoot a whole wedding. Therefore, it is important to understand that a memory card is not just a simple storage accessory; its role as a reliable storage medium should never be overlooked. Unfortunately, there is too much conflicting information on the Internet about how one should use and treat memory cards, with very little evidence behind it, which sadly leads to misunderstanding and misuse of memory cards in the field. So aside from standard recommendations, I will also go through such topics and explore them in detail, which will hopefully clear up some of the confusion.
With so many different memory card brands out there at varying pricing levels, it might be tempting to go for a much cheaper, no-name brand card. However, before you make your purchasing decision, you should seriously decide if you are willing to deal with potential failures and problems of such cards in the future. Also, factor in the cost of replacement – if you buy a no-name brand card and it fails, you will most likely be stepping up to get a much higher quality branded card, so your initial investment becomes a waste. If you start out with a good brand card and anything happens to it, you can count on the manufacturer’s warranty to get you a working replacement. Lastly, don’t forget to value your time as well! If a card fails, you might spend many hours trying to recover the content. Considering how cheap high quality memory cards have gotten nowadays, why even take the risk of choosing a no-name brand? Why not go with a good, reliable brand to begin with?
When it comes to memory card brands you can trust, I personally consider cards from SanDisk, Lexar, Samsung, Sony, Transcend, Kingston and PNY to be often quite good in quality. Over the years, I have used memory cards from all of these brands and found them to work quite well in cameras, with fairly solid performance and reliability. Although there is no statistical data on failure rates of different memory card brands, my personal bias has always been with SanDisk – although it is probably one of the most expensive brands out there, I have never seen a SanDisk memory card fail on me. I mostly own SanDisk Extreme Pro series CF and SD cards and they have always been the cards I pick for important photo shoots. Lexar CF cards have been pretty reliable, but I stay away from their SD cards. They gave me a lot of headaches, as documented in my Lexar SD card review – pins broke off on practically every memory card I used, and when I sent the cards in for replacement, I got refurbished cards that had been heavily used before. Since then, I have been buying SD cards from different manufacturers when they go on sale, to try them out. After testing the higher-end Samsung Pro+ SD cards, I realized that they are also excellent cards that rival the SanDisk Extreme Pro series in terms of performance and reliability, so Samsung became my second-favorite brand for SD cards. Sony has been a mixed bag – I bought a bunch of Sony pro-grade SD cards rated at 94 MB/s and although they were very cheap, their reliability has been pretty bad: two cards out of eight failed within a year of use. If I need cheap memory cards for other needs like backup, I might also look at brands like Transcend, Kingston and PNY, which often sell for great prices, especially during holidays when inventories need to be cleared.
This is obviously my personal experience with these brands and cards – your mileage may certainly vary, and you might find one brand to be more reliable than another based on cards that have failed you in the past. It is also worth pointing out that memory card specifications and features change every year, so if you have experienced a problem with one particular model, it does not mean that the next model will be as bad. Based on my research and past experience, failure rates vary greatly among memory card manufacturers, and it is impossible to say that one brand will always be better than another. There are too many brands, models, features and capacities out there to compile meaningful statistics comparing memory card brands for reliability. For me, brands like SanDisk and Samsung are trustworthy, because these two companies are known to make their own memory chips and their quality control is excellent. Every computer I build has a Samsung SSD or PCIe NVMe drive, so my trust in Samsung goes beyond memory cards. In comparison, many other brands simply slap their stickers on OEM memory cards…
So if you want fewer headaches in the future, make sure to buy memory card brands you can trust!
Once you know which brand of memory cards you want to buy, make sure to buy them from authorized sellers. This is even more important than #1, because there are too many fake memory cards out there! Remember, all SD cards look more or less the same, so if someone slaps a SanDisk label on an OEM card, you would have no idea that you are dealing with a fake. Some Chinese manufacturers find ways not only to fake the memory cards themselves, but also to closely imitate the original retail packaging, making the card look quite authentic. So if you find a card you like at B&H Photo Video or Adorama but the price looks a bit too steep for your liking, don’t be fooled by something much cheaper on eBay. Companies like SanDisk dictate their pricing to retailers, so if their pricing changes at one retailer, it should be mirrored at another (unless the sale is a one-off exclusive, such as the Deal of the Day at Amazon). If you shop for memory cards at Amazon, make sure that the card is sold and shipped by Amazon – there have been reported cases of fake memory cards being sold by third-party sellers there.
Even if you find a good deal on used or refurbished memory cards, I would highly discourage you from buying them. The problem with used and refurbished memory cards is that you don’t know how heavily those cards were used before you got them. If the previous owner used them heavily, the memory cells have less life left in them, so you might start encountering issues sooner than you would expect, especially if you are a busy photographer. Unlike most cameras, which can show you the total number of shutter actuations, memory cards do not keep track of how many write operations have taken place. So if someone tells you that their card was “barely used”, there is no way to check whether the seller is telling the truth. In addition, you don’t know how well the previous owner took care of the cards, and if there is any warranty left, you most likely will not be able to transfer it to your name. Memory cards are incredibly cheap nowadays, so I would not try to save a buck or two by buying used ones.
Sadly, for most memory card manufacturers, it is all about the labels and the numbers they can slap on their memory cards to boost sales, which means that you should not expect real, honest information on those labels. If a memory card says that it can do 95 MB per second, it does not mean that the card will actually reach those speeds. In fact, did you know that the published transfer numbers on memory cards often reflect only the read speed, not the write speed? So if you see something like a 95 MB/sec transfer rate on a memory card, that only shows the potential speed of a transfer from your memory card to your computer. The word “potential” is key here, because those advertised speeds are the maximum theoretical speeds a memory card can reach when doing sequential reads of large files.
Over the years, I have found that most memory cards cannot reach their maximum advertised speeds, which is disappointing. So if you buy a memory card that claims fast read and write transfer rates, try copying large files to and from the memory card to see if those numbers reflect reality. But do make sure that you have a fast enough reader that can actually take advantage of those speeds (see #6 below for more information). I have tried to test my SanDisk Extreme Pro cards, which claim up to 95 MB/sec read and 90 MB/sec write rates, and I have never been able to reach those speeds. The most I was able to get was around 85 MB/sec read and 73 MB/sec write on those cards – and they were the best of the bunch; others were even worse in comparison.
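If you want to put rough numbers on this yourself, a simple benchmark is easy to script. Here is a minimal sketch in Python – the mount point is a hypothetical example, so substitute wherever your card is mounted – that times a sequential write followed by a sequential read:

```python
import os
import time

def benchmark_card(path, size_mb=256):
    """Roughly measure sequential write and read speed (MB/s) at `path`.

    `path` should point at the mounted memory card, e.g. "/Volumes/SD_CARD"
    (a hypothetical mount point -- substitute your own).
    """
    test_file = os.path.join(path, "speed_test.bin")
    chunk = os.urandom(1024 * 1024)  # 1 MB of random data

    # Sequential write, flushed to the card before stopping the clock
    start = time.perf_counter()
    with open(test_file, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hits the card
    write_speed = size_mb / (time.perf_counter() - start)

    # Sequential read of the same file
    start = time.perf_counter()
    with open(test_file, "rb") as f:
        while f.read(1024 * 1024):
            pass
    read_speed = size_mb / (time.perf_counter() - start)

    os.remove(test_file)  # clean up the test file
    return read_speed, write_speed
```

One caveat: the read pass may be served from the operating system’s cache rather than the card itself, which inflates the read number. For a fairer read test, eject and re-insert the card (or otherwise flush the cache) between the write and read passes.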
So when you are evaluating a memory card for purchase, don’t just look at the label – pay close attention to the detailed specifications that show not just the maximum read speed, but also the maximum write speed. Then, once you receive the card, make sure to test it out. If transfer speeds are incredibly slow, you might be dealing with a fake memory card.
Another tip: don’t buy the super large capacity memory cards. If you average a few hundred shots per memory card, that’s good enough – you don’t need a card that can accommodate thousands of pictures (an exception would be wildlife photographers, who shoot a lot of frames). Why? Because if that super large capacity card fails, you will lose everything on it. This is especially important for those who shoot critical projects and events. If you are on the trip of a lifetime, it might sound appealing to use one single memory card and not worry about swapping cards. But if anything happens to that card and you don’t have backups, you will lose all the pictures from that trip. Unless you have a very solid backup workflow, where you make sure to back up images after each shoot, you should not stock up on large capacity memory cards.
Now if you have an advanced camera with multiple memory card slots, having a single large capacity memory card could be useful. Many photographers use the second memory card slot as a “backup”, leaving a large capacity card in that slot without taking it out – they just replace the first memory card as needed. John Bosley does this for his wedding work, and he has yet to lose files from the important photo shoots and weddings he is involved in. If you follow a similar practice, large capacity memory cards are not necessarily evil. Just don’t keep all your precious work on one memory card without any backups!
Also, if you shoot with multiple memory card slots in overflow mode (one memory card fills up and the camera starts recording to the second one), try not to delete images using your camera! When shooting in overflow mode and deleting images, you never know which particular memory card contains which photos – the camera will automatically place images on the first card that has available space. This was a hard lesson for me: I was foolish enough to delete pictures from the main memory card and kept shooting my best pictures to it without any backups. I then managed to lose that memory card while traveling – the one that contained the most valuable pictures from the trip! I wrote about my experience in this article back in 2011 and even promised a reward if someone found my card and returned it, which sadly never happened.
How big should the capacity of memory cards be? Well, it all depends on the size of individual files your camera creates. If you shoot 14-bit uncompressed RAW, your files are going to take a lot of space and you might need a bigger card to accommodate a few hundred of those images. If you shoot losslessly compressed RAW with a low to medium resolution camera, you might get away with smaller cards. When shooting with my Nikon D750 and D810 cameras, I often shoot with 32 GB cards. But when doing video work, shooting with a higher resolution camera or shooting a lot of panoramas, I might use a 64 GB card. I only own a single 128 GB card, which I bought primarily for shooting wildlife and for keeping it as a backup.
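To put rough numbers on this, here is a quick back-of-the-envelope calculation in Python. The 7% file-system overhead factor is my own rough assumption, and note that card makers count 1 GB as 1000³ bytes while file sizes are usually reported in binary megabytes:

```python
def shots_per_card(card_gb, file_mb):
    """Ballpark estimate of how many images fit on a memory card.

    Card capacity uses decimal gigabytes (1000**3 bytes, as marketed),
    file size uses binary megabytes (1024**2 bytes). The 0.93 factor is
    a rough assumption for file-system/formatting overhead.
    """
    usable_bytes = card_gb * 1000**3 * 0.93
    file_bytes = file_mb * 1024**2
    return int(usable_bytes // file_bytes)

# A 32 GB card with ~45 MB 14-bit uncompressed RAW files:
print(shots_per_card(32, 45))  # -> 630, i.e. a few hundred frames
```

Plug in your own camera’s typical RAW size; a compressed-RAW, lower-resolution body will roughly double the count, which is why smaller cards go a long way.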
Keep in mind that memory card requirements will change over time. With the increase in resolution and bit depth of cameras, you might need to start moving up to larger capacity memory cards in the future.
While it is a myth that you must only buy memory card readers from reputable brands like SanDisk and Lexar, the chances of getting a reliable card reader from a better brand are higher, simply because of better quality control. Over the years, I have used many different memory card readers from Lexar, SanDisk and other third-party manufacturers. I have never managed to damage a memory card because of a bad card reader, but there is always the potential to damage a card, especially during write operations. If a memory card reader fails during a write operation, the chance of killing the card completely is pretty high. So as long as the memory card reader is not a really bad knock-off, it should do just fine. Most SD card readers built into laptops have the same OEM chips you will find in many standalone memory card readers, so for the most part, the underlying technology is the same. I personally prefer Lexar card readers over SanDisk ones, because I have found them to be reliable and fast. SanDisk just has not paid much attention to its card readers – they are often inferior to Lexar’s in terms of technology and features.
My favorite is the Lexar Professional Workflow, which in my opinion is the best card reader on the market thanks to its versatility – I can use the unit to transfer multiple memory cards at the same time and I could even add SSD storage to the unit for backups. If you don’t need such a setup, the much simpler and smaller Lexar Professional USB 3.0 Dual-Slot Reader is also superb and works out very well for both laptop and desktop use. I usually take the latter when I travel with my Surface Pro, since it does not have an SD card reader. But if your laptop already has an SD card reader, then you don’t need to get an external unit, unless your memory card reader is very old and it cannot support fast transfer speeds of the latest memory cards.
When using memory cards, it is always a good practice to format them in your camera – and preferably in the same camera you are going to shoot with. While it is not strictly necessary to format memory cards in-camera, and you could go through the same process on your computer, I find it simpler and faster to do it in my camera. When shooting with my Nikon DSLRs, all it takes to format a memory card is holding the two buttons with red labels, then confirming the process by pressing those two buttons again. I can format every card in my memory card holder in a minute or two, which is very convenient. The other reason you might want to format memory cards in your camera is that some camera brands like Sony create a small database / index of files on the memory card after formatting it, so if your memory card does not contain those files, the camera will complain that the database does not exist and will attempt to create the file structure and the database before the card can be used. Instead of going through these hassles, I find it better to format all of your memory cards in the same camera you are planning to shoot with. This way, you just pop a new memory card in and you are ready to keep shooting.
If you decide to format memory cards in your computer, make sure that you check the “Quick Format” option as seen below:
You do not want your computer to do a low-level format of your memory cards. In fact, performing a low-level format – where the computer goes through each memory block on your card and fills it with zeros – is bad for the overall health of your memory card, especially if you do it often, since those are write operations, and each memory cell on your card only has so many writes in it before it becomes unusable. Plus, low-level formatting takes forever to complete, and if you ever need to retrieve files after an accidental format, you will never be able to do so. When formatting memory cards in your camera or performing a Quick Format on your computer, the formatting process simply re-creates the index table that stores where files are physically located on the memory card, so existing files are simply overwritten when a new write operation takes place. You don’t see those files on your camera or your computer, but they are still there. That’s why it is possible to restore images from a memory card even after it has been formatted. It is important to note that memory cards up to 32 GB are typically formatted with the FAT32 file system, as seen above, whereas cards with larger capacities will be formatted with the exFAT file system, due to capacity and file size limitations. Keep this in mind when manually formatting cards on your computer.
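For reference, these defaults follow the SD specification: SDSC cards (up to 2 GB) use FAT12/16, SDHC cards (up to 32 GB) use FAT32, and SDXC cards (above 32 GB) use exFAT. A tiny sketch of that mapping, with the file-size caveat noted in a comment:

```python
def default_filesystem(capacity_gb):
    """Default file system per the SD card specification, by capacity."""
    if capacity_gb <= 2:
        return "FAT12/16"  # SDSC
    elif capacity_gb <= 32:
        return "FAT32"     # SDHC -- note FAT32's 4 GB per-file limit
    else:
        return "exFAT"     # SDXC -- needed for long video clips over 4 GB

print(default_filesystem(32))  # FAT32
print(default_filesystem(64))  # exFAT
```

The FAT32 per-file limit of 4 GB is the practical reason video shooters end up on exFAT: a single long clip can easily exceed it.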
Some people choose to move the contents of memory cards instead of formatting them. That’s a perfectly fine practice and there is nothing wrong with it, but I personally stay away from delete and move operations on my memory cards – reading the contents of a card is always going to be faster than read + delete. Also, I make a practice of not reusing the same memory cards when shooting in the field. If anything happens to my computer when traveling, I still want to have all of my images safe in another location – see my notes under #13 below on best practices for field use of memory cards and their potential value as secondary / tertiary backup.
Keeping your camera batteries charged is obviously a no-brainer. However, some of us are guilty of pushing cameras to their limits until batteries fully die. I never thought that a dead battery could cause a memory card to fail completely – until last year. John and I were recording our Level 1 Post-Processing and Workflow Course in the studio, and I did not pay attention to the battery level on the primary camera. The battery died while recording, and after I popped a new battery in, I saw the dreaded “ERR” message. I knew something was going on with the memory card, so I took it out of the camera and inserted it into my computer, where it was not recognized. The failed write operation killed the memory card so completely that I could not even format it anymore! Needless to say, we had to reshoot the whole section the next day, and it was not a pleasant experience. Ever since, we have been paying close attention to the battery level – the moment the camera flashes a red battery sign in live view, we swap the batteries out. Better safe than sorry!
Another obvious mistake to avoid is removing a memory card while any read or write operations are taking place. While a read operation might do no damage to the card, an interrupted write operation often causes memory card corruption, as happened in the situation described above. The same goes for interrupting write operations while your camera is taking a picture. If you want to stop a long exposure, don’t just remove the memory card – or worse, your camera battery! Powering off your camera should stop the long exposure and safely complete write operations, which is what you want. If something happens to your camera and it seems to be stuck, always wait for the memory card light to turn off before deciding to do anything drastic, like removing the camera battery. There have been cases of cameras being incompatible with particular memory cards, with write operations taking excessively long as a result. Some photographers were not patient enough to wait for the light to turn off and took the battery out, which often resulted in memory card corruption. If you take a picture and it takes over 5 seconds for the memory light to go off, you might want to stop using that memory card and replace it with a different one.
The same goes for unplugging memory cards from computers – you never want to just remove a card while data is being read from or written to the memory card (again, write operations are particularly evil). The best practice is to safely eject the memory card via your operating system before removing it, which can be easily done in both Windows and Mac operating systems.
If you are in a very dry environment and you wear clothes that gather a lot of static, you might want to avoid touching memory cards. While the exterior shell of CF and SD cards is made out of plastic, the pins that connect to devices are made from copper and other materials that conduct electricity. And since electrical components can easily get fried with a static charge, you want to avoid touching them. If I know that I might be carrying a static charge, I usually find a piece of metal, such as a doorknob, that I can use to discharge the built-up static before touching any electrical devices, including cameras and memory cards.
Memory card failures are pretty random. Some failures are temporary, with your camera reporting an error but then allowing you to continue shooting, while other errors are permanent – when there is more serious damage to the memory card. If you ever see an error on your camera while shooting to a particular memory card, stop shooting! The last thing you want is to make matters worse by adding more images to the card and potentially corrupting it even further. The moment you see an error, replace the memory card with another one. If the error persists with the new card, it might not be related to the memory card at all; but if the error disappears, your original card might be failing.
The best course of action in such situations is to insert your memory card into a memory card reader as soon as possible and try to copy all the files. If the files are not corrupted and you are able to successfully copy all the files, you need to know whether it was a temporary failure on your camera, or the start of a memory card failure. Once the transfer is done, perform a low-level format of the memory card on your computer. If formatting fails or you see errors during the process, it is time to either send the memory card to the manufacturer for a replacement, or if you are outside the warranty period, it is time to trash it. If the low-level format completes and you see no errors, then you should be safe to use it again – just monitor the card and if you ever see another error, it might be safer to get rid of it than to continue using it.
If you cannot retrieve files from a failed memory card, chances of being able to get those files back are very slim. You can try using recovery software to get the files back, but recovery typically only works on deleted files and formatted cards, not on failed cards. If the data on the memory card is extremely important, you might want to reach out to your local data recovery company and ask if they can help. Keep in mind that such companies often charge a lot of money to restore your data!
If you want to prolong the life of your memory cards, always protect them from extreme weather conditions. Never leave memory cards out in direct sun – keep in mind that memory card shells are made out of plastic, and plastic can easily deform in the heat. In addition, leaving memory cards in direct sun might damage their electrical components, which could cause them to fail. Another enemy of electronics is moisture, particularly water containing minerals. While pure H2O is not harmful in itself, saltwater and drinking water with minerals can cause an electrical short, which will certainly cause the device to fail. If you accidentally drop a memory card in water, always make sure to dry it out fully before attempting to read data from it. Make sure that the SD card is not just dry on the outside, but also fully dry inside the plastic shell.
My preferred way to keep my memory cards safe from damage is to always keep them in a protected case. I have been using the Pelican 0915 Memory Card Case for years now and I love it, because it has a water resistant seal when I close the card holder, which keeps my memory cards protected from potential water damage. For only $25, it is a cheap way to protect those valuable images!
When traveling, I always try to keep a backup of my images in at least two different locations. If all I have is a laptop with me, then the laptop becomes primary storage and the memory cards I have used become secondary storage. If I have a laptop and an external drive, then those two become primary and secondary, while my memory cards become tertiary storage. As soon as I fill up a memory card, I put it back into my memory card case backwards, as shown in the image below:
This way, I know exactly which memory cards have been used and which ones remain. I never format used cards until I get home and safely transfer everything to my main storage. I can only remember one case where I shot so much on a three-week-long trip that I ran out of cards; only after making sure that both my laptop and my external drive contained all the images did I finally format the largest capacity card I had with me. Since then, I have bought a few more cards so that I don’t run into this issue again.
Some people argue that a memory card should never be filled completely – that doing so will either make it slower or increase the potential for memory card failure. This is a big myth spread by people who don’t know what they are talking about. First, I have never seen a memory card fill up to the point where there is zero space left: when shooting, if the camera sees that an image will not fully fit on the memory card, it simply stops shooting and displays a “full” message. So the chances of completely using up the space on a memory card are very slim. Second, memory cards don’t work like other types of storage that might slow down when little space is left. Third, filling up a memory card does not increase its chances of failure. I have been shooting with digital cameras for over 10 years now, and I have never worried about stopping when the number of frames left is low. I always shoot until my cameras tell me that the memory card is full. Even when shooting video, I have managed to fill up cards, and I have never seen a card fail as a result.
Another myth is that deleting images in your camera can cause corruption and potential memory card failure. Again, I am not sure where such claims come from, but they have no scientific backing. There is no harm in deleting images in your camera, just as there is no harm in deleting images on your computer. I have been doing this for years and have never seen a card fail because of it! If I shoot a blurry or badly exposed image, I get rid of it as soon as I see it on the LCD. Why go through the hassle of culling and potentially importing unwanted images? There is no reason at all – feel free to delete images in your camera or on your computer and stop worrying about any potential harm.
The only case where you might want to avoid deleting images is when you have two memory cards set up in overflow mode. As I pointed out earlier, your camera will place images on the first memory card that has available space, so if you keep deleting images from one card after the camera has already started putting images on the other, you might create a mess. Also, if you do decide to delete images in your camera, take your time and delete only what you need – some cameras are designed to keep asking whether you want to delete images, and if you are not careful, you might accidentally delete previous images as well.
Some of us stockpile memory cards, thinking that we can use them forever. With any storage type, it is not a question of “if”, it is a question of “when”. Memory cards fail, and the more you use them, the more likely they are to fail at some point. So make it a good practice to replace memory cards every so often – maybe every 3-4 years, maybe more or less often, depending on how much you shoot. Also, keep in mind that newer memory cards are most likely going to be much faster and potentially even more reliable than really old ones – just make sure to check their specifications before buying. You do not want a newer generation memory card that might have compatibility issues with your older camera or memory card reader.
via Photography Life http://ift.tt/2muTWPF
Ever since Sigma decided to revamp its line of lenses with its “Art”, “Contemporary” and “Sport” editions, we have seen a number of innovative lenses from the company, some of which claimed the “world’s first” title. Sigma has been working hard on producing fast, high-performance and durable lenses for Nikon, Canon and Sigma mounts at very attractive price points, allowing the company to quickly grow and establish itself as a reputable lens manufacturer. Today, the company revealed yet another amazing set of lenses in the form of Sigma 14mm f/1.8 DG HSM Art, 135mm f/1.8 DG HSM Art, 24-70mm f/2.8 DG OS HSM Art and 100-400mm f/5-6.3 DG OS HSM Contemporary – four lenses designed specifically for full-frame cameras.
The Sigma 14mm f/1.8 DG HSM Art claims yet another “world’s first” title, because it is the first 14mm ultra wide-angle lens ever made with an f/1.8 maximum aperture. Looking at the specifications and the MTF chart of this lens, I can see it quickly becoming a favorite among astrophotographers – with an optical formula optimized for extreme sharpness, featuring the same type of aspherical element as Sigma’s 12-24mm f/4 Art (the largest glass mold in the industry), it is supposed to deliver outstanding performance at wide apertures, something astrophotographers always long for.
Sigma claims the lens has virtually no distortion, flare or ghosting, which is very impressive. The optical formula of the lens is pretty complex, featuring three FLD (“F” Low Dispersion) glass elements and four SLD (Special Low Dispersion) glass elements, which not only reduce chromatic aberration, but also help achieve excellent edge-to-edge sharpness. Just like other Art-series lenses, the Sigma 14mm f/1.8 DG HSM will also feature a fast and quiet hyper sonic motor.
It looks like Sigma is taking advantage of Nikon’s timing with its 135mm lens update with this new Sigma 135mm f/1.8 DG HSM Art. Not only is this lens slightly faster than the Nikon 135mm f/2 DC, but it also promises better edge-to-edge sharpness with high resolution cameras up to 50 MP. In addition, Sigma says that the lens features a newer large hyper sonic motor (HSM) that provides more torque for quicker and more reliable autofocus acquisition and the diaphragm is now electromagnetic, similar to what we have been seeing on the latest generation Nikon “E” type lenses such as the Nikon 105mm f/1.4E.
With an optical design featuring two FLD and two SLD elements (and no aspherical elements), the lens should be a superb candidate for portrait photography, delivering excellent bokeh and sharpness wide open. In fact, sharpness-wise, if you take a look at its MTF chart in our lens database for the Sigma 135mm f/1.8 Art, it looks like this lens will be a resolution monster. Its MTF chart looks even more impressive than that of the Sigma 85mm f/1.4 Art, which is supposed to be the sharpest 85mm lens on the market right now…
Sigma has been making 24-70mm lenses for many years, and the new Sigma 24-70mm f/2.8 DG OS HSM Art is a brand-new refresh of that line, now adding better overall performance and image stabilization. In addition to the new look and feel, this fourth-generation 24-70mm f/2.8 features a completely reworked optical formula comprised of three SLD and four aspherical lens elements to deliver a high level of sharpness across the focal range. Sigma promises better coma, chromatic aberration and distortion performance compared to its predecessor, and says that the optical formula was optimized to yield pleasing bokeh.
The Sigma 24-70mm f/2.8 DG OS HSM will feature a similar HSM motor to the Sigma 135mm f/1.8 DG HSM Art, with better torque, in addition to the latest-generation optical stabilizer (OS). The lens has a dust / splash proof design, and the external moving parts are made of a thermally stable composite to resist thermal expansion and contraction, so it should hold up really well in challenging weather conditions. Just like the Sigma 135mm f/1.8 DG HSM Art, the lens will have an electromagnetic diaphragm.
Sigma calls the new Sigma 100-400mm f/5-6.3 DG OS HSM Contemporary “the light bazooka”, since it is supposed to be a versatile lens with a simple push/pull zoom mechanism, a compact size and a total weight of 1,160 grams. Sigma says that the lens is a great all-around performer when it comes to sharpness, focus speed and image stabilization – comparable to the Sigma 150-600mm f/5-6.3 DG OS HSM, minus the size, the bulk and the weight.
Featuring a fairly complex optical formula comprised of 21 elements in 15 groups, four of which are SLD glass elements, the 100-400mm f/5-6.3 DG OS HSM is not just a lens with solid optical characteristics – thanks to its minimum focusing distance of 160 cm (63 inches), it is also supposed to be a fairly good choice for close-up and macro-style photography. Just like the other lenses announced today, the 100-400mm f/5-6.3 DG OS HSM will feature an electromagnetic diaphragm.
Sigma has not yet announced pricing for these four lenses. We will publish pricing information as soon as it becomes available.
via Photography Life http://ift.tt/2mgYX2q
Many photographers enjoy exploring the world around them with macro and close-up photography. The basic difference between these similar genres is the amount of magnification achieved, with 1:1 magnification generally accepted as the threshold for macro photography. Images at this level of magnification also show more detail than is achieved with close-up photography. The camera gear used for macro photography can be quite specialized and costly, which can be a barrier for many photographers. This article features a small selection of close-up images, all of which were shot hand-held in available light using a set-up that cost about $875 CDN including camera body, lens and extension tubes.
The purpose of this article is certainly not to suggest that a set-up in this price range and with a small sensor can match the image quality of a full-blown macro photography set-up…it can’t. Hopefully what this article will demonstrate is that it is possible to have a lot of fun creating pleasing close-up images with a modest investment in equipment. First, let’s have a look at the gear that I used to capture the images in this short article: Nikon 1 J5, 1 Nikkor 30-110mm f/3.8-5.6 VR zoom lens, and 21mm and 16mm MOVO extension tubes for Nikon 1.
I love using this gear set-up for close-up photography as it is small, light and easy to handle. The flip screen on the J5 comes in very handy, especially when shooting down on subjects with the camera raised over my head – the second butterfly image in this article is an example.
I went to the Niagara Butterfly Conservatory for a few hours with the original intent of testing a new B+W NL-5 Close Up Lens that I recently purchased. While it is still early in my assessment, I quickly discovered that I don’t really like using a close-up lens, so I switched to extension tubes to capture the images presented here.
Since many photographers who engage in close-up photography enjoy capturing images of plants and insects this article features a small selection of each.
You’ll notice in the EXIF data that I captured all of the images in this article using an aperture of f/8 at ISO-3200. I decided to risk some potential image softening from diffraction by shooting at f/8 with my small-sensor Nikon 1 J5 in order to get some additional depth-of-field.
My choice of ISO was made to help ensure shutter speeds fast enough to avoid image blur when shooting hand-held. In most cases I used shutter speeds of at least 1/100th of a second, although you will notice a few images captured at slower speeds – some as slow as 1/30. Since I shoot in RAW I never hesitate to capture images with my Nikon 1 gear at ISO-3200, as I can deal with any potential noise in post.
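The ISO-for-shutter-speed trade described above is straightforward arithmetic once you express exposure in stops. As a minimal sketch (the specific shutter/ISO pairs below are illustrative, not values from the article's EXIF), exposure value normalized to ISO 100 stays constant when every stop of added ISO buys one stop of shutter speed:

```python
import math

def ev100(f_number: float, shutter_s: float, iso: float) -> float:
    """Exposure value normalized to ISO 100:
    EV100 = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# At a fixed f/8, raising ISO from 200 to 3200 (four stops) buys
# four stops of shutter speed: 1/8 s becomes 1/128 s (about 1/125)
# at the same scene brightness.
print(ev100(8, 1 / 8, 200))     # 8.0
print(ev100(8, 1 / 128, 3200))  # 8.0
```

Both calls return the same EV, which is exactly why a hand-holdable shutter speed in dim conservatory light costs ISO rather than exposure.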
All of the images presented in this article are full frames as captured without any cropping. I wanted to demonstrate the amount of detail that can be achieved with an inexpensive gear set-up as well as provide examples of hand-held image framing using a non-EVF camera. A tip when framing a close-up photography image is to choose subjects and shooting angles that will incorporate smoother, less busy backgrounds. It can also be helpful to look for subjects that have some direct light on them with shadows in the background. Offsetting your subject to one side can also add some visual interest rather than always having your subject positioned in centre frame. These composition approaches can help accentuate the subject in your photograph, and add a touch of drama.
I used single point auto-focus for all of the images in this article. My Nikon 1 J5 allows me to place a single focusing point virtually anywhere on the rear screen. I find this to be extremely helpful for close-up photography as it eliminates the need to focus and recompose an image when shooting hand-held. I’ve always found using the ‘focus and recompose’ technique distracting.
There is a trade-off when extension tubes are used: a loss of light. On the positive side, extension tubes do not have any glass elements in them that could negatively affect image quality, which, for example, is often the case with tele-converters.
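The light loss from extension can be estimated with the standard bellows-factor approximation. This is a sketch under simplifying assumptions (a symmetric lens with pupil magnification of 1; real lenses deviate somewhat):

```python
import math

def effective_f_number(marked_n: float, magnification: float) -> float:
    """Bellows-factor approximation: N_eff = N * (1 + m).
    Assumes a symmetric lens (pupil magnification of 1)."""
    return marked_n * (1 + magnification)

def light_loss_stops(marked_n: float, magnification: float) -> float:
    """Stops of light lost relative to the marked aperture."""
    n_eff = effective_f_number(marked_n, magnification)
    return 2 * math.log2(n_eff / marked_n)

# At 1:1 (true macro) magnification, a lens set to f/8 behaves
# like f/16 for exposure purposes: two full stops of light lost.
print(effective_f_number(8, 1.0))  # 16.0
print(light_loss_stops(8, 1.0))    # 2.0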
Macro lenses are specifically designed for their purpose and will provide photographers with better overall image quality and detail than when using extension tubes with lenses not specifically designed for macro photography. Having said that, a decent amount of detail can be captured when using an inexpensive set-up with extension tubes as you can see with the debris on the butterfly’s right eye in the image above.
Accepting this image quality limitation, one can still have a very enjoyable time creating pleasing close-up images using a minimum of photography gear as demonstrated in this article. I love to go out with my Nikon 1 J5 set-up and shoot close-up photography subjects as it is an excellent photography practice exercise. I find that this style of photography heightens my awareness of unique lighting situations. It also allows me to pick out small details when scanning an area and to see potential compositions in my mind much faster. This helps to reduce the time it takes me to actually frame and capture an image. Practicing working quickly and efficiently is a benefit for my industrial client work.
I much prefer capturing these types of images hand-held as I find using a tripod is far too restrictive for my rather fluid shooting style.
To try to achieve as much of the subject in-focus as possible it is advisable to use a shooting angle that puts as much of the subject parallel to your camera’s sensor as seen with the butterfly image above. I purposely captured this image shooting upwards toward the butterfly, rather than head-on, so I could get its head and more of its legs and body in focus. Knowing your gear is an important factor in achieving decent results, as is your hand-holding technique.
My Nikon 1 J5 does not have an EVF. This is not any kind of impediment when capturing close-up images, as I always compose from the rear screen of a camera when doing this type of photography. The photograph above of an Atlas Moth was one of my favourite images captured during the 2 1/2 hours I spent at the Niagara Butterfly Conservatory creating images for this article. My blog has over 20 articles featuring examples of close-up photography for folks who are interested.
All images in this article were produced from RAW files using my standard process of DxO OpticsPro 11, CS6 and Nik Suite.
Article and all images are Copyright 2017 Thomas Stirr. All rights reserved. No use, adaptation or reproduction of any kind including electronic or digital is allowed without written permission. Photography Life is the only approved user of this article. If you see it reproduced anywhere else it is an unauthorized and illegal use. Calling out individuals who steal intellectual property by having Photography Life readers post comments on offending websites is always appreciated!
via Photography Life http://ift.tt/2mvaB5D
I appreciate your feedback and comments! If you wish to contact me for any reason feel free to send me a Flickr mail or message me on any other social media and I’ll reply as soon as I can.
If you like this or any of my other images, prints are available from my site at http://ift.tt/2aSv0xW.
This photo link was provided by the RSS feed: Daily most interesting photo – Flickr http://ift.tt/2mgGeUI
Previously, when you and your clients purchased a variety of products from Mpix the order was sometimes split into multiple shipments (and with that, you paid multiple shipping charges). Now, no matter what you order from Mpix—prints, frames, magnets, calendars, greeting cards or even color correction—all your items will be shipped together for a single shipping fee if they are ordered at the same time. Nice!
This currently applies to products and prints ordered from Mpix. Products ordered from separate labs such as Millers and MpixPro will still ship separately.
Read more about split shipping charges in this support article.
via ZenBlog: Blog http://ift.tt/2lOOCd0
I have a confession: I have a love/hate relationship with seascapes. Sometimes I feel like they are easy, monotonous and a little bit of a sham — an easy way to impress others without having to put in the requisite amount of work. On such days I vow never to shoot a seascape again. And then there are moments when I cannot resist the pull of the ocean. When the sky looks like it will go up in flames at sunset, casting its glow on everything the light touches — the sand, the rocks, the water. When the tide is high and the waves crashing violently into the surf promise dramatic foregrounds. ‘This will be my greatest seascape,’ I tell myself as I pick up my gear and head for the beach, bracing for the challenge ahead.
By now the reader has probably picked up on the contradiction. How can something be easy and boring and challenging and exciting at the same time? I would argue that it can be.
The truth is that seascapes do simplify the most critical aspect of photography: the composition. A typical ocean-front vista is so inherently interesting and appealing, and the shapes of the constituent elements (shoreline, rocks, sand, waves) so undefined, that the photographer has tremendous freedom in how to frame the shot.
The photo above was taken on a day when the sunset had completely fizzled out. The thick clouds had flattened the dynamic range, so all I had to do was point and shoot to capture this image. I returned home disappointed but, out of curiosity, posted this image on social media to see how it would be received. To my surprise, it garnered far more accolades than images that I believed were far more creative and challenging to execute.
At the same time, the most interesting seascapes often involve getting up close and personal with those nemeses of cameras and lenses: water & sand.
Soon after I took this picture, a rogue wave came, swept right past me and my gear, and slammed into the rocks behind me, leaving me drenched on the rebound.
While depth-of-field is a challenge for all types of landscape photography, it is especially difficult in seascapes. So much of what’s in the frame is in motion, making techniques like focus stacking impossible and forcing the photographer to get it right on the spot.
More often than not, a very promising sunset will fizzle out, forcing the photographer to get creative in other ways, using wave motion and light to create interest.
Pier shots are quite often a safe choice, with the pier providing a convenient, frame-filling foreground for the dramatic sky. It is not wholly without challenges, however, as most compositions have a significant portion of the pier covering the background, making it difficult to use graduated filters to reduce contrast. Also, in my experience, piers offer limited composition opportunities.
Quite often seascapes involve slowing down the shutter speed to create that silky water motion, thereby requiring the use of tripods. It is not quite that easy though, because conditions are often windy and the tripod will need to be placed in moving water for the most interesting compositions. Therefore, the tripod needs to be stable as well as durable (salt water and sand are not kind to tripods).
A low tide is not all bad. When the water recedes, it reveals a diverse, alien landscape below that can make for some interesting photography.
Sometimes, all the stars align perfectly and I manage to make the kind of capture that is going to motivate me to keep going back for more.
At the end of the day — photography or not photography — sunsets at the beach are one of nature’s most remarkable spectacles and I realize what a privilege it is to be able to partake on a regular basis — something for which I will be eternally grateful.
via Photography Life http://ift.tt/2l92ACr
This photo link was provided by the RSS feed: Daily most interesting photo – Flickr http://ift.tt/2mqze3p
Why is it so difficult to capture the cozy ambiance of a cafe in a picture? Or the casual atmosphere of a warm bonfire with friends on a summer night? Learning how to capture mood and atmosphere of a scene is a skill that is elusive for many photographers.
This is because the finished product isn’t only about getting the technical settings and composition correct. The image needs to evoke something in the senses; it has to capture the visceral aspects of a scene, the sights, sounds and smells so that every time you look at the picture, you are brought right back into the moment.
As always, rules in photography are made to be broken. So this list is meant to help you explore the creative aspect of how to capture mood rather than a firm lecture on how x will help you accomplish y.
Here’s a rundown of some of the things to consider when you’re trying to capture the mood, atmosphere, and emotion of a setting. Your goal: looking at the picture later should bring you right back into the moment.
Photography is artificial. That little black box that you use to take pictures is necessarily always between you and the subject. That’s why it is so impressive to see photographers who can take incredibly natural pictures – almost as if a camera wasn’t even involved in the process.
When capturing a moment, your goal should be to take a candid photo, where your subject(s) are unaware of the camera. This helps to create a final image where the viewer feels like a fly on the wall. A picture where everyone is staring straight at the camera, on the other hand, pulls the viewer out of the moment and draws attention to the artificiality of the process.
Walk into a room with a camera and you can see how everyone changes the way they smile, their posture, etc. Everyone wants to look good for the camera. But when everyone is super aware of the camera, the mood of the moment is lost.
Of course, it’s not always an option to take a candid photo. This is where you need to have the skill to make a natural picture by giving direction or helping the subject feel comfortable to the point that the shot looks real, rather than staged.
Lighting always plays a huge role in your image. To capture the atmosphere of a specific moment, your goal should be to emphasize that lighting as much as possible. Typically, a warm or cozy setting will involve soft lighting. For example, with a summer evening comes soft orange light and a radiant glow outlining people lit by the sun.
So how can you show this? Experiment with shooting with the sun behind your subjects. A camera on automatic mode will struggle with this and will make your overall exposure too dark. Try either adjusting your exposure compensation to shoot a brighter picture, or go full manual and explore the creative possibilities!
Shooting into the sun also often results in lens flare – and you can use this effect to your benefit as well. Lens flare can help add a real mood of summer and warmth to a picture.
Low light pictures can also really stand out. The soft glow of a bonfire or candlelight often throws deep and intriguing shadows. To capture this, you need to consider the direction of the light. Someone looking away from the light source will have their face in deep shadow – and it likely won’t make for a very interesting image. But, by turning them back towards the light, you can really bring out texture and personality.
In low light, your camera will often tell you there isn’t enough light and will flip on the pop-up flash. What should you do then?
Using the flash on your camera is a sure way to add an unnatural feeling to an otherwise warm and cozy atmosphere. The main reason is that light comes in different color temperatures. Some types of light look warmer; some look colder.
The light from your flash is balanced to match the type of light you’d find under the midday sun (daylight). Light from a bonfire or candle, however, contains a lot more orange. The light from your flash will look very blue in comparison, and this mismatch of colors is easy to recognize in the finished image.
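The mismatch described above can be put in rough numbers. The Kelvin values below are approximate reference figures chosen for illustration (not measurements from the article); lower values render more orange, higher values more blue:

```python
# Approximate correlated color temperatures of common light sources, in Kelvin.
# Lower Kelvin reads warmer (more orange); higher reads cooler (more blue).
SOURCE_TEMPS_K = {
    "candle flame": 1900,
    "bonfire": 2000,
    "tungsten bulb": 2850,
    "electronic flash": 5500,
    "midday daylight": 5500,
    "overcast sky": 6500,
}

def warmer_than(a: str, b: str) -> bool:
    """True if source a renders warmer (lower temperature) than source b."""
    return SOURCE_TEMPS_K[a] < SOURCE_TEMPS_K[b]

# A daylight-balanced flash sits thousands of Kelvin above candlelight,
# which is why mixing the two is so easy to spot in the finished image.
print(warmer_than("candle flame", "electronic flash"))  # True
```

The gap between a roughly 1900 K candle and a 5500 K flash is what makes the flash read as blue against the ambient glow.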
Light from the flash also comes from nearly the same angle as the lens. Since we don’t normally view people or objects with light coming from the same angle as our eyes, this looks strange. It also flattens the shadows and textures that give an image a sense of dimension.
Of course, the reason your camera will want to use flash is because there isn’t much available light. This brings us conveniently right to the next point…
If you can’t add light with flash, you’ll need to find another way to collect enough light to capture the image. This can be done by opening up the camera’s aperture. Aperture is measured by f-stops, with a lower f-stop number (like f/4) meaning that the lens is opened wider to let in more light.
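The relationship between f-stop and light gathered follows from aperture area, which scales as 1/N². A minimal sketch of that arithmetic (the example f-numbers are illustrative):

```python
import math

def light_ratio(wide_n: float, narrow_n: float) -> float:
    """How many times more light the wider aperture admits.
    Light gathered scales with aperture area, i.e. 1 / N^2."""
    return (narrow_n / wide_n) ** 2

def stops_between(wide_n: float, narrow_n: float) -> float:
    """The same ratio expressed in stops (each stop doubles the light)."""
    return math.log2(light_ratio(wide_n, narrow_n))

print(light_ratio(4.0, 8.0))    # 4.0 -> f/4 admits 4x the light of f/8
print(stops_between(2.8, 5.6))  # 2.0 -> exactly two stops apart
```

This is why each full stop in the standard sequence (f/1.4, f/2, f/2.8, f/4, …) is a factor of √2 in f-number but a factor of 2 in light.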
Prime lenses, or lenses that don’t zoom, can typically open to a wider aperture. For this reason, they are an ideal choice for capturing the atmosphere of a setting when there isn’t much light to work with.
Besides just gathering more light, a wide aperture will give your image a more precise point of focus (shallow depth of field). Whether the focus is on a person or a detail, the viewer gets a sense of being close and intimate with the scene.
The bokeh, or out of focus area created by using a wide aperture, also throws the background into a creamy blur, which both helps to remove any clutter from the shot and lets our imagination wander to fill in the blanks.
Whether you are using a wide aperture or not, you’ll want to show the setting to give the viewer a clear sense of place. Capture the details that make the setting memorable and put everything into context.
A technique I like to use is to include an object or person in the foreground of the shot. By framing the shot with foreground elements, I can create the illusion of being a participant in the event. This technique also gives a strong sense of depth to the image, which can help make it a more memorable photo.
More often than not, our fondest memories are closely tied together with the people we experienced them with.
For this reason, a good way to capture the essence of a moment is to get a shot of people interacting with each other. It can be through buoyant smiles, a tight hug, or a tear of joy rolling down a cheek.
It isn’t always easy to spot these little moments, and they tend to disappear quickly. It takes a bit of observation and creativity to find the moments that really bring out the drama or happiness of a scene.
Maybe you can’t capture sound and smell with a photo – but you can appeal to those senses by bringing attention to details that are familiar and remind us of a distinct sound or smell.
The sharp texture of stone or the gritty feeling of sand are very familiar to us, so having those textures prominent in a picture helps us experience the image more strongly.
Often, you can really bring out the mood of a shot during the editing process. Whether you are using Photoshop or a simpler editing program, here are some tips for emphasizing the style you want in your final image.
Color is important for establishing the mood of an image. Muted or darker colors can give a feeling of reflection, sadness, or calm. Brighter and vibrant colors, on the other hand, suggest happiness.
A picture’s white balance can be set or adjusted to make an image feel warmer or cooler. The difference between a warm summer evening and a cool winter’s night should be evident in your pictures.
White balance works on a sliding scale from yellow to blue. Experiment to find the right setting for your image. If you shoot in RAW, you will be able to freely adjust your white balance without any quality loss in your picture. If you shoot JPG, there won’t be nearly as much leeway.
Some editing styles can help invoke a sense of nostalgia. The “film look” adds a feeling of timelessness to a picture, even to those who are too young to remember the days of taking and developing pictures on film.
If you want to play around with this style, there are many different presets and filters that can get you started. This style will typically desaturate colors, remove some contrast, and add some grain.
Converting your image to black and white can also give your photos this sense of nostalgia. Play around with your edit and see what you can come up with!
So good luck with your practice of taking images that capture the mood, atmosphere, and emotion of a scene. Until scientists invent a time-machine, it’s the best way we have to travel back and experience a friendly place or memory once again.
via Digital Photography School http://ift.tt/2mcPg4S