On Comparing Full Frame and
APS-C DSLR Camera Formats
by Sarah Fox

INTRODUCTION: “Should I upgrade to a full frame camera?” I’ve seen this question posed countless times, and the advice received is usually a mix of good information, bad information, and curious myths. There seems to be no authoritative treatment of this subject, so I thought I’d give it a whirl. Heck, I’m driving cross-country right now, pedal to the metal, currently on Oklahoma’s I-40, so I have nothing better to do! (Don’t worry. I’m in the passenger seat.)

This discussion will have an emphasis on comparing the two most popular DSLR formats -- APS-C and “full frame” (FF) -- but it also includes general information for comparing other formats. I hope to include everything you might want to know about comparing formats, so if I’m missing anything, please let me know. I try to keep this article updated as new comments come in and as I read discussions of factors I hadn't included. (Please note I will include a discussion of every factor that is brought to my attention, but I do reserve the right not to agree with every comment offered.) My discussion will be a bit Canon-centered (e.g. with 1.6 crop factors), because I am a Canon photographer, but Nikonians and Sonians can extrapolate my arguments to their respective camera lineages.

Finally, this article is a bit technical, and I only do a little bit of hand holding to walk people through the ugly details. Sorry about that. This article is best suited for someone with an intermediate to advanced knowledge of photography. Of course a discussion of differences between FF and APS-C is hardly for beginners anyway.

THE FORMATS: Some formats have names that describe their sizes, like 4”x5” or 8”x10”. Other formats have names that almost describe their sizes, like 35mm. The 35mm format actually utilizes a 35mm wide strip of film, but part of that width is taken up with sprockets for advancing the film. The actual image area is 24mm wide by 36mm long. This is the same format size as that of a “full frame” (or FF) digital sensor.

The most popular DSLR sensor format is the APS-C, which is approx. 15 x 22.5mm in the Canon lineage and 16 x 24mm in the Nikon lineage. For various reasons this format is much easier and cheaper to produce and is therefore more within reach of the amateur/enthusiast. APS-C and FF both have their loyal followings. Looking into my crystal ball, I don’t see either format disappearing from the digital landscape.

RELATIVE COST: A full frame sensor has 156% more area than an APS-C sensor. One might therefore think it should have a 156% higher production cost. In fact the difference is much greater than that, because larger chips (e.g. sensors) have a higher rejection rate than smaller chips in the manufacturing process. Chips are manufactured on circular silicon wafers. The wafers are very pure, but they have occasional defects in their crystal structure that will compromise the functioning of a chip. Assume an average wafer has 10 defects. (I have no idea how realistic this might be.) If you’re going to lay out a thousand tiny chips on the wafer, you’ll end up rejecting about 10 of the 1000 chips (1%) because of these defects. If you’re going to lay out 10 chips, you’ll probably end up rejecting most of them. If you were hypothetically able to lay out a chip occupying the entire wafer, your reject rate would be nearly 100%. Therefore, as the chip size increases, the yield per wafer drops and the rejection rate rises. This, by itself, makes a 156% larger chip much more than 156% more expensive to manufacture. However, that’s not the only difference in manufacturing cost.
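
For the geeks: here's a minimal sketch of the standard Poisson yield model, using my hypothetical 10 defects per wafer (the numbers are illustrative, not real fab data):

```python
import math

def good_chip_fraction(defects_per_wafer: float, chips_per_wafer: int) -> float:
    """Poisson yield model: the chance a chip catches zero defects,
    assuming defects land uniformly at random across the wafer."""
    defects_per_chip = defects_per_wafer / chips_per_wafer
    return math.exp(-defects_per_chip)

for chips in (1000, 10, 1):
    good = good_chip_fraction(10, chips)
    print(f"{chips:4d} chips/wafer: {good:6.1%} good, {1 - good:6.1%} rejected")
# 1000 chips/wafer:  99.0% good,   1.0% rejected
#   10 chips/wafer:  36.8% good,  63.2% rejected
#    1 chips/wafer:   0.0% good, 100.0% rejected
```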

APS-H (1.3 crop), APS-C, and smaller sensors can be manufactured with single lithographic projections, whereas FF sensors cannot. At one time, FF and larger formats had to be manufactured with multiple connecting images. This required more steps in the manufacturing process and also required very exacting alignment of the lithographic images. So besides being more labor intensive to manufacture, FF sensors were more likely to end up in the reject pile because of misalignment of images. More recently, FF sensors have been imaged using stepper projection. The equipment used for this process is more costly, but the labor savings and reduced reject rate result in lowered production cost. Even so, with an increased reject rate, decreased yield, and higher manufacturing equipment cost, full frame sensors are still very substantially more costly to produce than APS-C sensors and will surely remain so. That, of course, is why FF cameras are so much more costly. What does this more substantial investment get you?

SENSITIVITY AND NOISE: The most universally understood and oft-cited difference between digital formats concerns noise and low-light sensitivity. People with even a basic understanding of photographic equipment will tell you that larger sensors are more sensitive and can operate at higher ISO settings with similar or less noise than smaller sensors. In general this is correct, but why is it so?

Light travels in discrete packets called photons. Photons are a bit like raindrops. A single raindrop in the middle of a downpour might not seem like much. It gets lost in the deluge and might be summarized as an infinitesimal part of “1.5 inches of rainfall.” However, when the storm clouds first appear, and when those first few drops fall from the sky, each drop becomes more significant. There’s a big difference between 1, 2, or 3 drops at the beginning of a storm. This is the difference between light representation in bright light and light representation in extremely dim light.

How much do the individual photons really matter in the real world? Can we actually see an individual photon? Can a camera sensor actually detect a single photon? Actually, yes, but just barely. Consider the spotty, green images from a night vision scope. We can actually barely see what a night vision scope can see. The night vision scope just makes it easier to see. The individual spots in a night vision image are indeed from individual photons, amplified greatly.

In digital imaging at high ISO settings, there is quite a lot of shadow detail that arrives in the form of individual photons. It is not unusual for the faintest parts of an image to be formed by the capture of only a few photons per pixel. When so few photons are captured per pixel, each photon becomes much more significant, and the difference between 3 photons and 4 photons captured at neighboring pixels is seen as noise in the image. By contrast, the difference between 3000 and 3001 photons would hardly be noticed.
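
To put rough numbers on this, photon arrivals follow Poisson statistics, so the noise in a count of N photons is about the square root of N. A quick illustrative sketch:

```python
import math

# Shot noise: photon counts are Poisson-distributed, so the standard
# deviation of a count N is sqrt(N), and the signal-to-noise ratio is
# N / sqrt(N) = sqrt(N).
for photons in (4, 100, 3000):
    noise = math.sqrt(photons)
    print(f"{photons:5d} photons: noise ~ +/-{noise:5.1f}, SNR ~ {photons / noise:4.0f}:1")
# At 4 photons a +/-2 swing is glaring; at 3000, a +/-55 swing is invisible.
```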

Obviously the way to reduce noise in an image is to maximize the number of photons captured. There are at least a few ways this can be done. One is to increase the efficiency of the sensor, so that the probability of capturing a photon is higher. Progress will steadily be made in this area as technology marches on.

A more useful approach, however, has been to increase the size of each pixel’s photosite (the active area over which light is gathered). There are at least a few approaches along this line. One approach is to minimize wasted space between photosites, so that as large a percentage of the sensor area as possible is devoted to light gathering. CCD sensors are actually 100% covered in active photosite areas. CMOS sensors, by contrast, have gaps between photosites. Advancing technology has resulted in the reduction of these gaps. At the same time, the gaps consume a larger fraction of the sensor area as pixel density (i.e. the megapixel count) increases.

A related technique is to gather light over a larger area per pixel and then to focus it onto the smaller photosite. This is done with microlenses that hover over the photosites, bringing the effective photosite coverage much closer to 100%. Extraordinarily high efficiency can be achieved this way, making the most modern crop of high-end DSLR cameras capable of previously unheard-of ISO settings.

Finally, and most importantly to this discussion, larger photosites are achieved with larger sensors. I suppose that’s obvious. Just like a larger bucket will collect more raindrops in a rainfall, a larger photosite will collect more photons. If you have twice as large a photosite, you’ll collect twice as many photons. Looking at it another way, if you operate a sensor with double the photosite size at one stop higher an ISO (and therefore half the exposure), you’ll collect the same number of photons. Therefore, you’ll have similar noise at a 1 stop higher ISO. How big is this advantage between APS-C and FF? Comparing two cameras of the same number of megapixels and similar sensor architecture, the advantage is a factor of 2.56 (the square of Canon's 1.6 crop factor), or about 1.36 stops.
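
The arithmetic behind those numbers, as a short sketch:

```python
import math

crop_factor = 1.6                    # Canon APS-C
area_ratio = crop_factor ** 2        # photosite area advantage at equal megapixels
stops = math.log2(area_ratio)        # one stop = a doubling of light gathered
print(f"{area_ratio:.2f}x the photons per pixel = {stops:.2f} stops")  # 2.56x = 1.36 stops
```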

I should mention here that more megapixels means smaller photosites, and that means more noise and lower sensitivity (lesser high-ISO capability). This might or might not be a problem, as eyes and printers can also smoodge together a plethora of microscopic pixels, making the higher noise less significant. On the other hand, color noise remains a significant problem and can result in patchiness in the color representation. On a personal note, I rather like 12 MP sensors. Their resolution is sufficient for my needs, and the photosites are sufficiently large to yield relatively higher sensitivity and higher-quality shadow detail.

DYNAMIC RANGE: Comparing two cameras of the same number of megapixels, the same generation of sensor and processing algorithms, and the same sensor operating voltages, but of differing formats, the larger format camera will be capable of (but might not necessarily deliver) higher dynamic range than the smaller format camera. That's because the larger photosites will have a greater electronic "well depth" and will be able to capture more photons before saturating. Looking at this another way, the noise of the larger pixel will be lesser (see the previous section), so the sensor will be able to register better shadow detail, hence larger total dynamic range.
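
Here's a hedged sketch of how well depth translates into dynamic range. The well-depth and read-noise figures are purely illustrative, not the specs of any real camera:

```python
import math

def dynamic_range_stops(full_well_electrons: float, read_noise_electrons: float) -> float:
    """Engineering dynamic range: the ratio of full-well capacity
    (saturation) to the noise floor, expressed in stops."""
    return math.log2(full_well_electrons / read_noise_electrons)

# Doubling the photosite area roughly doubles the well depth, which is
# worth about one stop of dynamic range if the noise floor stays put.
print(f"{dynamic_range_stops(30_000, 5):.1f} stops")   # 12.6 stops
print(f"{dynamic_range_stops(60_000, 5):.1f} stops")   # 13.6 stops
```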

RESOLUTION: Many claims are made that FF cameras yield higher resolution than APS-C cameras. Do they? Well, that depends on the optics. This is a really squishy subject, but perhaps I can discuss a few of the more relevant factors. It mostly has to do with the difference between EF (Nikon = FX) and EF-S (Nikon = DX) lenses. EF-S lenses are smaller lenses with smaller image circles, specifically designed for APS-C cameras. They cannot be used on FF cameras. (Even though they may sometimes mount up, the image will not fill the focal plane.) EF lenses, on the other hand, are designed for FF cameras and can be used on either FF or APS-C cameras. They form a larger image circle that is sufficient for an FF camera and overkill for an APS-C camera.

Many will argue that there is an advantage to putting an EF lens on an APS-C camera, because the “sweet spot” of the image fills the focal plane. They argue on that basis that APS-C cameras will deliver higher resolution and sharper images than FF cameras. This is a flawed argument if one considers two cameras of differing formats but with the same megapixel count. The pixel density on the APS-C camera will be higher. If the EF lens is able to outresolve the sensor, then pixel density shouldn’t matter too much. Either image should appear about as sharp. However, if the resolution of the lens is less than that of the sensor, the size of the blur in a “sharp” edge will be larger in the APS-C image, with respect to the total frame.

Looking beyond this factor, what does it really mean for the “sweet spot” of an EF lens to fill the focal plane of an APS-C camera? For those unfamiliar with the “sweet spot” concept, lenses form better images at the image center than at the edges/corners. The relatively large central area with higher image quality is called the “sweet spot.” One would think that there would be an advantage to be had from using just the sweet spot of an EF lens, rather than using the entire image from an EF-S lens. After all, the EF-S lens has a smaller sweet spot, and the less-than-sweet spots are included in the image.

However, it’s not as simple as that. Lens design involves a large number of trade-offs and compromises. Engineers must rob a lens design of a lot of sharpness in order to create a larger image circle, so I would argue that the sweet spot of an EF lens is probably not as sweet as the overall image of an EF-S lens. Don’t believe me? Then consider an extreme example. Imagine placing the micro-sensor of a typical compact digital camera in the sweetest part of the sweet spot of an EF lens’ image. With its absolutely crazy pixel density, it’s going to try to resolve detail that the EF lens can’t even begin to produce, and the image will look rather blurry. Now put the equivalent focal length EF lens on a full frame body. (It will probably be some big telephoto.) With even the crummiest Optika brand telephoto lens, you’ll still see a much more usable image.

The reason the micro-sensor wouldn’t perform well with an EF lens is that so many compromises are made to the lens design to create that large image circle that the micro-sensor won’t ever see. If, instead, the lens were redesigned to project a very small image circle of very high quality, then the resulting lens/sensor combination would yield a much higher quality image.

In the end, are there resolution differences between cameras of different formats? Well, not really, for the most part, provided certain conditions are met: (1) The cameras have lenses mounted up that are designed for (and therefore well matched to) their respective formats. (2) Both cameras have similar megapixel counts. (3) Both sensors have similar architecture. (4) Cameras and lenses are of similar quality between formats. In summary, I’ll go on record here to say that APS-C cameras with EF-S lenses can produce images of similar resolution to FF cameras with EF lenses, all other things being equal. Please focus on the phrase, “CAN produce.” This is important, as will be discussed in sections to follow.

SHORT BACK FOCUS (CANON EF-S) OPTICS: Unlike other manufacturers, Canon has a different mount standard for their APS-C lenses. A Canon EF-S lens (with the "S" designating "short back focus" -- or shorter clearance between the rearward element and the focal plane) will not physically mount to a Canon FF camera body, but an EF lens will physically mount to either an FF or APS-C body. The reason for this is that the EF-S lenses are designed with rear lens elements that extend farther back into the mirror box. They can extend farther back because of the smaller mirror of the APS-C body and hence the more permissive mirror clearance distances. The "short back focus" aspect of Canon's EF-S lenses is very important when considering wide angle lenses. These lenses must incorporate retrofocus designs that provide for distances between lens and focal plane that often far exceed their focal length. Although engineers are able to create space for mirrors and other mechanical elements with retrofocus designs, it is not without a cost to the complexity, size, weight, cost, and performance of the lenses.

The short back focus specifications of Canon's EF-S lenses minimize the retrofocus requirements of the optical design, particularly with regard to extreme wide angle lenses. This results in a true "native-format" design that is properly optimized for the Canon APS-C cameras. Other manufacturers, most notably Nikon, do not use this sort of dual-mount standard. Nikon DX lenses, for instance, will physically mount up to a Nikon FF camera, and the large mirror will clear the rear element. However, the lens will still be incompatible, producing an image circle that does not cover the focal plane. While a DX lens might sometimes be useful on an FX camera, most photographers would choose to use an FX lens instead. The mount compatibility of the DX lens with the FX body comes at a cost, as it forces the DX optics to incorporate stronger retrofocus designs that come with the costs noted above.

One would think that Canon photographers, for instance, would not be hurt by the excessive back focus distances of other manufacturers' APS-C format lenses. This would be true within the context of Canon lenses. However, third-party lenses, such as from Sigma, Tamron, Tokina and others, must accommodate the larger mirror clearance distances of FF cameras even in their APS-C lenses. They are not likely to produce an EF-S compatible lens design for a Canon that will not also work with a Nikon. As a result, third party APS-C lenses share the Nikon DX disadvantage with respect to shorter back focus Canon EF-S lenses.

Does this mean that Nikon DX lenses and third party APS-C lenses don't have a native advantage over FF lenses for the APS-C format? No. The retrofocus requirements of a lens are only one "compromise" in its design. Image circle size is yet another, and there would be no differences between Canon's short back focus designs and Nikon's longer back focus designs in this regard. Moreover, the retrofocus requirements of a lens are problematic mostly at wider angles. Canon's EF-S standard probably yields little advantage on the telephoto end.

For a more in-depth discussion of FF vs. APS-C format optics and, more specifically, of digital-specific lens properties (not discussed here), please see Joseph Wisniewski's excellent discussion here.

L OPTICS (FOR CANON PHOTOGRAPHERS): As of this date, Canon has yet to introduce a professional-quality, L-series EF-S lens. That's not to say that they won't introduce this line or that they don't already have EF-S lenses with excellent optics comparable with L-series EF lenses. However, the EF-S lenses they produce do not have the same heavy-duty build quality (i.e. mechanics), nor do they have the moisture/dust seals of the L lenses. With this in mind, Canon's very best lenses (their L lenses) on their native-format full frame bodies would probably yield the highest possible resolution in general. Thus the resolution equivalency I suggested in the above section may be misleading at this time. Comparing a consumer EF lens on a full frame camera to a consumer EF-S lens on an APS-C camera, there may be resolution equivalency. However, there is generally no EF-S counterpart to compare against an L-series EF lens on a full frame body. Of course this whole argument presupposes that L lenses are optically superior to non-L lenses, which is another whole area of debate. I feel this is usually true, but many consumer lenses do deliver L-quality images, simply lacking other L-series attributes. Anyway, if L lenses are as special to you as they are to me, just know that they are not optimized for APS-C bodies and may therefore be at a slight disadvantage on that format.

DEPTH OF FIELD: One of the more commonly understood differences between full frame and crop format cameras is the depth of field they yield for a given field of view and aperture. This difference is humorously reflected by the various reportings of ghosts or orbs in flash pictures taken with compact digital cameras. The so-called orbs are actually specks of dust brightly illuminated in very close proximity to the lens, but much more mystical explanations are often offered. For some reason ghost hunters are unable to determine, these paranormal phenomena can only be photographed with direct flash and tiny-sensor cameras! Seriously, larger format cameras yield shallower depths of field when shooting the same scene from the same perspective and with the same lens aperture, so they are much less prone to these occurrences. In fact for every doubling of the size of the format (in linear dimensions), depth of field becomes two stops shallower. But why does this happen? To understand this, it is first necessary to understand the concepts underlying Depth of Field (or DoF). This will almost certainly make your head hurt (as it sometimes does mine). You can either trust me on this point and skip to the next section, or you can wade through the arguments as to why this is so. I'll only add that a solid understanding of DoF and blur will help you to make some very shrewd focus and aperture decisions in the field, particularly if you shoot film and have no image playback ability.

Most people with a rudimentary knowledge of photography have already been introduced to the basic concept of Depth of Field, and I'll assume the reader has that starting point. What most people do not know is that the concepts of "in focus" and "out of focus" are sort of fuzzy, so to speak. There seems to be a common presumption that a depth of field from 8 to 10 ft, for instance, means that everything from 0 to 8 ft will be very fuzzy, everything from 8 to 10 ft will be crisp and sharp, and then everything beyond 10 ft will be very fuzzy again -- as if by magic. In truth there is very little difference between the sharpness at 7'11" and 8'1". It's just that the sharpness at 8 ft is deemed to be just sharp "enough," most often by someone else's standard of acceptable sharpness. The "so-what" of this is that the DoF markings on the side of some lenses (usually the old, manual focus ones) are a reflection of someone else's concept of acceptable sharpness. Moreover, they are specific to format, which I'll discuss.

The most intuitive way to look at DoF is through the "object field method" discussed by Harold Merklinger. I strongly suggest reading his excellent articles here. I will take the liberty here of adapting and somewhat modifying his arguments to my approach. Consider the following ray path diagram, loosely derived from Merklinger's Part III, Figure 1:

The diagram shows how light from three points will be focused on the focal plane. The white dot to the right of the lens is perfectly focused, and light from that dot converges perfectly into one point on the camera's focal plane on the left. The red and green dots, on the other hand, are behind and forward of the focal point, respectively. The upper diagram shows the formation of disc-shaped "virtual images" from the red and green dots at the position of the white dot where the lens is focused. (These discs are diagrammed in profile.) The bottom figure then shows in yellow ray paths how light from the virtual discs would be focused onto the camera's focal plane. Interestingly, if a film camera were to take a slide photograph of the three dots, and if that slide were projected from the "focal plane" to a screen located at the "focal point," the disc-like projections of the out-of-focus red and green dots would be identical to the original virtual images.

I won't go into a blow-by-blow description of how these virtual images would be formed (although it should be obvious with some reflection). It suffices to say that these diagrams illustrate the geometry of the size and distance relationships of the various objects, images, and virtual images. The thing I most want the reader to understand from my diagram (and similarly Merklinger's) is this: Given a three-dimensional scene and a fixed camera location with a given perspective, the size of the virtual image (the blur pattern) of every object and its projection on the focal plane is directly proportional to the effective diameter (or aperture) of the lens. This point is central to Merklinger's DoF articles and forms the basis of the methods he espouses.

Depth of field is really a very simple extension of this point. It is arbitrarily defined on the basis of what is called the Circle of Confusion (CoC), which has to be one of the most ironic terms in photography. ("Come, join the circle of confusion! Let me melt your brain!") The CoC is the disc-shaped image area on the focal plane formed by light from an out of focus point of light. When defining the limits of the DoF, we have to specify how large we are willing to allow our CoC to be before we call the image "out of focus." Obviously the smaller the maximum CoC we specify, the shallower our DoF is going to be. If, on our diagram above, we define a maximum circle of confusion corresponding to the spread of the yellow ray path on the focal plane, then the depth of field would extend between the positions of the red and green spots. See how that works?

In practice, the CoC relates mostly to the blur we might see in a print, as extrapolated back to the focal plane projection. This bears an important relationship to format. Obviously images of smaller formats must undergo greater enlargement to create the same size of print, so the criterion CoC must be smaller for a smaller format, in order to yield a print of similar appearance. In fact that criterion value should be directly proportional to the dimensions of the format.
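
Here's a tiny sketch of that proportionality. The 0.030 mm full frame criterion is a widely used convention, not something derived here:

```python
# The criterion CoC scales with the format: a smaller sensor needs a
# smaller CoC to look equally sharp after the greater enlargement to
# the same print size.
full_frame_coc_mm = 0.030            # a commonly used full frame criterion
crop_factor = 1.6                    # Canon APS-C
aps_c_coc_mm = full_frame_coc_mm / crop_factor
print(f"APS-C criterion CoC: {aps_c_coc_mm:.3f} mm")   # ~0.019 mm
```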

Now let's apply what we know to find the solution to a specific problem: We have a full frame camera and an APS-C camera, both perched on tripods right next to each other (as close as possible), both looking at the same 3-dimensional scene. We want the depth of field to appear the same for both cameras in the hypothetical 8x10 print that we're going to make. How do we make that happen? Well, we learned above that the appearance of the blur (the virtual image) will be the same if the lens aperture is the same. Let's say, hypothetically, we get the blur pattern we want with an aperture of 10 mm in diameter in both cameras. OK, that's a start.

Now that the projections appear the same in their blur patterns, we must also make them equivalent in size. This is where the "crop factor" of the camera comes into play. Canon APS-C cameras have a 1.6 crop factor, which means that the sensor is 1.6x smaller than a full frame ("35mm") sensor, and any lens you mount on it has an "equivalent" focal length that is 1.6x its stated focal length. So a 40mm lens on an APS-C camera yields the same perspective and field of view as a 40 x 1.6 = 64 mm lens on a full frame camera. Fine, then. We'll just put a couple of zooms on our two cameras. We'll set the lens on the APS-C camera to 40 mm and the lens on the full frame camera to 64 mm. And remember, to keep our blur patterns the same, we're going to have a 10 mm diameter aperture on each lens. Then we'll be done. The field of view will be the same, and so will the blur pattern. Both setups will yield identical 8x10 photographs.

But how do we set our 10 mm aperture? We have to express this diameter in terms of the aperture ratio. In the case of the full frame camera, the aperture ratio would be 64 / 10 = 6.4. In the case of the APS-C, it would be 40 / 10 = 4.0. Now we dial in our f/6.4 and f/4.0, and we're done.
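
Here's the same worked example in code: match the blur by matching the physical aperture diameter, then express that diameter as an f-number for each format.

```python
# Match depth of field across formats by holding the physical aperture
# diameter constant and converting to f-numbers (f-number = focal / diameter).
aperture_diameter_mm = 10.0
ff_focal_mm = 64.0                   # full frame lens
aps_c_focal_mm = 40.0                # same field of view on a 1.6-crop body
print(f"Full frame: f/{ff_focal_mm / aperture_diameter_mm:.1f}")    # f/6.4
print(f"APS-C:      f/{aps_c_focal_mm / aperture_diameter_mm:.1f}") # f/4.0
```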

What have we learned from this mental exercise? Very simply, to replicate depth of field in a larger format camera, we must use a higher F number -- or a smaller aperture, relative to focal length. In fact this difference between full frame and APS-C is about 1 1/3 stops. Looking at this another way, we might ask what happens if we use the same aperture number on both cameras. Let's set both lenses to f/4. Obviously the DoF remains the same on the APS-C camera, but we have opened up the full frame camera's aperture to 16 mm, rather than the 10 mm we used before. This will result in 1.6 times larger blur patterns in all out of focus objects.

All of this indicates that full frame cameras are much better than APS-C cameras for achieving shallow depth of field. But what if we want a lot of depth of field instead? Are these cameras a worse choice? Would we be better served with micro-sensor cameras that show dust specks as mysterious orbs? That's the subject of the next section.

DIFFRACTION LIMITS AND DEPTH OF FIELD RANGE: This is a subject of enormous controversy, because not even all the photography gurus fully understand or agree about diffraction effects. However, they’re there, and they’re important. All photo gurus agree about that. I'd go so far as to say this is one of the most important defining differences between larger and smaller formats. I'm going to state, right up front, that larger-sensor cameras have more relaxed diffraction limitations; consequently smaller apertures may be used. In fact for every doubling of format size, two additional stops may be used on the small end of the aperture range. For reasons you'll have to read (below) to understand, this means that larger format cameras are capable of shallower depth of field than smaller format cameras, while the broad depth of field capabilities (on the small aperture end) are equivalent across formats. You can either trust me on this or read the gory details below (which will also make your head hurt). However, unlike the Depth of Field discussion, this diffraction section has fewer practical implications in daily photography.

If you're still reading, then give yourself a gold star for being my favorite variety of nerd! With that formality out of the way, let’s first discuss what diffraction is and how it limits what a photographer can do. Simply put, diffraction is the tendency of a wave to bend around an obstacle. Those more astute readers might remember that I described light as a “packet” which might also be considered a “particle.” Well, it is. However, it’s also a wave. I don’t know whether it’s conceptually accurate, but I think of photons as spinning particles flying through space that generate sinusoidal electromagnetic fields about them. Think of putting a bar magnet in a baseball and throwing it with a spin – something like that. Anyway, light has wave-like properties, and it diffracts around barriers. The most noted barrier in the camera, aside from the shutter, is the lens diaphragm. Light that almost grazes an aperture blade will bend just a tiny bit towards the shadow behind the blade. This results in a tiny bit of softness in the image.

Let’s delve into this a bit further: The severity of diffraction is a function of the proximity of the ray path to the margin of the aperture -- the closer the proximity, the greater the diffraction. Obviously all ray paths through a large aperture are much farther from the aperture margin than the ray paths through a small aperture. Therefore a large aperture will yield less diffraction than a smaller one.

Diffraction also depends on the distance of the aperture from the focal plane. If you think about a single "off-course" photon en route from the aperture to the focal plane, the error in target location is directly proportional to the distance (or time) the photon has been traveling along the erroneous ray path. One might think, "Ah ha! Longer lenses, with their apertures located farther from the focal plane, will exhibit more diffraction!" (If this occurred to you, give yourself a gold star.) However, apertures in photography are specified as a fraction of focal length, so that at f/4, for instance, a 100 mm lens has an aperture diameter of 100/4, or 25 mm. This fraction (or aperture number) is used to normalize between lenses and express how much light will reach the focal plane. A longer lens requires a larger aperture to pass as much light to the focal plane. In fact a 400 mm lens requires an aperture of a 100 mm diameter to pass as much light to the focal plane as our 100 mm lens with the 25 mm diameter aperture. Both lenses would be at f/4.

Getting our discussion back on track, diffraction would occur over approx. 4 times the distance from the 400 mm lens than from the 100 mm lens; however, the aperture in our example is also 4 times the size, and all the ray paths are 4 times the distance from the margins of the apertures, on average. The end result is that these factors cancel out. Longer lenses have a lesser rate of diffraction over a longer distance than shorter lenses at the same aperture number. The result is the same amount of total diffraction. That is why diffraction is most easily described as a function of aperture number, so that the focal length of the lens can be disregarded. Viewed this way, all lenses of a given aperture number, and used with respect to the same format, will yield the same amount of diffraction.
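
For those who want a formula, the standard Airy disk estimate of diffraction blur depends only on the f-number -- which is exactly the point just made. A sketch, assuming typical green light:

```python
# Airy disk diameter (to the first minimum): d = 2.44 * wavelength * f_number.
# Focal length drops out entirely, as argued above.
wavelength_mm = 0.00055              # ~550 nm, middle of the visible band
for f_number in (4, 8, 16):
    d_um = 2.44 * wavelength_mm * f_number * 1000
    print(f"f/{f_number:2d}: Airy disk ~ {d_um:4.1f} microns")
# f/ 4 ~  5.4 um, f/ 8 ~ 10.7 um, f/16 ~ 21.5 um
```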

For the geeks out there: OK, OK, the diffraction blur pattern from the 400 mm lens in the above example is really 4 times larger, true. However, the margins of the blur pattern are likewise dimmer (inverse square) and therefore less significant. In truth the diffraction blur is a fuzzy bell curve sort of thing. When I talk about the size of the pattern, I'm talking about the apparent size, where the bulk of the light energy falls, rather than the total size (which includes the very faint "skirt" areas).

Now what happens when we compare two different formats? Let's consider a lens of any focal length and aperture projecting an image on either a FF or Canon APS-C focal plane. The physical size of the diffraction blur pattern will obviously be the same on either focal plane. However, this is a bigger issue for the APS-C format, because the blur pattern is exactly 1.6 times larger in relation to the total image size, as compared to the FF image. To correct for this APS-C disadvantage, a 1.6 times larger aperture must be used -- approx. 1.36 stops.

Diffraction is not a significant problem at larger apertures, but it becomes increasingly problematic at smaller apertures. The non-geeks among you can just consider the small-aperture blur patterns as being larger than the large-aperture blur patterns. The techie geeks among you can think about it more as a matter of relative light concentration, where a lesser proportion of the light is approximately on-target, and a greater proportion of the light is scattered. Either way, an image will be a bit fuzzier when a small aperture is used. The system is said to be diffraction limited at the point on the aperture scale where diffraction starts robbing the image of its sharpness.

In practical, useful, everyday terms, I’d say diffraction limitation is reached when diffraction becomes objectionable. I would argue, for instance, that diffraction limits matter less to someone making 4x6 prints than someone making 20x30 prints. (Geek note: Similar arguments are invoked when discussing depth of field.) One can use a much smaller aperture for making a 4x6 print and not notice ill effects from diffraction (even though they’re technically there). Diffraction limits depend on so many factors that it is hard to make blanket statements that apply to all equipment, uses, and situations. They are a very individual thing. Anyway, although it is important to bear these things in mind, they do not help us to compare formats. Let me just speak for myself, for my purposes, for the prints I make and the equipment I use. If I compare what I consider my own personal diffraction limits across formats, you can extrapolate to your own. At least you’ll have a useful comparison of formats.

The best way to understand the impact of diffraction limits is to compare cameras of very different formats. Along these lines, let me compare my Canon EOS 5D (12.8MP full frame DSLR, 24 x 36mm sensor) with my little Canon Powershot G11 (10MP compact digital, 5.7 x 7.6mm sensor). The 5D’s sensor is approx. 4 times the height of the G11’s, so this is a substantial size difference. I’m not really interested in making 4x6 prints of the images from either of these cameras, so I’ll start worrying about diffraction limits whenever I see that diffraction is starting to rob me of critical sharpness. Where are these limits on the two cameras? Diffraction starts becoming a real issue on the 5D at about f/16 for me.

The diffraction limit is very different for my G11. The maximum sharpness for that camera seems to be achieved at f/4.0. There’s slight degradation from diffraction at f/5.6 and marked degradation at f/8. In fact the only aperture where diffraction limitation does not occur is f/2.8, both because diffraction effects take place over a smaller area than the sensor can resolve, and because other factors degrade the lens’ sharpness at f/2.8. Thus I would say empirically that the diffraction limit is reached between f/4 and f/5.6, probably closer to f/4.

Note that the ratio of relative aperture sizes (f/4 to f/16) is very similar to the ratio of relative sensor dimensions (approx 4). This relationship is generally applicable across formats. To apply it to a less extreme difference, the ratio between the diffraction limited aperture of an APS-C camera and a full frame camera of similar megapixel resolution would be approx 1.6, which is very close to the 1.4 (square root of 2) aperture ratio that would define 1 stop. Indeed, diffraction limits start kicking in on my APS-C EOS 40D (10 MP) at about f/11.
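
A sketch of that scaling rule: dividing my empirical full frame limit by each format's size ratio predicts the limits I observe. (The 4.2 figure is the 5D-to-G11 sensor height ratio from above.)

```python
# The diffraction-limited aperture scales inversely with format size,
# because the same physical blur is magnified more on a smaller frame.
ff_limit = 16.0                      # my empirical full frame limit (f/16)
for name, size_ratio in (("APS-C (40D)", 1.6), ("G11", 4.2)):
    print(f"{name}: ~f/{ff_limit / size_ratio:.1f}")
# APS-C (40D): ~f/10.0  (observed: ~f/11)
# G11:         ~f/3.8   (observed: ~f/4)
```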

What are the real-world consequences of these diffraction limits? Among other things, they place real limits on the depth of field one can achieve. But are there differences between formats in maximum depth of field before diffraction limits are reached? Not really. Depth of field for any given field of view is determined by the apparent physical diameter of the aperture. For a G11 sized camera to yield approximately the same depth of field as a 5D sized camera at the same field of view (same “equivalent focal length”), the aperture would have to be the same size, and therefore the aperture ratio would have to be 4 times as large (because the focal length would be about ¼ the value). See how that works?

Now consider the diffraction limited aperture values of the G11 and 5D. They are in fact in a 1:4 ratio. In other words, the G11 and 5D, at their diffraction limited apertures, yield the same depth of field at a given field of view (or equivalent focal length). I won’t go into the mathematics of why this is the case but will merely state that this relationship is hardly coincidence. Now, drawing all of this together, it’s plain to see that my G11 and my 5D both achieve approximately the same depth of field before diffraction limits kick in.

So what happens on the other end of the scale? Quite frankly, my G11, as wonderful a little compact camera as it is, cannot achieve a very shallow depth of field. Its shallowest depth of field at f/2.8 is equivalent to f/11 on my 5D, for any given equivalent focal length. I simply have to forgo shallow depth of field shots with that camera. I shoot at f/4 whenever that is feasible, and if I need greater depth of field, I can shoot at f/5.6. There are, practically speaking, no usable apertures outside of this very narrow range. In contrast, I have about 4 stops shallower depth of field at my fingertips when shooting with my 5D if I want it.

With this in mind, it is plain to see that a larger format camera gives a person far greater latitude in depth of field and aperture range. The depth of field advantage is on the shallow end, though, not on the deep end (where all cameras are equal). Not surprisingly, the advantage of a full frame camera over an APS-C camera is about 1 1/3 stops in this regard.

Got all that? OK, now take a well deserved Coke break. You deserve it.

AUTOMATIC LENS UPGRADE: Perhaps the least understood benefit of a full frame camera over an APS-C camera is the instant apparent upgrade in all of one’s optics. For instance, if one has a collection of f/4 optics in a full frame outfit, it is the equivalent of a collection of f/2.8 optics in an APS-C outfit. How does this work?

Remember that a full frame camera with an f/4 lens achieves the same depth of field as an APS-C camera with an f/2.8 lens at the same equivalent focal length. There is a key difference in such a comparison, however: the full frame camera is introducing less light to the sensor. But remember that the full frame sensor is larger and exhibits less noise. In fact the ISO of the full frame camera can be bumped up by one stop to compensate for the 1 stop difference in light through the lens, and when this adjustment is made, both cameras will produce images of roughly equivalent noise. Thus a full frame camera at ISO 200 with an f/4 lens acts the same with regard to noise and depth of field as an APS-C camera at ISO 100 with an f/2.8 lens at the same equivalent focal length.
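
Here's a sketch of the full equivalence. Note that I round the exact 1 1/3 stop difference to a whole stop above for convenience; the function below (purely hypothetical, not any standard API) uses the exact scaling:

```python
import math

def ff_equivalent(focal_mm: float, f_number: float, iso: float, crop: float = 1.6):
    """Scale an APS-C focal length, f-number, and ISO to the full frame
    combination with the same field of view, the same depth of field,
    and (roughly) the same noise."""
    stops = 2 * math.log2(crop)              # ~1.36 stops for a 1.6 crop
    return focal_mm * crop, f_number * crop, iso * 2 ** stops

focal, n, iso = ff_equivalent(50, 2.8, 100)
print(f"{focal:.0f} mm, f/{n:.1f}, ISO {iso:.0f}")   # 80 mm, f/4.5, ISO 256
```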

Why is this important? It can form the basis of economy of weight and cost in one’s camera outfit. Smaller aperture lenses are generally somewhat sharper than larger aperture lenses, they are substantially less expensive, and they are also smaller and lighter. A full frame camera with a bag full of f/4 lenses can actually be lighter and cheaper than an APS-C camera with a bag full of f/2.8 lenses. That’s food for thought!

Where does this advantage disappear? Remember that APS-C cameras can use EF-S lenses, which are smaller, lighter, less expensive, and better suited for that format (with regard to image quality). The downside to EF-S lenses is that there are not as many of them, and few of them are of a truly professional grade. If a broad selection of lenses is of paramount importance to a photographer, a full frame outfit may be the better fit. If there are plenty of EF-S lenses to suit one’s needs, an APS-C outfit with EF-S lenses might be the better fit. However, this presumes that differences in depth of field (on the shallow end) are not important.

WIDE ANGLE VS. TELEPHOTO FACTOR: Probably the biggest difference one will notice between FF and APS-C is the behavior of one's full frame lenses. When a FF lens is mounted on an APS-C camera, only a portion of the image circle is used, and the angle of view is therefore not as wide. A FF camera opens up all of the wide-angle potential of a FF lens. There is also a bit of difference in lens availability on the wide end for FF vs. APS-C. Although manufacturers have expanded their lens lines to include some extraordinarily wide angle coverage in both formats (e.g. Sigma's FF 12-24 and their largely equivalent APS-C 8-16), there are still far more lenses that will render a wide angle of view on FF cameras than on APS-C cameras. This will logically continue to be the case until such time as the APS-C wide angle offerings exceed the FF offerings by a considerable margin. If you are considering jumping into the APS-C format, just be careful that the more limited wide angle offerings suit your needs.

On the other hand, when you’re shooting birds, you’re going to have a hard time filling the frame of a full frame camera with your subject without dropping a lot of money on some very expensive optics. By comparison, some rather common optics on an APS-C camera can deliver some pretty impressive reach. Some will argue that you could simply crop a full frame image, and they are correct. However, that’s a rather egregious waste of pixels in my book. I’d rather have a bit more pixel density where I want it and not toss pixels into the trash.

(ACOUSTIC) NOISE: This one’s pretty simple, but most people overlook it: Full frame cameras are much larger than APS-C cameras, and bigger devices make bigger noises. In fact since the moving parts would probably be roughly 1.6^3 times (approx 4 times) more massive and would have to move at 1.6 times the velocity, total energy dissipated would be approximately 1.6^5 or 10.5 times as great (1/2 mv^2). That would mean approximately 10 dB more noise when the shutter trips. Although I haven’t actually pointed SPL meters at cameras, this number would seem about right. When would this matter? Noise could be a problem during ceremonies or when photographing children, pets, or wildlife.
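
The back-of-the-envelope arithmetic, for anyone who wants to check me:

```python
import math

# Kinetic energy E = (1/2) m v^2, with mass scaling as crop^3 and
# velocity as crop^1, gives an energy ratio of crop^5. Treating that
# as a power ratio, decibels are 10 * log10(ratio).
crop = 1.6
energy_ratio = crop ** 5                     # ~10.5x
db = 10 * math.log10(energy_ratio)
print(f"{energy_ratio:.1f}x the energy ~ {db:.1f} dB louder")   # 10.5x ~ 10.2 dB
```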

VIBRATION: With the more massive moving parts and the noise also comes vibration. It takes a firmer grip or a sturdier tripod to stabilize a full frame camera and to neutralize mirror slap. APS-C cameras are definitely smoother.

DUST BUNNIES: Dust is the bane of digital photography. Fortunately for full frame photographers, the dust bunnies that do land on the sensor are 1.6 times smaller, relative to sensor size, than the dust bunnies that settle on APS-C sensors. FF sensors are also easier to clean (at least for me).

ADAPTING ODD LENSES: Because the mirror of a full frame camera is so large, it can actually crash into the backside of a non-native lens adapted to the camera. If you’re like me and like to adapt occasional manual focus lenses, you’ll have to be very careful about mirror clearance. Conversely, APS-C cameras have smaller mirrors, and mirror crashes are seldom, if ever, a problem.

MAXIMUM FRAME RATE: APS-C cameras tend to have faster maximum frame rates than FF cameras. This is probably not due so much to limitations imposed by the format. There have been some extraordinarily impressive frame rates achieved by 35mm SLR film cameras, which could burn up film even as fast as a motion picture camera. It's true that a larger mirror and larger shutter take longer to move, but even so, frame rate in digital cameras is much more information-processing-limited than mechanically limited. So does it take longer to process the information from a 12 MP full frame sensor than from a 12 MP APS-C sensor? Absolutely not. The difference in frame rate probably has more to do with shooting styles than anything. Sports photographers often like the "telephoto advantage" of the smaller APS-C format (or other crop formats, such as the APS-H), and they need for their cameras to have high maximum frame rates. As a result, there are a lot of APS-C cameras with very impressive frame rates. My 40D, for instance, cranks along at about 6 frames per second.

WEIGHT: Because of its larger and heavier mechanicals, and sometimes because of a heavier chassis, a full frame camera will be heavier and often larger than an APS-C camera. Sometimes these differences are small. For instance, my 40D and my 5D are approximately the same size and form factor. However, my 5D does wear on me a bit more when I'm carrying it around with me long enough. The 1-series digitals, by comparison, are much larger and much heavier. Some people call them "bricks." Judge for yourself whether this is a good thing or a bad thing. Many photographers feel that a heavier body is essential for achieving a comfortable balance between a long lens and the body. Fewer photographers (mostly women) prefer the comfort of a smaller and lighter body (which is why I prefer the form factor of my 5D over that of a "brick"). Larger hands handle larger cameras with more comfort.

BUILD: Perhaps because full frame cameras are more expensive than APS-C cameras, they are more often used by professionals than by amateurs. While some prosumer cameras like the 5D and 5D Mark II seem to fill the needs of both advanced amateur and pro alike, flagship cameras like the 1Ds Mark III are well beyond the reach of any but the rare amateur/enthusiast and are even hard for most pro photographers to justify. Because full frame cameras are fodder mostly for pro usage, they tend to be of better build, generally speaking. Certainly a 5D and a 50D are of similar build quality, but there is no APS-C 1-series professional body, and there is likewise no plastic-bodied full frame camera. Not surprisingly, full frame cameras, in general, have the expert-friendly user interfaces that pros and advanced amateurs prefer, and APS-C cameras are more user friendly, on the whole, for casual photographers.

SO WHAT’S THE BOTTOM LINE? Which is better? Which do I recommend? I recommend that the well equipped photographer have both formats. Why not? Having two formats of camera body greatly augments the utility of the lens collection and offers the best of both worlds. I shoot in both formats myself. I carry an APS-C when I’m trying to travel small and light and when I want to leave my more expensive equipment at home (e.g. in questionable weather). I pull out the full frame when weight doesn’t matter, but high light sensitivity, versatility, and/or shallow depth of field do matter. As a professional, I need at least two cameras anyway, so that if one fails, I have the other to use as a backup. My FF and APS-C cameras are as different in their capabilities as I could make them, so that they give me the greatest possible versatility.

What if you can only afford one camera? Well, choose wisely according to your needs. An APS-C gives you cheaper entry but might actually cost you more in the long run if you want to have lots of fast lenses to go with it. If fast lenses don’t matter to you, you can save a lot of money and produce images of similar quality if you collect EF-S lenses with an APS-C camera.

If you can stomach the high cost of a full frame camera, you’ll make the most of your EF lens collection and may actually save money in optics in the long run (provided you won’t be compelled to upgrade to each new full frame model that hits the market). You’ll have shallower depth of field capabilities, more working room in lens aperture, more sensitivity, less sensor noise, and wider fields of view.

It’s a good thing if your head hurts a bit right now. It means you’ve at least thought about the subject and haven’t simply swallowed the dogma that others are handing you. Please don’t feel humbled by any of this material. It’s all quite complicated when you dig through it all, especially when you delve into the mathematics behind it. It’s frankly something the gurus argue among themselves, so I don’t think many people fully understand the issues. I’m pretty sure I have a good understanding, and my technical background helps me with that. However, I freely admit that I, too, have to pause to scratch my head about some of the issues I raise. Anyway, it is my hope that you have learned something and will weigh the right factors when selecting the best format for you.

ACKNOWLEDGMENTS: At various times other technically oriented photographers have offered commentary on and input into this article. I won't mention them by name, because there is enough controversy in the multitudes of issues I raise that they might not agree 100% with everything I assert. However, I want them all to know I have valued their input and appreciate their taking the time to offer it. Thanks!

