The optical rangefinder remains a highly accurate – some would maintain the most accurate – means of focusing. Single-lens reflex focusing introduces several variables, not least human error and, nowadays, autofocus error.

Visual focusing accuracy is known to fall off steadily: it is tiring when exercised continuously, as a press photographer working all day may do. On the other hand, the ability to superimpose two images of a subject area, or to align a horizontal split, does not deteriorate in the same way.

Furthermore, depending on the eyesight of the user, there can be a tendency to ‘see through’ the small focusing screen on a miniature camera, so that focus goes behind the desired plane. This has been a recurring problem for short-sighted users of twin-lens reflex cameras.

The miniature screen on a 35mm film SLR – as on its full-frame, APS-C and smaller-sensor successors – is clearly inadequate if unaided.

Much improvement has been made through techniques such as laser etching of the focusing screen, which gives the light-diffusing surface finer and more evenly distributed components.

The microprism, a centre circle populated with tiny prisms, has been very popular. Only when the image is in focus does it clarify across the microprism area. But the difficulty is that the angle of the tiny prisms needs to be optimised for the focal length of the lens in use. Outside the medium focal length range, say 35-105mm, they tend to blacken over. That is why top system SLRs offer a number of interchangeable focusing screens. Some are dedicated to use with very wideangle and telephoto lenses. But swapping screens in the field is not a recommended exercise.

To give something of the CRF’s advantage of independence from eyesight errors, screens have been developed with an annulus, or ring, of microprisms, with crossed wedges in the centre as an additional aid. This gives a split image to align, but is even more focal-length dependent.

Ultrasonics and infrared

Before dealing with the system that truly paved the way ahead for the SLR – including the DSLR – two others should first be mentioned: the use of ultrasonics and infrared.

Some Polaroid Land camera models use an ultrasonic pulse whose return time determines the subject’s distance and so the focus position of the lens. An infrared subject-scanning system is used in several compact cameras.
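The arithmetic behind the ultrasonic approach is straightforward: halve the round-trip time of the pulse and multiply by the speed of sound. The few lines below are purely illustrative – nothing to do with Polaroid’s actual firmware – and assume sound travelling at about 343 metres per second in air:

```python
# Illustrative time-of-flight calculation; not Polaroid's actual firmware.
SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20 degrees C

def subject_distance(echo_time_s: float) -> float:
    """Distance in metres from the round-trip time of an ultrasonic pulse."""
    return SPEED_OF_SOUND * echo_time_s / 2  # the pulse travels out and back

# An echo returning after 17.5 milliseconds puts the subject about 3m away.
print(f"{subject_distance(0.0175):.2f} m")  # -> 3.00 m
```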

Deep red and IR light are also used in some SLRs as an aid to autofocus in poor light. The camera projects a random pattern with sufficient contrast for the AF optical system to pick up and set focus.
 

Active and passive

The simplest method of machine detection of focus is that used by Canon in its 1963 photokina demonstration of a prototype camera. It depends on the fact that output from a CCD peaks when an in-focus image is beamed onto it.

Later, the principle was developed further into the form now used, with variations, by many compact viewfinder cameras.
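The principle can be sketched in a few lines. Everything here is illustrative: sensor_output is a made-up stand-in for reading the CCD at a given focus position, simulated so that its output peaks when the lens is set to 2.7m.

```python
# Sketch of peak-output focusing: step the lens through its focus range and
# keep the position at which the sensor reading is highest.
def sensor_output(focus_position_m: float) -> float:
    # Stand-in for a real CCD reading; simulated to peak at a 2.7m setting.
    return max(0.0, 1.0 - abs(focus_position_m - 2.7))

def find_focus(positions: list[float]) -> float:
    """Return the focus position giving the highest sensor output."""
    return max(positions, key=sensor_output)

steps = [i * 0.1 for i in range(1, 101)]   # 0.1m to 10m in 0.1m steps
print(round(find_focus(steps), 1))         # -> 2.7, the simulated peak
```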

Using three sensors works most efficiently, though the system can work with only two. The central sensor is sited at the exact distance equivalent of the camera image plane. One of the others sits beside it, a little in front of the image plane. The third – on the other side – is similarly behind the image plane. The output from the three is analysed by the camera’s computer.

There is a difference between the fingerprint of blur from focus falling in front of the subject and that from focus falling behind it. With this data the camera’s computer can move the lens focus until the peak output comes from the central sensor. When only two sensors are used, one is placed just ahead of the optimum focus, the other just behind.
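In outline – using invented readings rather than any manufacturer’s algorithm – the three-way comparison looks something like this:

```python
# Invented illustration of the three-sensor comparison. The centre sensor
# sits at the image-plane equivalent; the others sit a little in front of
# and behind it. The camera's computer drives the lens until the peak
# output comes from the centre sensor.
def focus_state(front: float, centre: float, rear: float) -> str:
    if centre >= front and centre >= rear:
        return "in focus - peak output on the centre sensor"
    if front > rear:
        return "image forming in front of the image plane - keep driving"
    return "image forming behind the image plane - drive the other way"

print(focus_state(front=0.42, centre=0.81, rear=0.40))  # -> in focus ...
print(focus_state(front=0.70, centre=0.55, rear=0.30))  # -> image forming in front ...
```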

Some less expensive cameras do not have continuous distance setting. Instead, there are a number of focus zones between closest and infinity – anything from 8-30. The autofocus then brings the lens focus to the appropriate zone, relying on depth of field to give satisfactory sharpness.
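By way of illustration only – the zone distances below are invented – snapping a measured distance to the nearest of a handful of preset zones is a one-line job:

```python
# Invented example of zone focusing: snap a measured subject distance to the
# nearest of a small set of preset zones and rely on depth of field.
ZONES_M = [0.8, 1.2, 1.8, 2.5, 4.0, 7.0, 15.0, float("inf")]  # eight zones

def nearest_zone(distance_m: float) -> float:
    return min(ZONES_M, key=lambda zone: abs(zone - distance_m))

print(nearest_zone(3.1))  # -> 2.5: close enough once depth of field is allowed for
```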

Focus rangefinding

Image: Focus rangefinding. The two sensor arrays replace the human eye. The moving mirror, linked to the focusing movement of the lens, reflects the image via a mirror and lenslet onto one sensor. When that image exactly matches the image delivered by the fixed mirror to the other sensor, focus is achieved and the focusing movement of the lens ceases.

Contrast

‘Contrast’ is the focus point criterion, whether you use the human eye or let advanced technology do it for you.
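One simple figure of merit – shown purely as an illustration, not as any camera’s actual measure – sums the squared differences between neighbouring pixel values: the sharper the edge, the larger the sum.

```python
# Illustrative contrast measure: the sum of squared differences between
# neighbouring pixel values along a row. A crisp edge scores far higher
# than the same edge rendered out of focus.
def contrast(row: list[int]) -> int:
    return sum((a - b) ** 2 for a, b in zip(row, row[1:]))

sharp_edge   = [10, 10, 10, 200, 200, 200]   # crisp transition
blurred_edge = [10, 40, 90, 140, 180, 200]   # the same edge, defocused
print(contrast(sharp_edge))    # -> 36100
print(contrast(blurred_edge))  # -> 7900
```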

The only methods that do not depend on contrast are the ultrasonic and IR pulse systems. They enable a focused flash shot to be taken in total darkness without warning – although not through glass if using ultrasonics. Systems that involve the emission of a pulse towards the subject are termed ‘active’ systems.

Those that use light reflected from the subject are termed ‘passive’. Standard AF in an SLR is passive; deep red and IR AF-assist systems are ‘active’.

The method of subject area or motif focusing universally used in modern SLRs is known as phase detection, or matching. In essence, it is related to visual split-image coupled-rangefinding.

The rays coming from the exit pupil of the lens are split into two beams. Small lenses focus these on two photosensors containing numerous – at least 128 – separate detection sites. The sensors are located in the base of the camera and receive light via a secondary mirror behind the reflex mirror; a semi-silvered section of the reflex mirror allows this to happen. Just as in crossed-wedge visual focusing, the base separation of the rangefinder ‘windows’ is the diameter of the lens exit pupil – effectively at its maximum f/stop. That is why most AF systems need an aperture of f/4 minimum to work.

Only when the image of the motif to be focused occupies the same central position on both sensors – when the two are ‘in phase’ – will it be sharply focused in the camera image plane.

Once that phase matching is achieved, the motor driving the lens focus movement stops.
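The matching itself amounts to sliding one sensor’s signal past the other and finding the shift at which the two agree best. The sketch below is a simplification with made-up numbers, not any manufacturer’s implementation; a shift of zero would mean the two images are already in phase.

```python
# Simplified phase matching: slide one sensor strip past the other and find
# the shift at which the two signals agree best (smallest mismatch).
def best_shift(strip_a: list[int], strip_b: list[int], max_shift: int = 4) -> int:
    def mismatch(shift: int) -> int:
        pairs = [(a, strip_b[i + shift])
                 for i, a in enumerate(strip_a)
                 if 0 <= i + shift < len(strip_b)]
        return sum((a - b) ** 2 for a, b in pairs)
    return min(range(-max_shift, max_shift + 1), key=mismatch)

a = [5, 5, 9, 14, 9, 5, 5, 5, 5, 5]
b = [5, 5, 5, 5, 9, 14, 9, 5, 5, 5]   # the same detail, displaced by two sites
print(best_shift(a, b))  # -> 2: out of phase, so the focus motor keeps driving
```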

There are several variants of the basic method. A smaller or greater number of AF zones can be provided by increasing the AF sensor’s area and allowing particular groups of photosites to be user-selected.

A number of zones is necessary for predictive or follow focus AF to allow the speed and direction of a moving subject to be assessed. In order to detect focus on motifs at any angle from horizontal to vertical, additional sensors are provided, often in an ‘H’ pattern.

Nowadays, the data transmission from detectors to the focus drive motor is digital and the subject distance information may be passed on to assist the exposure setting system. This application was pioneered by Nikon.

Phase detection

Film and digital SLRs use the phase-detection system of automatic focusing.

A proportion of the rays coming through the lens pass through a semi-silvered section of the reflex mirror and are deflected down into the base of the camera. There, two lenses, sited just behind the geometrical equivalent of the image plane, form separate images on a two-section sensor.

If the current focus of the lens lies beyond the required subject plane, the two images will be displaced outside the centre lines on their sensor sections. If focus is currently nearer than the required plane, both images will fall inside the centre lines.

The camera computer rapidly calculates the focusing movement needed to bring both images to their sensor centre lines and activates the AF motor to drive the lens to this focus.

All this happens in a fraction of a second.
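The front-focus/back-focus decision described above can be expressed very compactly. The centre-line positions and readings below are invented for illustration; real bodies hold calibrated reference values for every AF point.

```python
# Invented illustration of the front/back decision. Positions are in sensor
# sites; the centre lines are fixed reference positions on the two sections.
CENTRE_LEFT, CENTRE_RIGHT = 40, 88   # hypothetical reference centre lines

def phase_state(left_image: int, right_image: int) -> str:
    separation = right_image - left_image
    reference = CENTRE_RIGHT - CENTRE_LEFT
    if separation == reference:
        return "in focus - images on their centre lines"
    if separation > reference:
        return "focus beyond the subject - images outside the centre lines"
    return "focus nearer than the subject - images inside the centre lines"

print(phase_state(40, 88))  # -> in focus ...
print(phase_state(35, 93))  # -> focus beyond the subject ...
print(phase_state(44, 84))  # -> focus nearer than the subject ...
```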

Anti-hunting

In practice, to prevent overshoot and hunting, two measurements are made.

The first predicts from the degree of perceived unsharpness how far the focus movement will have to be driven. The motor starts and a second check is made at the predicted focus point. If all is well, the focus locks and – if in single shot mode with shutter linked – the release button can be fully depressed to make the exposure.
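In outline, the predict-then-verify sequence might be sketched as below. The helper functions and the gain figure are hypothetical; real firmware works from the phase-detection signal and calibrated lens data.

```python
# Hypothetical sketch of 'predict, drive, verify': estimate the travel from
# the first measurement, drive there, re-measure and apply one small trim.
def predicted_travel(defocus_signal: float) -> float:
    return defocus_signal * 0.9          # invented, deliberately conservative gain

def drive(lens_position: float, travel: float) -> float:
    return lens_position + travel        # stands in for running the AF motor

def autofocus(lens_position: float, measure) -> float:
    first = measure(lens_position)                    # initial defocus estimate
    lens_position = drive(lens_position, predicted_travel(first))
    second = measure(lens_position)                   # verification check
    if abs(second) > 0.01:                            # not quite there yet
        lens_position = drive(lens_position, second)  # one small correction
    return lens_position                              # focus locks here

# Toy 'measure' reporting signed distance from true focus at the 5.0 mark.
print(round(autofocus(2.0, lambda position: 5.0 - position), 3))  # -> 5.0
```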

The speed of autofocus is assisted by designing lenses to have low mass focusing sections with a short travel over the distance span. In practice, this means having ‘internal’ focusing: the movement of an optical group inside the lens.

Conclusion

Automatic focusing systems are not infallible, as I have found when testing lenses for AP.

Like the eye, they depend on the subject having sufficient contrast for a sharpness point to be determined within the lens’s focus travel. Insufficient contrast may occur in bright light, not only in poor light.

The failure point also depends on the lens design: not only on maximum aperture, but also on how rapidly the out-of-focus planes lose shape. The smooth loss of shape that bokeh enthusiasts like can make AF lock on just either side of the optimum. The defocus is not visible in the finder, but is apparent when the image is later enlarged.

Another source of error occurs when the camera’s finder frame is displaced relative to the photographic one. Focusing on a small area at a given zone in the finder can then result in another plane – usually behind it – being sharp. In lens testing I use a large cross to focus on, either by AF or manually, before making the exposures on the target.

Despite the limitations of ‘machine’ focusing, viewed overall and in experienced hands it is likely to produce a greater number of hits than the human eye. It does not tire and is independent of eyesight defects and variations.

But that does not mean we should stop focusing visually where it is more appropriate, as in close-up work. I am very enthusiastic about Live View and the ability to see the precise frame, focus point and depth of field that will be recorded. Coupled with a digital magnifier facility, it brings miniature cameras – full-frame and smaller – much nearer to the kind of control that users of medium and large-format cameras have always enjoyed.
