Single-Slit Diffraction

Suppose that the opaque screen contains a single slit of finite width. In fact, let the slit in question be of width $\delta$, and extend from $x=-\delta/2$ to $x=\delta/2$. The associated aperture function is

\begin{displaymath}F(x)=\left\{
\begin{array}{ccc}
1/\delta&\mbox{\hspace{0.5cm}}&\vert x\vert\leq \delta/2\\[0.5ex]
0 &&\vert x\vert> \delta/2
\end{array}\right..\end{displaymath} (10.54)

It follows from Equation (10.52) that

$\displaystyle \bar{F}(\theta) =\frac{1}{\delta} \int_{-\delta/2}^{\delta/2} \cos\left[\frac{2\pi\,(\sin\theta-\sin\theta_0)\,x}{\lambda}\right]dx= {\rm sinc}\left[\pi\,\frac{\delta}{\lambda}\,(\sin\theta-\sin\theta_0)\right],$ (10.55)

where ${\rm sinc}(x)\equiv \sin(x)/x$. Finally, assuming, for the sake of simplicity, that $\theta,\,\theta_0\ll 1$, which is most likely to be the case when $\delta\gg \lambda$, the diffraction pattern appearing on the projection screen is specified by

$\displaystyle {\cal I}(\theta) \propto {\rm sinc}^2\left[\pi\,\frac{\delta}{\lambda}\,(\theta-\theta_0)\right].$ (10.56)

According to L'Hôpital's rule, ${\rm sinc}(0)=\lim_{x\rightarrow 0}\,\sin x/x = \lim_{x\rightarrow 0}\, \cos x/1=1$. Furthermore, it is easily demonstrated that the zeros of the ${\rm sinc}(x)$ function occur at $x=j\,\pi$, where $j$ is a non-zero integer.
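As an illustrative check (not part of the original derivation), the following short Python sketch evaluates Equation (10.56) for $\delta/\lambda = 20$ and $\theta_0=0$; these parameter values are assumptions chosen to mirror Figure 10.10.

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# Illustrative parameters: delta/lambda = 20 matches Figure 10.10;
# normal incidence (theta_0 = 0) is assumed for simplicity.
delta_over_lambda = 20.0
theta_0 = 0.0

theta = np.linspace(-0.3, 0.3, 2000)   # angle on projection screen (radians)
x = np.pi * delta_over_lambda * (theta - theta_0)

# sinc(x) = sin(x)/x, with sinc(0) = 1.  Note that np.sinc(u) returns
# sin(pi u)/(pi u), so we pass x/pi to obtain the convention used here.
intensity = np.sinc(x / np.pi)**2      # Equation (10.56), up to a constant

plt.plot(theta, intensity)
plt.xlabel(r'$\theta$ (radians)')
plt.ylabel(r'$\mathcal{I}(\theta)$ (arbitrary units)')
plt.show()
\end{verbatim}

The zeros of the computed pattern fall at $\theta-\theta_0 = j\,\lambda/\delta$, in accordance with the zeros of the ${\rm sinc}$ function quoted above.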

Figure 10.10: Single-slit far-field diffraction pattern calculated for $\delta /\lambda = 20$.
\includegraphics[width=0.9\textwidth]{Chapter10/fig10_10.eps}

Figure 10.10 shows a typical single-slit diffraction pattern calculated for a case in which the slit width greatly exceeds the wavelength of the light. The pattern consists of a dominant central maximum, flanked by subsidiary maxima of much smaller amplitude. The situation is shown schematically in Figure 10.11. When the incident plane wave, whose direction of propagation subtends an angle $\theta_0$ with the $z$-axis, passes through the slit, it is effectively transformed into a divergent beam of light (the beam corresponds to the central peak in Figure 10.10) that is centered on $\theta=\theta_0$. The angle of divergence of the beam, which is obtained from the first zero of the single-slit diffraction function (10.56), is

$\displaystyle \delta\theta = \frac{\lambda}{\delta};$ (10.57)

that is, the beam effectively extends from $\theta_0-\delta\theta$ to $\theta_0+\delta\theta$. Thus, if the slit width, $\delta$, is very much greater than the wavelength, $\lambda$, of the light then the beam divergence is negligible, and the beam is governed by the laws of geometric optics (according to which there is no beam divergence). On the other hand, if the slit width is comparable with the wavelength of the light then the beam divergence is significant, and the behavior of the beam is, consequently, very different from that predicted by the laws of geometric optics.
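The scale of this divergence is easily checked numerically. In the following sketch, the particular values of the wavelength and the slit width are illustrative assumptions, chosen so that $\delta/\lambda = 20$, as in Figure 10.10.

\begin{verbatim}
import math

# Beam divergence from Equation (10.57): delta_theta = lambda / delta.
# Illustrative values (assumed): 500 nm light and a 10 micron slit,
# so that delta/lambda = 20, as in Figure 10.10.
wavelength = 500.0e-9          # meters
slit_width = 10.0e-6           # meters

delta_theta = wavelength / slit_width
print(f"delta_theta = {delta_theta:.3f} rad "
      f"= {math.degrees(delta_theta):.2f} deg")
# Output: delta_theta = 0.050 rad = 2.86 deg
\end{verbatim}

In other words, a slit only twenty wavelengths wide already spreads the beam over a few degrees, whereas a macroscopic slit (for which $\delta/\lambda$ is very large) produces a divergence too small to notice.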

Figure 10.11: Single-slit diffraction at oblique incidence.
\includegraphics[width=0.8\textwidth]{Chapter10/fig10_11.eps}

The diffraction of light is an important physical phenomenon because it sets a limit on the angular resolution of optical instruments. For instance, consider a telescope whose objective lens is of diameter $D$. When a plane wave from a distant light source of negligible angular extent (e.g., a star) enters the lens, it is diffracted, and forms a divergent beam of angular width $\lambda/D$. Thus, instead of being a point, the resulting image of the star is a disk of finite angular width $\lambda/D$. (See Section 10.9.) Suppose that two stars are an angular distance ${\mit\Delta}\theta$ apart in the sky. As we have just seen, when viewed through the telescope, each star appears as a disk of angular extent $\delta\theta=\lambda/D$. If ${\mit\Delta}\theta>\delta\theta$ then the two stars appear as separate disks. On the other hand, if ${\mit\Delta}\theta < \delta\theta$ then the two disks merge to form a single disk, and it becomes impossible to tell that there are, in fact, two stars. It follows that the minimum angular resolution of a telescope whose objective lens is of diameter $D$ is

$\displaystyle \delta\theta \simeq \frac{\lambda}{D}.$ (10.58)

This result is called the Rayleigh criterion. (See Section 10.9 for a more accurate version of this criterion.) It can be seen that the angular resolution of the telescope increases (i.e., $\delta\theta$ decreases) as the diameter of its objective lens increases.
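As a hypothetical numerical illustration of the Rayleigh criterion (the wavelength and lens diameter below are assumed values, not taken from the text), consider green light passing through a $10\,{\rm cm}$ objective lens:

\begin{verbatim}
import math

# Rayleigh criterion, Equation (10.58): delta_theta ~ lambda / D.
# Illustrative values (assumed): 550 nm light, 10 cm diameter objective.
wavelength = 550.0e-9          # meters
D = 0.10                       # meters

delta_theta = wavelength / D
arcsec = math.degrees(delta_theta) * 3600.0
print(f"delta_theta = {delta_theta:.2e} rad ~ {arcsec:.1f} arcsec")
# Output: delta_theta = 5.50e-06 rad ~ 1.1 arcsec
\end{verbatim}

Thus, for such a lens, two stars separated by less than about one second of arc cannot be resolved; doubling the diameter of the objective lens halves this limit.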