Quantum Field Theory (QFT)

Quantum field theory (QFT) is often defined as the result of a union between special relativity and quantum mechanics. As such, it is said to form the foundation of the standard model, i.e. the theoretical framework in which all known particles and interactions, with the exception of gravity, are described. QFT may also be described as the overarching framework of mathematical and conceptual ideas that represents the evolution of pre-war quantum mechanics (QM), which branched into three distinct developments:

  • Quantum Electro-Dynamics (QED)
  • Quantum Chromo-Dynamics (QCD)
  • Electro-Weak Theory (EWT)

However, we shall start by simply introducing some of the general ramifications of quantum mechanics in the post-war era. As the name might suggest, quantum mechanics started out as an extension of classical mechanics in the sense that it still embodied the idea of ‘particles’ within the atomic structure. However, some of the ambiguities of the wave-particle debate would quickly spill over into the physics of fields, i.e. electromagnetism and gravitation. In this context, the generalised concept of a quantum field theory was thought to be better equipped to describe quantum systems with many degrees of freedom, while also accommodating the spacetime invariance defined by relativity.

Note: Based on earlier discussions, it has been shown that the Compton wavelength of a massive particle is always smaller than its deBroglie wavelength. However, in general terms, we might define the deBroglie wavelength as the distance scale at which the wave-like nature of particles starts to become apparent, while the Compton wavelength is the distance scale at which the concept of any single point-like particle breaks down completely. While this sub-atomic scale might be quantified in the range of 10⁻¹⁵ metres, it is highlighted that this is still 20 orders of magnitude greater than the Planck scale.
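
As a rough numerical sketch of these two scales (assuming, purely for illustration, an electron moving at 1% of light speed with non-relativistic momentum [p = mv]), the following Python fragment compares the two wavelengths; their ratio reduces to [v/c], which is why the Compton wavelength is the smaller of the two:

    # Illustrative sketch (assumed example values): Compton vs deBroglie
    # wavelength for an electron, using non-relativistic momentum p = m*v.
    h = 6.626e-34    # Planck constant [J s]
    c = 2.998e8      # speed of light [m/s]
    m_e = 9.109e-31  # electron rest mass [kg]

    v = 0.01 * c                      # example speed: 1% of light speed
    lambda_compton = h / (m_e * c)    # ~2.4e-12 m
    lambda_debroglie = h / (m_e * v)  # ~2.4e-10 m at this speed

    print(f"Compton wavelength  : {lambda_compton:.3e} m")
    print(f"deBroglie wavelength: {lambda_debroglie:.3e} m")
    print(f"ratio = {lambda_compton / lambda_debroglie:.3f}")  # equals v/c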

As such, QFT might be said to have started out as an open-ended approach, which it was hoped might provide a more productive framework for further developments, rather than necessarily being an all-encompassing theory in its own right.

So how did QFT develop?

Based primarily on mathematical logic, QFT was often able to move well beyond the limits of empirical verification, which left its mathematical hypotheses open to as much philosophical interpretation as scientific. As such, it might be said that the scope of QFT proceeded on the basis of:

  • Mathematical hypothesis
  • Limited scientific verification
  • Philosophical interpretation

So, within this somewhat abstracted framework, QFT was free to provide conceptual analysis of problems that seemed to have no solution within the original framework of quantum mechanics. For example, as a mathematical hypothesis, QFT was not constrained to explain the quantum realm in terms of any physical reality, i.e. as a particle or a wave, as long as the logical consistency of its equations could be verified against existing experimental data. However, while the outcome of these equations was subject to some level of verification, i.e. the predicted end state, the actual reality of the process itself still seemed open to interpretation, both scientific and philosophical.

What was the goal of QFT within this framework?

The initial goal was possibly to provide a description that underpinned the particle model but, as the bullets above suggest, much of QFT’s development was initially based on what appears to be little more than a mathematical premise; while the outcome was still expected to align to experimental data, it did not necessarily demand that any physical reality be associated with the description of the process itself. For example, in its earliest forms, quantum mechanics could not really describe a photon in terms of a relativistic ‘particle’, since photons were assumed to have no rest mass and to propagate at constant velocity [c] in a vacuum. As such, photons were often vaguely described as a by-product of an electron’s energy transition within an atom, which was then outlined in terms of an early QFT/QED approach in Dirac's paper, published in 1927, entitled ‘The quantum theory of the emission and absorption of radiation’. It is this paper that coined the name ‘Quantum Electro-Dynamics (QED)’ as part of ‘Quantum Field Theory (QFT)’ and in which Dirac outlined the description of a photon in terms of the quantization of an electromagnetic field. In so doing, Dirac helped establish the foundations of both QFT and QED based on the following assumptions (a simple numerical sketch of the photon-emission picture follows the list below):

  • The quantization of the electromagnetic field.
  • The relativistic nature of the electron.
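
As promised, a simple numerical sketch of the photon-emission picture, using the standard Bohr energy levels [En = -13.6 eV/n²] rather than anything specific to Dirac's paper; the fragment below computes the energy and wavelength of the photon released when a hydrogen electron drops from [n=2] to [n=1]:

    # Illustrative sketch using the Bohr-model energy levels of hydrogen,
    # E_n = -13.6 eV / n^2 (textbook physics, not a result of Dirac's paper).
    h = 6.626e-34   # Planck constant [J s]
    c = 2.998e8     # speed of light [m/s]
    eV = 1.602e-19  # one electron-volt in joules

    def bohr_energy(n):
        # Bohr energy level of hydrogen in joules
        return -13.6 * eV / n**2

    delta_E = bohr_energy(2) - bohr_energy(1)  # energy carried off by the photon
    wavelength = h * c / delta_E               # from E = h*f and c = f*lambda

    print(f"photon energy    : {delta_E / eV:.2f} eV")      # ~10.2 eV
    print(f"photon wavelength: {wavelength * 1e9:.1f} nm")  # ~121.6 nm (Lyman-alpha)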

However, while these early beginnings appeared to be pointing to a way forward, many problems subsequently emerged. Possibly the most fundamental of these problems was associated with the apparent infinite self-energy of the electron in connection to its own electromagnetic field. Worryingly, these infinite values appeared to suggest that the cause of the infinity lay in the very premise of QFT and not in any specific calculation. Of course, this did not stop many people from trying to side-step such problems using what might be described as a form of mathematical ‘trickery’, i.e. renormalisation.

Were there other problems associated with the theory and its results?

In the immediate post-war period, there was considerable interest in trying to explain the discrepancies between the empirical and predicted results linked to Dirac’s relativistic equation, when applied to the hydrogen atom in terms of the magnetic moment of its electron. In 1947, Hans Bethe was the first to explain the ‘Lamb shift’ in the hydrogen spectrum and, in so doing, would help consolidate the ideas emerging under the heading of quantum electrodynamics (QED). What Bethe highlighted was that the deviation between the [2s] and [2p] levels of hydrogen, as determined by the Dirac equation, was due to a quantum electro-dynamic effect, which could be computed through a process that was to become known as ‘mass renormalization’. The parameters, mass [m0] and charge [e0], which appeared in the original formulation of QED, did not align to the ‘measured’ mass and measured charge of an electron. The ‘measured’ mass [m] of the electron is defined in terms of its momentum [p] within the relativistic energy equation [E = √(p²c² + m²c⁴)]. Similarly, the ‘measured’ charge should be defined by the force between two electrons, at rest, separated by a distance [r], as described by Coulomb’s law [F = ke²/r²], where [e] is the ‘measured’ charge of an electron. It was later shown by Julian Schwinger and by Richard Feynman that the divergences encountered in the low orders of perturbation theory could be eliminated by re-expressing the parameters [m0] and [e0] in terms of the ‘measured’ values [m] and [e] through the procedure of ‘renormalization’.
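
To put some illustrative numbers against these two ‘measured’ definitions, the following sketch evaluates both formulas; the momentum [p] and separation [r] below are simply assumed example values, not anything drawn from the text:

    # Numerical sketch of the two 'measured' definitions quoted above:
    # the relativistic energy relation and Coulomb's law (example values only).
    import math

    c = 2.998e8    # speed of light [m/s]
    k = 8.988e9    # Coulomb constant [N m^2 / C^2]
    m = 9.109e-31  # measured electron mass [kg]
    e = 1.602e-19  # measured electron charge [C]

    p = 1.0e-22    # assumed example momentum [kg m/s]
    E = math.sqrt(p**2 * c**2 + m**2 * c**4)  # E = sqrt(p^2c^2 + m^2c^4)

    r = 1.0e-10    # assumed example separation [m]
    F = k * e**2 / r**2                       # Coulomb force between two electrons

    print(f"relativistic energy E : {E:.3e} J")
    print(f"Coulomb force F       : {F:.3e} N")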

Perturbation theory can be described as an approximation method for a more complicated quantum system. The idea is to start with a simple system, for which a mathematical solution is known, to which is added an additional ‘perturbing’ Hamiltonian that represents a weak disturbance to the system. If this disturbance is not too large, the physical quantities associated with the perturbed system can be extrapolated based on the idea of continuity, i.e. as small 'corrections' to the simpler system. These corrections, being 'small' compared to the size of the quantities themselves, can be calculated using approximate methods, such as an asymptotic series. As such, more complicated systems can be studied based on knowledge of the simpler one.
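
A minimal sketch of this idea, using an arbitrary two-level toy system rather than anything drawn from QED: the exact ground-state energy of [H = H0 + λV] is compared against the first-order estimate, i.e. the known energy of the simple system plus a small correction.

    # Toy two-level model of perturbation theory (illustrative values only).
    # Exact ground-state energy of H = H0 + lam*V vs the first-order
    # estimate E ~ E0 + lam * <0|V|0>.
    import numpy as np

    H0 = np.diag([0.0, 2.0])     # simple system with a known solution
    V = np.array([[0.3, 0.5],
                  [0.5, -0.3]])  # 'perturbing' Hamiltonian
    lam = 0.1                    # weak disturbance

    exact = np.linalg.eigvalsh(H0 + lam * V)[0]  # exact ground-state energy
    first_order = H0[0, 0] + lam * V[0, 0]       # small correction to E0 = 0

    print(f"exact       : {exact:.6f}")       # ~0.028710
    print(f"first order : {first_order:.6f}") # 0.030000, close for small lam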

However, Feynman's subsequent formulation of QED is often presented in terms of Feynman diagrams, which might appear to contradict the position of QFT in that these diagrams seem to depict the paths of ‘particles’. However, Feynman's mathematics is based on calculating probability amplitudes within the path integral formulation of a field theory. In this context, the diagrams only provide a method of visualizing the various terms of a perturbation series in the form of a flow of electrons and photons. As such, Feynman’s use of the word ‘particle’ does not align to any classical concept or, for that matter, even imply any physical existence. While we will return to some of the details of Feynman diagrams in later discussions, each element on these diagrams can be introduced as simply being symbolic of some underlying mathematical expression, which in turn defines the probability amplitude of each path within a quantum system. Therefore, while QED still resorts to the semantics of ‘particles’, QFT continues to exclude the classical idea of physical particles within its description of sub-atomic interactions.
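
The arithmetic of probability amplitudes can be made slightly more concrete with a generic two-path toy model (again, an illustrative sketch, not an actual QED calculation): each path contributes a complex amplitude, and the probability comes from the squared magnitude of their sum, not from adding probabilities.

    # Toy illustration of probability amplitudes: each path contributes a
    # complex amplitude exp(i*phase); probabilities come from the squared
    # magnitude of the *summed* amplitudes. Phases are arbitrary examples.
    import cmath

    def amplitude(phase):
        # unit-magnitude complex amplitude for a single path
        return cmath.exp(1j * phase)

    a1 = amplitude(0.0)  # path 1
    a2 = amplitude(1.0)  # path 2, with a different phase (radians)

    p_interfering = abs(a1 + a2) ** 2          # quantum: add amplitudes first
    p_classical = abs(a1) ** 2 + abs(a2) ** 2  # classical: add probabilities

    print(f"amplitudes summed   : {p_interfering:.3f}")  # shows interference
    print(f"probabilities summed: {p_classical:.3f}")    # always 2.0 here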

Is there a physical explanation for renormalisation?

In terms of QED, part of the justification for renormalization was based on the idea of symmetric properties. For example, the idea of Lorentz and gauge invariance made it possible to formulate, and physically justify, a finite result, which had not been possible in earlier theories. While some initially questioned the validity of this approach, its results were justified by experimental data, e.g.

  • It correctly accounted for the anomalous magnetic moments of the electron and the muon, as well as the Lamb shift.

  • It provided the necessary corrections to the scattering of photons by electrons, along with descriptions of pair production and the concept of bremsstrahlung or braking radiation.
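
As a concrete example of the level of agreement involved, the leading-order QED correction to the electron's magnetic moment, i.e. the Schwinger term [α/2π], already lands close to the measured value; the sketch below simply quotes both standard numbers for comparison:

    # Leading-order QED correction to the electron's magnetic moment,
    # the Schwinger term a = alpha / (2*pi). Higher-order diagrams refine
    # this further; the measured value is quoted here for comparison only.
    import math

    alpha = 1 / 137.035999  # fine-structure constant (approximate)
    a_leading = alpha / (2 * math.pi)

    a_measured = 0.00115965218  # measured anomalous moment (approximate)

    print(f"leading-order QED : {a_leading:.8f}")  # ~0.00116141
    print(f"measured          : {a_measured:.8f}")
    print(f"relative difference: {abs(a_leading - a_measured) / a_measured:.2%}")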

So, by the early 1950s, local quantum field theory was considered to be the most appropriate framework for the unification of quantum theory and special relativity. As such, several local QFT theories were put forward as a more fundamental description of the ‘elementary particles’ that also explained their internal symmetries. Therefore, photons, pions, electrons, muons, and neutrinos were all beginning to be described in terms of localized excitations of an underlying field. However, it soon started to become clear that meson theories were inadequate when trying to account for the properties of all the new hadrons being discovered. In addition, an influx of new experimental discoveries appeared to be dashing any hope of an obvious transition from QED to a formulation that described all the dynamics of the strong interaction. As such, by the end of the 1950s, QFT was beginning to face a crisis of confidence, because of its inability to describe these strong interactions and the growing problems of trying to invoke any sort of realistic model needed to explain the dynamics of hadrons. Therefore, efforts to develop a theory for the strong interactions, based on the QED model, were essentially abandoned, although a local gauge theory, advanced by Chen-Ning Yang and Robert Mills in 1954, would prove to be influential at a later date. So, at this point, quantum theory is left with the problem of not only trying to reconcile the semantics of the original particle model, but also how QFT might explain the apparent existence of four fundamental forces or interactions:

  • Gravitation
  • Electromagnetism
  • Strong nuclear force
  • Weak-decay interactions

As such, there was a need for a theory that would help explain the strong, electro-weak and gravitational interactions. Initially, even the unification of the electromagnetic force with the weak force proved to be an obstacle due to the lack of particle accelerators with energies high enough to reveal the processes at work at the necessary energy levels within the atomic structure. As a result, it took time for any understanding of the hadron sub-structure to appear based on the theoretical development of the quark model.

The quark model was first proposed, independently, by physicists Murray Gell-Mann and George Zweig, in 1964, as part of an ordering scheme for hadrons. However, there was virtually no evidence of their physical existence until the appropriate scattering experiments were carried out in 1968. Subsequently, six types of quark have been proposed based on accelerator experiments, with the last, the ‘top’ quark, only being discovered in 1995.

However, even today, progress toward any sort of ‘unification’ based on a quantum field theory that incorporates the electromagnetic, weak and strong forces might, at best, be described as slow; while any idea of a ‘grand unification’ inclusive of the gravitational force remains both elusive and speculative.

But why are so many complicated ideas required to describe such elementary particles?

In part, this question is raised simply to provide a transition into the following sub-pages of this discussion, which try to provide an initial outline of some of the concepts that were developed within the QFT/QED framework of particle physics.

Note: From today's perspective, we might frame the outline of QFT that follows in terms of a question: if QFT is a comprehensive description of the physics in the quantum realm, why are so many physicists still seeking alternative theories?