Renormalisation

We might introduce the general concept of renormalisation as a mathematical method in which an infinite theoretical limit is replaced by some finite measurable value. Based on the previous description of ‘bare’ parameters, we might realise why this approach might be of value within Feynman’s QED model. Of course, as the previous discussion also tried to point out, it is not just quantum theories that have struggled to reconcile infinities. For example, if we step back to the classical description of an electromagnetic field, we also run into a problem with infinity:

[1]      $F = \dfrac{q_1 q_2}{4 \pi \varepsilon_0 r^2}; \qquad E = \dfrac{F}{q_2} = \dfrac{q_1}{4 \pi \varepsilon_0 r^2}$

In [1], we see the classical expression for the force between two charges [q1,q2], which can also be interpreted as the electric field strength [E] surrounding charge [q1], as measured by a unit test charge [q2]. While the radius [r] may be increased without limit, [E] in this case simply falls towards zero, such that there is no infinity associated with any measurable value. However, this is clearly not the case as the radius [r] falls to zero, as [E] then increases towards infinity. In terms of any physical interpretation, we might realise that there is a problem with the theory, as expressed in [1], as the radius approaches zero, even though measured values of [E] appear to conform to [1] for all other values, i.e. [r>0]. However, we might also approach this problem from a practical perspective, e.g. physical particles have to occupy some physical volume, which would then put some lower limit on the radius in [1] on the basis that point charges of zero radius cannot exist physically.
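As a purely numerical illustration of [1], the following Python sketch of our own evaluates the field strength of a point charge at ever smaller radii, using standard SI constants; the choice of sample radii is arbitrary. It simply shows how [E] vanishes at large [r] but grows without bound as [r] approaches zero:

```python
# Numerical illustration of equation [1]: E = q1 / (4*pi*eps0*r^2).
# As r -> infinity, E -> 0 (no measurable infinity); as r -> 0, E diverges.

import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
Q_E  = 1.602176634e-19    # elementary charge, C

def field_strength(q: float, r: float) -> float:
    """Classical field strength [1] of a point charge q at radius r."""
    return q / (4.0 * math.pi * EPS0 * r**2)

for r in (1e-3, 1e-9, 1e-12, 1e-15, 1e-18):
    print(f"r = {r:.0e} m  ->  E = {field_strength(Q_E, r):.3e} V/m")
# The output grows without bound as r -> 0, mirroring the problem discussed above.
```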

But why raise this issue at this point?

In some respects, Feynman’s QED model is still predicated on the idea of a point charge, inasmuch as when calculating the probability amplitude, all possible coupling points are taken into consideration, i.e. down to a conceptual resolution of zero distance. The apparent consequence of this possibly overly abstract model is that the energy associated with the electric field around the electron becomes infinitely large, which is not so different from [1]. However, in contrast to the practical limit placed on [1] above, QED resorts to what amounts to a mathematical procedure called ‘renormalization’ in order to remove the infinities from its equations, so that the theory aligns with experiments. While it appears that this approach is now accepted, it is possibly true to say that many physicists are still uncomfortable with some of the assumptions underpinning one of the most fundamental theories in modern physics. This discomfort also extends to some of the founders of quantum mechanics, e.g.

"Renormalization is just a stop-gap procedure. There must be some fundamental change in our ideas, probably a change just as fundamental as the passage from Bohr's orbit theory to quantum mechanics. When you get a number turning out to be infinite which ought to be finite, you should admit that there is something wrong with your equations, and not hope that you can get a good theory just by doctoring up that number." Paul Dirac: 1933

Of course, some might rightly point to the fact that Dirac’s quote originates from the earlier pre-war years of quantum mechanics, rather than the post-war years of quantum field theory. However, this objection cannot be so easily levelled against the following quote:

"The shell game that we play ... is technically called 'renormalization'. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It's surprising that the theory still hasn't been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate." Richard Feynman: 1965

However, in fairness, it should also be pointed out that 5 Nobel prizes in physics have been won for work heavily dependent on the idea of renormalization, i.e. in 1965, 1979, 1982, 1999 and 2004. The first was awarded to Feynman himself for his contribution to QED, despite the concerns raised above in the same year.

OK, but what about some details?

While the in-depth mathematical details are beyond the scope of this discussion, some attempt might be made to describe the quantum processes involved. The infinities in QED originate because the electric field [E] linked to identically prepared systems is subject to random quantum fluctuations. As a result, the quantum electric field does not reflect the smooth undulations assumed in classical theory, but is rather more like a ‘sea of fluctuations’ that becomes infinite in size at vanishingly small length scales. These infinite fluctuations are at the root of the problem of quantum field theory, because the derivatives of the field [E] diverge as the separation scale approaches zero. Therefore, in perturbation theory, the fluctuations at the quantum scale result in a divergence of the sum over the pathways included in the probability amplitude calculations, i.e. intermediate paths or states can be associated with arbitrarily large momentum and energy. In order to give any meaning to the probability amplitude calculations, it was first necessary to ‘regulate’ the scale of the fluctuations by imposing a sort of upper limit on the energies involved, such that the infinities could be ignored, if not necessarily explained. Of course, the introduction of this upper limit might appear quite arbitrary, especially given the apparent importance of quantum fluctuations to the whole premise of quantum theory. However, while the upper limit is effectively removed at the end of the calculation, this last step is still the source of much debate within the renormalization procedure as a whole, and many still question its validity.
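To make the idea of ‘regulating’ a divergence a little more concrete, the toy Python sketch below is our own illustration, not QED itself: it sums a logarithmically divergent series up to a cut-off, showing that while each regulated sum grows without bound as the cut-off is raised, the difference between two physical scales stays finite, which is the essence of trading a divergent ‘bare’ quantity for a finite, scale-dependent one:

```python
# Toy model of cut-off regularization: the sum over modes of 1/k diverges
# logarithmically, loosely analogous to loop integrals in QED.

import math

def regulated_sum(cutoff: int) -> float:
    """Sum of 1/k for modes k = 1 .. cutoff; grows like ln(cutoff) without bound."""
    return sum(1.0 / k for k in range(1, cutoff + 1))

for cutoff in (10**3, 10**5, 10**7):
    print(f"cutoff = {cutoff:>8}: regulated sum = {regulated_sum(cutoff):.4f}")

# The 'renormalized' combination: the difference between two scales is finite
# and cut-off independent, even though each regulated sum is not.
diff = regulated_sum(10**7) - regulated_sum(10**5)
print(f"difference = {diff:.4f}  (compare ln(10**7 / 10**5) = {math.log(100):.4f})")
```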

Is it possible to be a little more physically descriptive of the issues?

So far, it has been suggested that the sums over paths diverge as the energy-momentum values approach infinity or, equivalently, as the distances involved approach zero. Some of the most serious effects leading to these infinities are referred to as ‘ultraviolet divergences’, where the implied frequency [E=hf] links these effects to high energy. However, there are also ‘infrared divergences’, which we might simply link to increased wavelength at this stage.

OK, but what are ultraviolet divergences?

These divergences are linked to the value of the electric charge possessed by the electron, which we might initially describe as a point charge situated at some point in space, surrounded by an effect known as ‘vacuum polarization’, to be described below. Now assume that at some point close to this charged particle, a ‘virtual’ pair of particles is created, e.g. an electron and a positron, which the laws of conservation require to annihilate within a very short period of time, this time period being defined by Heisenberg’s energy-time uncertainty relationship:

[2]      $\Delta E \, \Delta t \geq \dfrac{\hbar}{2}$
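As a worked example of [2], using standard constants of our own choosing: a virtual electron-positron pair must ‘borrow’ an energy of at least twice the electron rest energy, which limits its lifetime to the order of 10^-22 seconds:

```python
# Worked example of [2]: lifetime bound for a virtual electron-positron pair.
# The pair 'borrows' an energy of at least 2*m_e*c^2, so by
# delta_E * delta_t >= hbar/2 it must annihilate within delta_t <= hbar/(2*delta_E).

HBAR = 1.054571817e-34    # reduced Planck constant, J*s
M_E  = 9.1093837015e-31   # electron mass, kg
C    = 2.99792458e8       # speed of light, m/s

delta_E = 2.0 * M_E * C**2         # minimum borrowed energy, ~1.022 MeV
delta_t = HBAR / (2.0 * delta_E)   # maximum lifetime of the virtual pair

print(f"delta_E = {delta_E:.3e} J  (~{delta_E / 1.602176634e-13:.3f} MeV)")
print(f"delta_t < {delta_t:.3e} s")   # of order 10**-22 seconds
```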

We might try to represent this process in diagram (a) below, where the ‘virtual’ photon also leads to the creation of the virtual electron-positron pair within the electric field of the electron lying along the path [E]:

[Diagram: (a) a virtual electron-positron pair created within the electric field [E] of the electron; (b) virtual pairs created outside the effective field [E].]

The inset (b) above represents the creation of other virtual particles that take place outside the effective electric field at [E], which are therefore considered to have no observable effect on [E]. However, in the context of inset (a), the electric field [E] does have an opposing effect on the virtual electron-positron pair due to their different charge polarities. As such, the virtual electron is repelled by [E], while the virtual positron is attracted, such that a small physical separation is created during their brief existence. In fact, based on the previous discussion of Feynman’s QED model, we might assume that this process of virtual particle creation and annihilation is taking place all the time, which leads to the model of ‘vacuum polarization’ that in turn reduces the measured value of the electron charge.

Note: As described, vacuum polarization might be put forward as the physical basis of charge renormalization, where the field surrounding the electron induces a small charge separation in virtual electron-positron pairs momentarily created out of the vacuum. This process then reduces the charge of the electron from its infinite ‘bare’ value to its observed or measured value; although we might still wish to question an infinite value of the bare charge in any form of physical reality.
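The screening described in this note can be made quantitative: at one loop, QED predicts an effective coupling that grows as the probing momentum [Q] increases, i.e. as we penetrate deeper into the polarization cloud. The sketch below uses the standard one-loop leading-log formula, but restricting it to the electron loop alone is our simplification, so the numbers are illustrative only:

```python
# One-loop running of the QED fine-structure constant (electron loop only):
#   alpha(Q) = alpha / (1 - (2*alpha / 3*pi) * ln(Q / m_e))   for Q >> m_e
# Vacuum polarization screens the bare charge, so the effective coupling
# *increases* at short distance / high momentum.

import math

ALPHA_0 = 1.0 / 137.035999   # fine-structure constant at the electron scale
M_E_MEV = 0.510998950        # electron mass in MeV

def alpha_eff(q_mev: float) -> float:
    """Effective coupling at momentum scale q_mev (MeV), one-loop leading log."""
    return ALPHA_0 / (1.0 - (2.0 * ALPHA_0 / (3.0 * math.pi)) * math.log(q_mev / M_E_MEV))

for q in (1.0, 1e3, 91.1876e3):   # 1 MeV, 1 GeV, the Z mass
    print(f"Q = {q:>10.1f} MeV:  1/alpha_eff = {1.0 / alpha_eff(q):.3f}")
# With all charged fermions included, 1/alpha at the Z mass is ~128, not ~137.
```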

So while the idea of vacuum polarisation seems plausible, we might still have a nagging doubt as to why the scale factor between the bare and measured value of the charge, and by association its mass, is effectively infinite. Of course, we might now see why the idea of an upper limit or ‘cut-off’ was introduced in the outline above. However, irrespective of any gut feeling regarding renormalisation, it now appears to be a required, and generally accepted, feature of QFT, and in particular QED, in order to obtain any meaningful finite answer for the probability amplitude of a given interaction. So given that this discussion cannot resolve this debate, we might simply table a question:

Does the need for renormalisation question the validity of any quantum field theory?

Based on the argument that the results obtained using renormalisation appear to be supported by experiments, its validity may be said to be justified, if not verifiable. As such, it is possibly only the lack of any obvious physical process that is being questioned. However, Feynman also raised another aspect of this debate, which we might also wish to consider:

“So the framework of amplitudes has no experimental doubt about it: you can have all the philosophical worries you want as to what the amplitudes mean, if indeed they mean anything, but because physics is an experimental science and the framework agrees with experiment, it’s good enough for us so far.”

To be honest, I have a profound problem with this position. Not in respect of the assertion that physics is an experimental science, as clearly any science needs to be verified by experiments, but rather with the inference that this is all science aspires to be. For it would seem that, for many, the goal of science should extend beyond just mathematical models, which, while capable of predicting the probability of a given quantum outcome, may not yet be able to explain whether, or why, they should be accepted as a true description of physical reality – whatever that means.