A note on the quantum mechanical measurement process.
Dedicated to Peter Mittelstaedt (1929-2014)
Summary
Traditionally, one main emphasis of quantum mechanical measurement theory is on the question of how the pure state of the compound system ‘measured system + measuring apparatus’ is transformed into the ‘mixture’ of all possible results of that measurement, weighted with their probabilities: the so-called “disappearance of the interference terms”. It is argued in this note that in reality there is no such transformation, so that there is no need to account for such a transformation theoretically.
Zusammenfassung
Gewöhnlich liegt ein Hauptgewicht der quantenmechanischen Meßtheorie auf der Frage des Über-
gangs vom 'reinen Fall' des Gesamtsystems 'gemessenes System + Meßgerät' in das 'Gemenge' aus
allen möglichen Meßergebnissen, gewichtet mit ihren Wahrscheinlichkeiten: Das sog. "Verschwin-
den der Interferenzterme". In dieser Notiz wird die Meinung vertreten, daß es einen solchen Über-
gang faktisch nicht gibt, so daß es nicht notwendig ist, eine theoretische Erklärung für den Über-
gang zu geben.
0 The measurement process:
connection of the mathematical theory with reality
Quantum mechanics contradicts many of the traditional prejudices of (then so-called) classical physics, i.e. mainly mechanics and electrodynamics. From this originates a discussion about its foundations that has not ceased since the discovery of the theory's mathematical structure in 1925/26. Much of that discussion centers on the description of measurement. That is quite plausible, because measurement is the point in physics where all those theoretical and mathematical constructs are tied to concrete reality. And that is what, in the end, should somehow be described by quantum mechanics (Mittelstaedt, 1998, p. 122: “The interface between object theory and metatheory is given by the measuring process”; cf. Lyre, 2010). The problems arise because measuring instruments are large
machines, in dimensions of our everyday life, whereas quantum mechanics is considered to apply
mainly to very small objects, of the size of atoms or even smaller elementary particles. But quan-
tum mechanics is supposed to be the universal theory of the change of any object whatsoever, thus
also of measuring instruments; and since measuring instruments are composed of atoms, it is quite
plausible, too, that the laws of their change could be derived from the laws of the change of atoms.
So the subject in question is a quantum mechanical theory of measurement interactions. (Bächtold,
2008)
The matter apparently is difficult. It is so difficult that some experts have given up hope for a solution: Eugene Wigner1 wrote as early as 1963 that a new theory, in addition to quantum mechanics, would be necessary in order to solve the problems of quantum mechanical measurement. Peter Mittelstaedt, who had dealt with those problems for decades, wrote in his book (Mittelstaedt, 1998) that the theory of quantum mechanical measurement suffers from serious inconsistencies, which even his own proposal of ‘unsharp measurements’ cannot solve. Apparently, he had given up hope for a solution.
In my opinion this shows that it is now necessary to step back and take quite a new view of the
problem. I think this new view has to reconsider the foundations of probability on the one hand and
to keep as close as possible to the facts that are characteristic for experiments in quantum mechanics, on the other. Since the formal properties are clear, the questions of interpretation cannot be solved by still more mathematical apparatus, but rather by stepping back for a while and taking a broader view.
1 Wigner, 1963
qmMeasurement, 23 March 2017, 11:14
In this note I will concentrate on the central point: the ‘disappearance of the interference terms’
or the ‘objectification of the result’. I am proposing a rather simple solution, namely that this proc-
ess is not necessary for the explanation of the quantum mechanical measurement.
1 Indeterminist theory
QM is an indeterminist theory: for some measurements there is more than one possible outcome. One of them will turn out to be the “real” result of the measurement. Since this is a very important aspect, I will expand on it a bit further.
All classical theories (where ‘classical’ means all theories that are not quantum, including the
Relativity Theories) are fundamentally determinist. This is true also of statistical thermodynamics,
which draws heavily upon statistics and probability. But even in that theory one could assume that
“really” the whole system (e.g. of molecules in a gas) is in a certain state, albeit unknown, which
determines all past and future states. There is a classical formulation by P. S. Laplace in his philosophical reflections on probability (Laplace, 1814, p. 2). There he states of a superhuman intelligence with complete command of mechanics that “nothing would be uncertain for it, and the future like the past would be present before its eyes.” Probability in those theories is considered to represent only the “ignorance” of the scientist.
This is different in quantum mechanics. For quantum mechanics, indeterminism is fundamental; the
theory itself states that in general it is impossible to make predictions with certainty. Albert Ein-
stein did not at all like this feature of quantum mechanics. He searched for a replacement theory
that would in some way restore its ‘classical’ character, leaving for quantum mechanics something
similar to thermodynamics, which would admit of an ignorance interpretation. But in 1964 J.S. Bell
showed that any theory that has in that sense ‘classical’ properties cannot reproduce quantum me-
chanical probabilities if it is ‘local’, i.e. restricted to speeds of interaction that are not higher than
the velocity of light. But ‘locality’ is a fundamental feature every theory has to have according to
Special Relativity. The difference between the predictions of quantum mechanics and those of any classical local theory is in certain situations so large that it can be measured at not too great an expense, and it was measured as early as 1976 (Clauser, 1976); and—as everybody expected—quantum mechanics was confirmed. Thus, as far as we keep within the speed limits of Special Relativity, it is confirmed that quantum mechanics is a genuinely indeterministic theory.
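Bell's result can be made concrete with a small numeric sketch in Python. It uses the CHSH form of the inequality and the standard quantum correlation E(a,b) = -cos(a-b) for two spin-1/2 particles in the singlet state; neither formula appears in this note, so take it as an illustration from standard quantum mechanics, not as part of the argument here.

```python
import math

# Quantum correlation for measurements on a spin-1/2 singlet pair
# along directions a and b (standard result): E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# CHSH combination: every local ("classical") theory satisfies |S| <= 2.
a1, a2 = 0.0, math.pi / 2              # first observer's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # second observer's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ≈ 2.83: quantum mechanics violates the local bound
```

Experiments (Clauser, 1976; Aspect et al., 1982) found violations of the local bound in agreement with the quantum mechanical prediction.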
2 Two types of change
Indeterminacy is the reason for a feature of quantum mechanics that seems rather strange at first
glance, namely that the transformations of the state of the system are described as two entirely dif-
ferent types2:
The first type (according to the numbering by John von Neumann, 1932, p. 417-418) is the
change caused by a measurement: Before the measurement the state of the system describes the
probabilities of all possible outcomes of the measurement. After the measurement there is only
the one result left that has become real, the other possibilities have disappeared. That is an in-
stant change of state. In order to keep up a general description of the process, the standard ac-
count of the measuring process does not describe that single outcome but goes over to a statis-
tical description of an ensemble of experiments: it describes a “statistical mixture” of all possi-
ble results, where every result is weighted with its probability. But we have to keep in mind
that the actual measurement outcome is exactly one of the results that were possible before.
The statistical mixture is only an abstract aggregation of all results that were possible. Already
John von Neumann (1932, p. 417-418) jumps very quickly from the single measurement to the
2 Wigner: „The assumption of two types of changes of the state vector is a strange dualism.“
(Wigner, 1963, p. 7)
statistical mixture. He writes, after describing the "causal" change of state according to the
Schrödinger equation (cf. here below): "On the other hand the state φ is transformed by a
measurement—measuring, in this example, a quantity with all simple eigenvalues and the eigenfunctions φ1, φ2 …—, an acausal change that can produce any one of the states φ1, φ2, … with probabilities |(φ,φ1)|², |(φ,φ2)|², … respectively. I.e. the mixture

U′ = Σ_{n=1..∞} |(φ,φn)|² P[φn]

is produced. Since here states are transformed into mixtures, this process is not causal."3 Thus, without further comment, just by "I.e.", von Neumann jumps directly from a single result of a measurement to the weighted mixture of all possible results. So, for John von Neumann the first
type of change is the transition from the state before the measurement to the mixture after the
measurement, and, as far as I can see, all later discussions of the process of measurement do
the same. My point in this note is that every measurement yields exactly one of the possible results - impossible to predict which one - and not a mixture. We shall come back to that question. - In view of a single event, the name ‘collapse of the wave function’ has come into use for that first type of state change.
The second type of change is the one described by the Schrödinger equation. It describes the
change of the state (which determines the probabilities of measurement outcomes) according to
the dynamics of the system. That change is determinist and reversible, just as in any classical
field theory. That second type of change applies for all times when there is no measurement.
That there are two such entirely different types of change is a new feature that quantum mechanics has brought into physics. There have been attempts at unifying the theory again by giving the first type of change a “physical explanation” in terms of the second type: there must be, so the argument went, some unusual interaction bringing about the ‘collapse of the wave function,’ which should be describable in the framework of the Schrödinger equation. But imagine that this attempt had been successful. Then there would be a nice determinist description of the reduction of the wave packet, and the whole theory would no longer be indeterminist. Or, to put it the other way round: in an indeterminist theory there must necessarily be two entirely different types of change of state.
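In formulas (standard quantum mechanics, written in the inner-product notation (φ, φn) used in the von Neumann quotation above), the two types of change read:

```latex
% Type 1 (measurement, "collapse"): indeterminist, irreversible.
% Exactly one of the possible results n is realized, with Born probability:
\varphi \;\longrightarrow\; \varphi_n
\qquad \text{with probability } p_n = |(\varphi, \varphi_n)|^2

% Type 2 (Schroedinger dynamics): determinist, reversible.
i\hbar \, \frac{\partial \varphi(t)}{\partial t} = H \varphi(t),
\qquad \varphi(t) = e^{-iHt/\hbar} \, \varphi(0)
```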
3 Probability
Quantum theory is an indeterminist theory. So in general, predictions with certainty are not possible. A weaker kind of prediction is still possible, namely prediction with probability. The probability value is a predicted relative frequency. Probability theory has some difficult points that show up in quantum mechanics as interpretation problems; but actually they are problems of probability. One of those problems is the question what a probability proposition is about: does it refer to a single event, or does it refer in some way to a collection [set, class, mass, aggregate …?] of events? - Since probability predicts a relative frequency, it is clear that it would not make much sense to attribute probability to a genuinely unique event. On the other hand, a probability proposition gives a predicted frequency that is valid for collections of any number of events. Gibbs, in his
treatise on statistical thermodynamics (Gibbs, 1902), coined the term “ensemble” for such a collec-
tion of collections. Thus a probability proposition applies to an ensemble of events. But how is such
an ensemble defined? Which events belong to an ensemble? - The events must be ‘equal’ as far as
the probability assignment is concerned. So, e.g., in throws of dice the conditions of throwing must
be the ‘same’ for all throws, among others providing symmetry of the six sides as far as the throw
is concerned. So actually, the ensemble is defined by certain properties of the events that can become elements of the ensemble. Thus there is also a good sense in stating that a probability proposition applies to a single event, namely insofar as that event has the properties that include it in the ensemble the probability is about. The ensemble is not a definite collection of events; it is rather the aggregate of all possible (finite) sets of events. By the way: there is no use in defining
3 translation MD
probability as a limiting relative frequency for infinite sets of events; for according to probability theory itself, a mathematical limit of the relative frequency is not guaranteed to exist: the law of large numbers asserts convergence only ‘with probability 1’, which is itself a probability statement.
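The point can be illustrated with a short Python simulation (a hypothetical coin-type experiment with probability p = 0.5; the numbers are illustrative only): the relative frequency hovers around the probability value but keeps fluctuating, and no finite run ever fixes a limit.

```python
import random

random.seed(1)  # fixed seed, so the run is reproducible

# Repeat a yes/no experiment with probability p of "yes" and record
# the relative frequency after increasing numbers of trials.
p = 0.5
hits = 0
for n in range(1, 10001):
    if random.random() < p:
        hits += 1
    if n in (10, 100, 1000, 10000):
        # the frequency fluctuates around p; no finite segment of the
        # sequence "reaches" the probability value
        print(n, hits / n)
```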
The so-called ensemble interpretation of quantum theory, on the other hand, is quite different: it was an interpretation that tried to assimilate quantum theory to thermodynamics, according to the pattern proposed by Einstein. Since J. S. Bell showed that theories with “hidden parameters”, as far as they are local, cannot reproduce all quantum mechanical probabilities, this type of interpretation has practically disappeared. But in view of Gibbs’ concept of ensemble discussed above, every probabilistic theory would have to be an ensemble theory in this Gibbs sense. It is largely in this sense that the “ensemble interpretation” is presented in the excellent article in Wikipedia (2014). This is different from the “real ensemble interpretation” by Lee Smolin, which is rather in the hidden-variables tradition.
4 The usual description of the measurement process4
The description of the measurement process is rather intricate because of the universality of quantum mechanics: since the measured object as well as the measuring apparatus is a physical object, the right physical theory for both is quantum mechanics. Therefore the measuring process must be described as an interaction between two quantum mechanical objects—understanding that the measuring apparatus in some sense has ‘classical’ properties, such that the measurement ends up with a definite result (the ‘pointer value’) that can be read off the measuring apparatus. That this again poses serious difficulties is not the subject of this note.
The process begins with two separate objects, the system under consideration (S) and the measuring apparatus (M). They come into an interaction that results in M showing a “pointer value”. If S was, before the measurement, in an eigenstate of the measured quantity (i.e. in a state compatible with the measurement), then the pointer value will confirm the corresponding eigenvalue. This is the so-called ‘calibration condition’. In all other cases the results will be distributed according to the probabilities of the possible outcomes, which can be derived quantum mechanically from the state of S before the measurement. As mentioned above, that process is usually described as a transformation into a ‘mixture’ with the corresponding probability weights. This transformation, according to that description, is divided conceptually into three stages:
1. The pre-measurement, i.e. the phase when the interaction of the two systems S and M is active.
Beginning and end of that phase naturally are approximative idealizations. At the end of the
pre-measurement the compound system S+M is in a state that cannot be described as separate
states of its components S and M.
2. The ‘objectification’, i.e. the transformation of the quantum mechanical pure state into the result of a measurement. Here the ‘cut’ between S and M plays an important role in the description by Süßmann, because it seems to bring about the transformation from the ‘pure state’ of the compound system S+M to the ‘mixed state’ that is supposed to describe the result of the measurement.
3. The ‘reading’ of the result.
I am not going to discuss all the details. All I am interested in here is the objectification, and
there specifically the character of the result.
6 “Cut”
As mentioned above, the objectification is usually described as a transformation of the pure state
of the compound system into the ‘mixture’ of the possible results weighted with their probabilities.
The reason that such a transformation seems necessary is a discrepancy met with in the description
of the measuring process when it is described from two different points of view:
1. The dynamics of the measurement interaction results in a pure state of the compound sys-
tem.
4 The first rather thorough description of this type was given in Süßmann, 1958
2. The description of the expected result of the measurement is a statistical mixture of the pos-
sible results, weighted with their probabilities.
There is no quantum mechanical process that could describe the transition from one to the other,
because the pure state contains certain ‘interference terms’ that will never truly vanish by any quan-
tum mechanical process.
In that situation it seemed very practical that those interference terms in the compound object
S+M concern only correlations between the two parts S and M; they do disappear if you limit your-
self to features the component systems S and M have separately. So the ‘cut’ between S and M was
introduced. It seemed quite plausible that in the end we need a description of the system under con-
sideration without further reference to the measuring apparatus.
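What the ‘cut’ does can be shown in a few lines of Python. The example is a schematic two-state system with a two-state ‘pointer’ (a textbook simplification, not an example from this note): the density matrix of the compound pure state contains interference terms, and tracing out the apparatus removes exactly those terms.

```python
import math

# Compound state after the pre-measurement:
#   psi = a |0>|A0> + b |1>|A1>
# with basis ordering |S M> = |00>, |01>, |10>, |11>.
a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)
psi = [a, 0.0, 0.0, b]

# Density matrix rho = |psi><psi| of the compound system S+M.
rho = [[psi[i] * psi[j] for j in range(4)] for i in range(4)]
print(rho[0][3])  # interference term between the two branches: a*b ≈ 0.5

# The "cut": reduced density matrix of S alone, tracing out the apparatus M:
#   rho_S[i][j] = sum_k rho[2*i + k][2*j + k]
rho_S = [[sum(rho[2 * i + k][2 * j + k] for k in range(2))
          for j in range(2)] for i in range(2)]
print(rho_S)  # diagonal matrix: the interference terms have disappeared
```

Note that the resulting rho_S is exactly the kind of mixture whose decomposition into possible results is not unique, so Mittelstaedt's objection, discussed next, is untouched by this arithmetic.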
The “cut” is an enticing trick to provide a transition from the pure-state description to the mixture after the measurement. But that trick does not work, as P. Mittelstaedt has convincingly shown. Mittelstaedt (1998) gives a thorough discussion of all attempts at a solution—as, e.g., decoherence or the Many-Worlds proposal or even his own proposal with unsharp observables—and concludes that there is absolutely no way to an exact objectification of the measured values. One of the reasons a mixture resulting from the cut does not serve as a solution is the fact that this mixture cannot be decomposed into possible results in a unique way; and that would be necessary in order to have an ‘ignorance interpretation’, i.e. the interpretation that ‘in fact’ there is a certain result, and it is only unknown which one5.
I did myself propose a solution6 that every physicist would agree to, namely that physics can exist only if we accept approximations as part of its foundations. So dropping those interference terms would actually not matter, because they are smaller than the approximations we have to introduce anyway. - That argument is true, but it is still too superficial. I think I can now present a better solution:
The efforts at making the interference terms disappear have brought about a lot of interesting mathematical insights. But to my mind they did not address the right question. I have already mentioned above that the physical process of a single measurement does not produce a mixture but only one of the possible results. There can be no mechanism that explains exactly this result, since indeterminism does not allow the existence of such a unique mechanism, as mentioned above as well. So all that can enter into the quantum mechanical description of the measuring process is the calibration condition: if the state of the system under consideration before the measurement is compatible with the measurement, then the measurement has to confirm that state.
Consider, as an example, the Stern-Gerlach apparatus: the beam of silver atoms passes through an inhomogeneous magnetic field and is thereby separated into two beams, one of them with spin up, the other one with spin down. The two beams then leave two black spots on a glass plate behind the magnet. For a single event: if the spin is 'up' already before the magnet, it will be 'up' after the magnet, too; the atom will end up on the glass plate where the spin-up atoms go; and similarly for spin-down. If the spin is neither 'up' nor 'down' before the magnet, the silver atom will go to either of the two places on the glass plate, the probability being determined by the initial state. If the experimentalist prepared a beam of atoms all in the same spin state, he might collect the results of all passages through the magnet in a statistical table and form a mixture of the whole sample; he will find, probably, that the frequencies of the results correspond approximately to the quantum mechanical probabilities. One might as well take the prediction of the results to construct a mixture: every possible result would be represented in the mixture with the weight of its probability. But this is actually a bookkeeping process, not a physical transformation. - By the way, at least in this case there is no question of separating ("cutting") the system from the apparatus. Quite the reverse: the silver atom itself functions here as the pointer of the apparatus!
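The bookkeeping character of the mixture can be sketched in Python (a hypothetical simulation; the Born probability cos²(θ/2) for 'spin-up' of a state tilted by angle θ is standard quantum mechanics, and the particular angle is arbitrary):

```python
import math
import random
from collections import Counter

random.seed(7)  # reproducible run

# Initial spin state tilted by theta from the 'up' direction;
# standard QM gives P(up) = cos^2(theta / 2).
theta = math.pi / 3
p_up = math.cos(theta / 2) ** 2   # = 0.75 for theta = 60 degrees

def single_passage():
    """One atom through the magnet: exactly one result, never a mixture."""
    return "up" if random.random() < p_up else "down"

# The 'mixture' is pure bookkeeping: a tally over many single results.
table = Counter(single_passage() for _ in range(10000))
print(p_up, table["up"] / 10000)  # relative frequency close to 0.75
```

Each call of single_passage yields one definite spot; only the tally, made afterwards by the experimenter, has the form of a weighted mixture.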
5 (Busch et al, 1991, p. 24): “This nonunique decomposability of mixed states in quantum mechanics constitutes a first indication that such states do not, in general, admit an ignorance interpretation.”
6 Drieschner, 2002, p. 106; Drieschner, 2004, Ch. 8.
Put in the more formal terms of the usual description, one would have to reverse the order of the last steps: the first step after the interaction is the ‘reading’, i.e. the appearance of only one of the results that were possible before. Then comes, as a second step, if the scientist decides so, the forming of the mixture: it is a formal collection of all possible results of the measurement into a statistical ensemble. This ensemble is the result of a bookkeeping process. It is an ensemble of ‘classical’ results; quantum mechanics must not be applied there, so there can be no interference terms.
8 The mixture is a concept of classical theory
Thus, the mixture of the possible results is a classical collection: If one likes, one can describe the
result of the measurement as a mixture of the different possible results, weighted with their actual
frequencies or their probabilities. But beware! This is not a physical process. Physically, every
measurement results in just one of the possible outcomes. The mixture as a result of the measure-
ment is composed explicitly by the scientist from the single results he found or from the prediction
of the frequencies of the possible results; it is an entirely classical affair. In that respect there is no
structural difference between this mixture and the one that describes e.g. the possible results of
throws of dice: For a good die the result can be described as a mixture of the six possible results
with equal weights—even though every single throw results in exactly one number of points.
9 Objection: the possibility of restoration
Already Wigner writes in his paper of 1963:
“[A]ttempts to modify the orthodox theory […] presuppose that the result of the measurement is not a state vector Σν α^(ν) [a^(ν) × σ^(ν)] (2), but a so-called mixture, namely one of the state vectors a^(ν) × σ^(ν) (3), and that this particular state vector will emerge from the interaction between object and apparatus with the probability |α^(ν)|². If this were so, the state of the system would not be changed when one ascertains—in some unspecified way—which of the state vectors (3) corresponds to the actual state of the system, one would merely ‘ascertain which of various possibilities has occurred.’ […] This is not true if the state vector, after interaction between object and apparatus, is given by (2) because the state represented by the vector (2) has properties which neither of the states (3) has.” (Wigner, 1963, p. 1)
Then Wigner proceeds to declare that the real difficulty of the theory of measurement is the de-
scription or explanation of the transformation from state (2) to state (3).
What Wigner presents as an alternative to the ‘orthodox’ theory sounds much like our proposal
here. But he jumps immediately—like John von Neumann and Peter Mittelstaedt—to the mixture,
where ‘which of various possibilities has occurred’ has to be ascertained later. Starting from there
he finds, correctly, that the mixture is physically different from the pure state: the pure state can be
restored after it is split into several beams according to the possible results of the measurement;
with the mixture, one cannot do that. In the years since 1963 experiments have been performed
which show that such restoration is practically possible.
But what does actually happen in an experiment? - Let us take Wigner's example of the Stern-Gerlach experiment we mentioned already above: the result of the interaction of the measuring instrument with the measured particle is a pure state, spatially divided into two branches, each one connected with one of the possible outcomes of the measurement (‘spin-up’ / ‘spin-down’). Those two branches could be united again, as Wigner emphasizes, to restore the original (superposition) state, as long as there is no interaction. The measurement is completed when that state function meets the glass plate. Then only one single black spot is left - either in the spin-up or the spin-down position.
As discussed above, there can be no theory of how the state “decides” where to put that black spot, or else quantum theory would not be indeterminist. There is no need for an intermediate state, such as a mixed state, in between; it would not contribute anything to understanding the measurement process.
10 Conclusion
In the discussion of the measurement process so far, the central problem seems to have been the transition from the pure state (Wigner's no. 2) to the mixture of measurement results (Wigner's no. 3). I do not propose a solution to the specific problems of that transition. Rather, I offer a solution to the corresponding problem of measurement theory: in the process of measurement there is no such transition!
Literature:
Bächtold, Manuel, 2008: Five Formulations of the Quantum Measurement Problem in the Frame of the Standard Interpretation. Journal for General Philosophy of Science 39, p. 17-33.
Bell, John S., 1964: On the Einstein-Podolsky-Rosen paradox. Physics 1, p. 195-200
Busch, Paul; Lahti, Pekka J.; Mittelstaedt, Peter, 1991: The Quantum Theory of Measurement. Heidelberg etc.: Springer.
Clauser, John F., 1976: Experimental Investigation of a Polarization Correlation Anomaly. Phys.
Rev. Lett. 36; p. 1223. - Better known is: Aspect, Alain; Grangier, P.; Roger, G., 1982: Experimen-
tal Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell's
Inequality. Physical Review Letters 49, p. 91-94.
Drieschner, Michael, 2002: Moderne Naturphilosophie. Paderborn: mentis;
Drieschner, Michael, 2004: Die Quantenmechanik - eine Revolution der Naturphilosophie? Philo-
sophia Naturalis 41, p. 187-225.
Gibbs, Josiah Willard, 1902: Elementary Principles in Statistical Mechanics. New York: C. Scrib-
ner's sons.
Laplace, Pierre Simon de, 1814: Essai philosophique sur les probabilités. Paris: Courcier
Lyre, Holger, 2010: Why Quantum Theory is Possibly Wrong. Found. Phys. 40, p. 1429-1438.
Mittelstaedt, Peter, 1998: The Interpretation of Quantum Mechanics and the Measurement Process. Cambridge: Cambridge University Press.
Neumann, Johann von, 1932: Mathematische Grundlagen der Quantenmechanik. Berlin: Springer. = Mathematical Foundations of Quantum Mechanics. Translated from the German, ed. by Robert T. Beyer. Princeton, N.J.: Princeton University Press, 1955, ch. V.1.
Süßmann, Georg, 1958: Über den Meßvorgang. München: Beck. P. Mittelstaedt summarizes Süß-
mann’s account in his Philosophical Problems of Modern Physics, Heidelberg (Springer) 1975
Wigner, Eugene P., 1963: The problem of measurement in quantum theory. American Journal of
Physics 31, p. 6-15.
Wikipedia, 2014: http://en.wikipedia.org/wiki/Ensemble_interpretation