Of Particular Significance

Atoms of an Isotope Are Identical, Literally

Matt Strassler [December 14, 2012]

Now here’s a remarkable fact, with enormous implications for biology.  Take any isotope of any chemical element with atomic number Z.  If you take a collection of atoms that are from that isotope — a bunch of atoms that all have Z electrons, Z protons, and N neutrons — you will discover they are literally identical.   [A bit more precisely: they are identical when, after being left alone for a brief moment, each atom settles down into its preferred configuration, called the “ground state.”]   You cannot tell two such atoms apart.   They all have exactly the same mass, the same chemical properties, the same behavior in the presence of electric and magnetic fields; they emit and absorb exactly the same wavelengths of light waves.  This is a consequence of the identity of their electrons, of their protons and of their neutrons, which will be discussed later.

That all atoms of the same isotope are identical, and that different isotopes of the same element have nearly identical chemistry, is a profound fact of nature!  Among other things, it explains how our bodies can breathe oxygen and drink water and process salt and sugar without having to select which oxygen or water or salt or sugar molecules to consume.  Contrast this with what a construction company has to do when building a house out of bricks, or out of concrete blocks.  Bricks and concrete blocks vary, and are sometimes defective, and so a builder must exercise quality control, to make sure that cracked or over-sized or misshapen bricks and blocks aren’t used in the walls of the house.  No such quality control is generally needed for our bodies when we breathe; any oxygen atom will do as well as any other, because we only need the oxygen to make molecules inside our bodies, and chemically all oxygen atoms are essentially the same.  (This is all the more true since, for most elements, one isotope is much more common than the rest; for example, most hydrogen atoms [one electron and one proton] have no neutrons, and most oxygen atoms [eight electrons and eight protons] have eight neutrons.)  

37 Responses


  2. Dear Prof
    your comments about the identity of isotopes are not quite correct. In making them, you assume, without saying so explicitly, that each atom may assume any position in space, with the probability of being there given by the appropriate choice of statistics (Maxwell, Fermi or Bose-Einstein, as dictated by Z, temperature, crystal lattice{superconductivity!}, etc). This is a valid assumption for situations where the position of one atom does not depend upon the position of another atom. In a chemical compound, where atoms are held in (almost) fixed positions relative to each other, this assumption is no longer true. This is borne out by all the spectroscopic methods used by organic chemists, be it infrared, Raman, visible light, nuclear magnetic resonance . . .
    Take the hydrogen molecule as an example: If the two atoms are not bonded, the probability of each atom to have an up spin is independent of that of the other atom, it is 50%. Once they bond together as a single molecule, only one can have the up spin, the other must have a down spin. That is a consequence of the Pauli principle. And because their spin is different, they no longer are identical.

    In general, I find your approach to chemical bonding questionable, if not misleading. For example, I would never call NaCl to be a molecule. To be able to do so, I would demand to be able to determine which Cl atom is bonded to which Na atom, an impossibility since each Cl atom in the NaCl lattice is equally distant to six Na atoms, and vice versa. Please read also what the Wikipedia has to say about Daltonide and Berthollide compounds (http://en.wikipedia.org/wiki/Berthollide) and consider that non-stoichiometry is by no means rare among compounds, but rather mandated by thermodynamics.
    When you discussed the internal structure of the atom (http://profmattstrassler.com/articles-and-posts/particle-physics-basics/the-structure-of-matter/atoms-building-blocks-of-molecules/), you set yourself up for the long and fruitless discussion with Kudzu. What is missing there is the deBroglie wavelength of an electron at the energy of a chemical bond, i.e. at 1 eV or a fraction thereof. From there, it would have been rather obvious that chemical bonds must be treated within the context of the Schrödinger equation, i.e. as a standing wave of (several) electrons shrouding the nucleus and not as dimensionless charges flitting about the nucleus. Then you could have introduced the quantum numbers n, l and s, which ultimately determine whether a chemical bond will form, the bonding angle and the bonding distance.
    It’ll be interesting to see how you extricate yourself from the mess you have put yourself into.
    Wolfgang Vogelbein

    1. You talk a good game. But you understand neither the science nor the pedagogical issues involved.

      You haven’t understood the scientific point that the identity of atoms of the same isotope is independent of context, so talking about them as being different in a particular context isn’t relevant. If their identity *did* depend on context, all sorts of aspects of physics would be different. In particular, you wouldn’t be able to swap out one atom and replace it with another of the same isotope. In other words: in the word “pointless”, you can say that one of the “s”s is different from the other because of context — it comes at the end. But when I switch them, I get the word “pointless”, which, according to you, is different from the one I started with, but according to me is the same word. Your example of Hydrogen is exactly of this type.

      Answering your other question is pointless. I mean, pointless.

  3. <<>>

    Are you saying that ANYTHING we cannot “perfectly predict” is ultimately and inherently “random”? Anytime you can’t figure something out, just call it “random” and then move on with your life? So much for scientific investigation.


    Lucky for me, I didn’t make such a statement. Instead, I said:

    “You can only establish that the process obeys a given statistical model. Quantum mechanics yields statistical distributions, which are backed up by experiment. In other words, we insert “randomness” into our models and our equations, and sometimes we may even claim there’s “randomness” yielding the distribution, but that’s only for reasons of convenience; i.e. in order to have practical use for the phenomenon.”


    I am suggesting that our prediction models use randomness because that is the best we can do. We are unable to detect a single neutrino (or similar particle) reacting with an unstable nucleus. So, instead we use averages of large quantities of atoms decaying in order to make predictions.
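That statistical approach can be sketched numerically. The following is a minimal simulation (the per-step decay probability of 0.01 is an arbitrary illustrative number): each atom gets the same fixed chance of decaying in each time step, independent of its history, and the familiar half-life emerges sharply from the ensemble even though no individual decay is predictable.

```python
import math
import random

def simulate_half_life(n_atoms, p_decay, seed=0):
    """Simulate n_atoms that each decay with fixed probability p_decay
    per time step, independent of their history, and return the number
    of steps until half the sample has decayed."""
    rng = random.Random(seed)
    remaining = n_atoms
    steps = 0
    while remaining > n_atoms / 2:
        # Every surviving atom gets the same chance this step: the
        # process is memoryless, with no hidden per-atom clock.
        remaining -= sum(1 for _ in range(remaining) if rng.random() < p_decay)
        steps += 1
    return steps

# For a memoryless process the half-life is ln(2)/p steps.
p = 0.01
predicted = math.log(2) / p
measured = simulate_half_life(50_000, p)
```

With 50,000 atoms the measured half-life lands within a step or so of the prediction, while any single atom's decay time remains completely unpredictable.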


    They aren’t sure what the exact mechanism is. They only know that decay rates increase and decrease along with increases and decreases in solar particle flux. This phenomenon was confirmed by independent labs. Nevertheless, not knowing the precise details of a mechanism does not mean that there ultimately IS no mechanism. It only means that the best we can do is inject “randomness” into the theory in order to overcome our lack of precision/knowledge.


    1. The wonderful thing about science is that it has the ‘last word on nothing’; it is always possible, for example, that the earth is in fact flat and all our data was a gigantic collective mistake. However, if we assume that our experiments to date are true, then there is *no way* that the data can be explained by particles with definite positions and velocities, even if those positions/velocities can’t be known. (That is, they are ‘hidden’.) Whatever math there might be, it is more complex than that.

      To stick to the ‘But it could be!’ line is to take a stand with those who think the earth is 6000 years old. It would require a massive and systematic and utterly unlikely failure of basic experiments.

  4. Some scientists have suggested that the solar flares are producing increased particles (perhaps neutrinos) which when reacting with an unstable nucleus in a certain way, which may cause the decay. Granted, this is speculative at this time, but there is some evidence:


    Just because we have a hard time measuring when a reaction from one of these particles will trigger decay in an unstable nucleus, does not mean that there is ‘core randomness’. Instead, as previous posters have suggested, it just means that our methods/instruments are not sophisticated enough in order to predict it.

    “It is random” does not seem like a scientific statement, and I do not even think it is possible to measure a process and establish for a fact that it is “random”. You can only establish that the process obeys a given statistical model.

    Quantum mechanics yields statistical distributions, which are backed up by experiment. In other words, we insert “randomness” into our models and our equations, and sometimes we may even claim there’s “randomness” yielding the distribution, but that’s only for reasons of convenience; i.e. in order to have practical use for the phenomenon. That is not grounds to claim that there is ultimately no cause for radioactive decay whatsoever.

    1. “It is random” is as much a scientific statement as “It is not random” , it can be tested and disproved. What is unscientific is dogmatically sticking to an explanation in the face of good evidence.

      Part of the problem here is, I believe, the different uses of the word ‘random’. Radioactivity has definite ‘causes’ that aren’t random, but the process does not currently appear deterministic. Solar neutrinos would just be another non-random factor. To eliminate randomness entirely we would need to observe a perfectly predictable situation where a particle such as a neutrino caused a decay.

      If I read your posts right, you are suggesting that the universe is a ‘clockwork’ one, where there is no inherent randomness in QM. I myself rather liked this idea in my youth, but things like Bell’s theorem http://en.wikipedia.org/wiki/Bell%27s_theorem have led me to the view that randomness is an inherent part of our universe. I might be wrong but I do not currently see any compelling evidence.

      Incidentally the solar neutrino mechanism you link to would cause your original atom switching problem not to work, the atom in your hand will never decay because it will never be in the right location to be hit by a solar particle in ten seconds.

    2. Your words make rational sense, but they’re very naive. For one thing, you do need to learn about the Bell inequality and its generalizations. It can be shown — and it has been shown experimentally — that the correlation patterns in quantum mechanics are inconsistent with classical statistics. So what you’re suggesting is inconsistent with experiment. Other, more subtle alternatives to quantum mechanics may be possible, but nothing as simple as what you’re suggesting.
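The Bell-type tests mentioned here can be illustrated with the standard CHSH quantity (a textbook sketch: the singlet-state correlation and the analyzer angles below are the usual ones that maximize the quantum value, not anything specific to this thread). Any local hidden-variable model must satisfy |S| ≤ 2, while quantum mechanics predicts, and experiment confirms, values up to 2√2.

```python
import math

def E(a, b):
    """Correlation of spin measurements on a singlet pair with the two
    analyzers at angles a and b (the standard quantum prediction)."""
    return -math.cos(a - b)

def chsh(a1, a2, b1, b2):
    """CHSH combination; any local hidden-variable model keeps |S| <= 2."""
    return E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

# Analyzer angles that maximize the quantum value.
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
# |S| = 2*sqrt(2), about 2.83: beyond the classical bound of 2.
```

No assignment of pre-existing ("hidden") values to the four measurement settings can reproduce a correlation pattern with |S| above 2, which is why the measured violation rules out the simple classical picture.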

  5. How do we know that? Atoms of the same isotope may APPEAR identical from an external perspective, and the process of decay may APPEAR random. Nevertheless, is it possible that there are subatomic processes in an unstable nucleus that we are unaware of from our limited view? Historically, processes that appeared to be random have turned out not to be once we developed more precise instrumentation.

    Someone on this thread posted a comment linking to research saying that protons themselves may not be exactly identical, as previously thought, which may suggest that the nuclei of unstable atoms may not be identical, not to mention that gluon cloud configurations in quarks may differ just as electron shell configurations may differ from one atom to another.

    Perhaps inserting randomness in our models in order to predict decay is useful for large numbers of atoms, but to assume it is “random” for each single nucleus may be taking the concept too far.

    1. The problem is that the assumption of randomness works so *well*. The half-life of an isotope can be related to the difference in energy levels between the parent and daughter product plus the mechanism of decay. The standard model predicts the structure we see and fails to predict (as far as I am aware) any mechanism for radioactive decay that would be ‘non-random.’

      We have evidence that particles in a nucleus are highly identical; nuclei have specific energy levels that in several cases have been measured quite exactly, and any variation would show up as ‘broadening of the bands’ in these measurements. Likewise with hadrons themselves: we have measured excitations on things like the proton. The evidence that protons may not be identical is tentative, and the effect is hardly a major one.

      Then of course there is the question of whether or not simply not being identical would have a measurable or predictable effect on decay. (The chemical environment of electron-capture nuclei affects their decay rate, but not that of other decay modes.) And if the difference itself is random…

      Certainly if we find even that protons are not identical it will be a monumental discovery most certainly requiring extensive new physics.

      1. It works “well” in the sense that there is a predictable “average” for decay rates when looking at a large quantity of atoms. It does not work well when looking at an individual one.

        Kudzu wrote <<>>

        We can speed up decay rates for certain isotopes under certain conditions (like in a nuclear reactor). Solar flares of the sun tend to speed up decay rates for many radioactive isotopes so that even the average mentioned above is not actually constant (as previously thought). These ideas suggest that decay is not entirely “random”.

        1. The weakness when dealing with individual atoms is an inherent weakness of all random processes. It is what we *expect* if the process were indeed random. This is the problem with postulating a non-random process. All the facts so far are consistent with a random process.

          A big problem is that, as you note, radioactive decay isn’t entirely random, in that in order for it to occur various conditions have to be met. A fully ionized K-40 nucleus is stable; it needs an electron in the vicinity of the nucleus to decay. (This is usually a 1s electron that spends some time there, hence the dependence of the decay rate on chemical environment.) Likewise, changing conditions WILL change the decay rates of various isotopes as conditions are made more or less conducive to decay.

          But this does not eliminate the ‘core randomness’ of the process. You can double the rate of decay of an isotope, but all that means is that any given atom in it has twice the probability of decaying in any time. What you propose would be some mechanism where we could, in theory, measure a single atom and know exactly when it would decay, eliminating *all* randomness from the process. This is something that would be relatively easy to prove but very hard to disprove. It is one of those things like the ‘shadow biosphere’ where you can never be totally sure it isn’t there, but we have no good reason to assume it is.

    2. Thank you for this comment, which appears wise and reasonable but is hopelessly naive. You seem to think that this is something that theoretical physicists made up out of their heads, and that obviously it can’t be checked because, gosh, how could you possibly go in and compare two protons?

      Physicists didn’t make an assumption that protons are identical, that electrons are identical, etc; they considered this hypothesis with care, and recognized that the question could be tested, using statistical considerations, which underlie thermodynamics.

      Consider three “A”s. Now if they are not the same — call them A1, A2, A3 — then there are 3! = 6 ways to arrange them: A1A2A3, A1A3A2, etc. If they are identical, there is only one: AAA. The difference isn’t that big, but once we have a million A’s, the difference between having one arrangement and roughly (1,000,000)^(1,000,000) arrangements becomes pretty noticeable. (Keep in mind that a drop of water has a million million million molecules of water in it.) And it has a huge impact on how a system changes with temperature, and on how energy is distributed into a system, etc.
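The distinguishable-versus-identical counting can be checked directly (a small sketch; the labels A1, A2, A3 are illustrative tags, nothing more): labeled objects have 3! distinct orderings, which collapse to a single arrangement when the labels are erased.

```python
from itertools import permutations
from math import factorial

# Three labeled letters: every reordering is a distinct arrangement.
labeled = set(permutations(["A1", "A2", "A3"]))
# Three identical letters: every reordering is the same arrangement.
identical = set(permutations(["A", "A", "A"]))

assert len(labeled) == factorial(3)  # 6 distinct orderings
assert len(identical) == 1           # only AAA
```

Statistical mechanics amplifies this bookkeeping difference into measurable thermodynamic predictions, which is what makes the identity of particles testable.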

      Following along similar (but more sophisticated) lines, the list of tests of the identity of elementary particles and of protons, neutrons, nuclei, isotopes of elements, etc. is very long indeed. Here are three sets:

      *** The Pauli exclusion principle, stated crudely in chemistry class, is in fact the statement that the states involving electrons change by a minus sign if you exchange one electron with another. This has a big effect on the scattering of two electrons; if it weren’t true, it would change the scattering rate by a factor of two. Quantum mechanics does give the correct scattering rate (and even its angular dependence) but only if the electrons are identical. That is not all, of course; atomic structure and even the solidity of solid matter depend on this principle for electrons.

      *** Without a similar Pauli exclusion principle affecting protons, and a similar one affecting neutrons, nuclear physics would be completely different from what is observed. And there would be no neutron stars in the sky, or even white dwarfs; they would collapse to form black holes, because only the Pauli exclusion principle keeps them from doing so.

      *** If atoms of a certain class were not identical, then bosonic atoms could not form Bose-Einstein condensates. This requires all of the atoms be in lock-step, which is not possible if they are different from one another. But Bose-Einstein condensates do indeed form: http://www.nobelprize.org/nobel_prizes/physics/laureates/2001/ . Of course the same is true for photons, which can be made into a laser as a result. Similar statements apply to superfluids like helium-four, and superconducting materials.

      Basically, your question is a little like asking whether we’re sure the earth orbits the sun due to gravity. Yes, we’re sure, and we have tons of evidence to back up the statement. Not only that, but a lot of modern technology was designed under the assumption that it was true, and that technology works. There are hundreds of experiments (including many going on at the Large Hadron Collider right now) that could have proved the identity of particles was false, but none have done so. So this is just not something open for much debate.

      But the other point is that back in the 1940s physicists understood why all electrons are identical, etc. In quantum field theory, which is the essential mathematics of the Standard Model that describes all of particle physics in such great detail, particles of a particular type are ripples in a quantum field. Two ripples on a pond have identical properties; and two ripples in an electron field do too.

      1. Once again I am awed by your knowledge. I knew all of the points to which you refer, yet I had not linked them together and asked myself what conclusion they pointed to. This post has fundamentally changed my view of the universe. If not drastically then at least deeply.

      2. Let me bring up the uncertainty principle of Quantum Theory for a moment, and let’s see if that takes the conversation somewhere.

        As you know, we can only know both the position and momentum of a particle within a certain degree of accuracy. The more accurate of our measure of the position, the less accurate will be the measure of momentum (and vice versa). We can never know the exact values of both.

        Would you say that the exact values of both do not exist at all?

        1. In the case of position-vs-velocity, I would say no, the exact values do NOT exist. Given our current understanding of particles as waves in a field the uncertainty is ‘built in’; you cannot get an infinitely precise value for either property and measuring one more precisely does not simply ‘hide’ the true value of the other but in fact changes the object being measured so that the other property becomes less exact. It is not simply a matter of not having accurate enough experiments but a fundamental property of the universe.

          1. Kudzu wrote: In the case of position-vs-velocity, I would say no, the exact values do NOT exist.

            If a particle has a position, does that mean it does not have a velocity? If it has velocity, does it not have a position? All the uncertainty principle says is that only one can be precisely KNOWN. It does not say that the other does not exist.

            Kudzu wrote: measuring one more precisely does not simply ‘hide’ the true value of the other but in fact changes the object being measured so that the other property becomes less exact.

            You are confusing the uncertainty principle with the “observer effect”. I don’t believe they are the same concept.


            1. That’s all the uncertainty principle “says”, when you write it in words. But when you write it in math, it says much more. You’re incorrect about the math.

              What the math says is that there’s no meaning to the questions that you are asking.

              Now maybe the math is incomplete. But again, it’s been shown, using the math and testing it using experiments, that nothing so simple as “hidden variables” (i.e., the idea that you are espousing, that the particle has both a position and a velocity, you just can’t measure them) can be consistent with DATA. That’s the Bell inequality and its generalizations. This is not an interpretation issue. It’s a data issue. You’re ignoring data. That’s illegal in physics.

              Why don’t you go learn about this and leave us alone until you understand it?

        2. The uncertainty principle is a manifestation of the basic nature of particles. They are not ‘small hard balls’ like we tend to imagine them. They are waves like the waves on the ocean (but in three dimensions.) This means that any given particle is not a ‘fixed’ object that remains the same whether or not we can measure it. Instead when we measure a property we arrange that particle in a particular way.

          Imagine trying to measure the *exact* position of a wave in a pond. A wave has no clearly defined edge there, it curves into the flat surface of the pond gradually. So its position is not exact, there is uncertainty in it. (We could try and define the position of the exact center of the wave, but to find *that* exactly we need to measure the edges of the wave exactly.)

          The only way we can improve the accuracy is to somehow ‘squash’ the wave into a smaller volume of space, make it more like a point particle. There is in fact a way to do this; instead of having a wave with a single momentum, we can make the wave by combining a number of lesser waves. These will interfere in a manner such that the resulting wave will occupy a smaller and smaller volume of space, allowing its position to be known more and more accurately. Shown here: http://upload.wikimedia.org/wikipedia/en/d/db/Sequential_superposition_of_plane_waves.gif

          But what have we done by making the wave out of all these ‘sub-waves’? Each sub-wave can be considered as being the same particle but having a different frequency (thus energy, and thus momentum). So in making our wave’s position more accurate we have made its momentum (and, by relation, its speed) less accurate, not merely obscured.
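The trade-off can be sketched numerically (a rough illustration in arbitrary units; the half-maximum width is just a convenient stand-in for the packet's extent, and the central momentum k0 = 10 is an arbitrary choice): summing plane waves over a wider band of momenta yields a more localized packet.

```python
import cmath

def packet_width(momentum_spread, n_waves=41):
    """Superpose n_waves equal-amplitude plane waves with momenta spread
    evenly around a central value k0, and return the width of the region
    where |psi(x)| exceeds half its peak value."""
    k0 = 10.0
    ks = [k0 + momentum_spread * (i / (n_waves - 1) - 0.5)
          for i in range(n_waves)]
    xs = [0.01 * n for n in range(-1000, 1001)]  # x from -10 to 10
    amps = [abs(sum(cmath.exp(1j * k * x) for k in ks)) for x in xs]
    peak = max(amps)
    above = [x for x, a in zip(xs, amps) if a > peak / 2]
    return max(above) - min(above)

# A wider band of momenta produces a more sharply localized packet.
narrow_band = packet_width(1.0)  # small momentum spread: wide packet
wide_band = packet_width(4.0)    # large momentum spread: narrow packet
```

Quadrupling the momentum spread shrinks the packet's width by roughly the same factor, which is the uncertainty relation showing up in the arithmetic of wave superposition rather than in any act of measurement.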

          This can be considered similar to the observer effect, which basically states that a rough observation alters what is being observed. Many people think this explains the uncertainty principle: that somehow all our measurements of, say, speed must disturb the system’s position, and that the ‘real’ position was there, just unobservable. But in fact we can see this effect even without measuring a system.

          We can look for experimental evidence of this, and one of the best examples is to pass a laser beam through an adjustable slit. As the slit is narrowed, at first the beam that emerges from the slit narrows too; this is logical, since less of the beam can get through a narrower slit.

          But eventually something strange happens: the beam begins to *widen*. In fact it begins to disperse, to radiate out in a fan from the slit. This is because, to have passed through the slit, the photons must have been in a small volume of space; their position must have been quite accurate. Because of the way the world works, their momentum becomes less certain. The beam that passes through the slit is now not a laser beam of particles all moving in the same direction but a spreading, radiating beam.

          But nothing has been measured; the photons that get through the slit get through precisely because they have not hit anything on the way through. There can be no observer effect here, because all the photons that are interfered with are absorbed, destroyed. Only unmolested ones pass.

          See here for video demonstration: http://www.youtube.com/watch?v=a8FTr2qMutA This is a neat demonstration because it is not hard to build and test in your own home.
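The geometry of that demonstration follows from single-slit diffraction (a back-of-the-envelope sketch; the 650 nm wavelength and the slit widths below are illustrative numbers, not taken from the video): the first dark fringe sits at sin θ = λ/a, so a narrower slit a fans the beam out more widely.

```python
import math

def central_lobe_half_angle(wavelength, slit_width):
    """Half-angle of the bright central diffraction lobe for a single
    slit: the first dark fringe is at sin(theta) = wavelength / slit_width."""
    return math.asin(min(1.0, wavelength / slit_width))

lam = 650e-9  # a typical red laser pointer, 650 nm (illustrative value)

wide_slit = central_lobe_half_angle(lam, 1.0e-3)    # 1 mm slit
narrow_slit = central_lobe_half_angle(lam, 1.0e-4)  # 0.1 mm slit
# Narrowing the slit tenfold spreads the emerging beam roughly tenfold.
```

The inverse relationship between slit width and spread angle is exactly the position-momentum trade-off: confining the photons' position sideways (small a) widens their sideways momentum distribution.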

          1. I think Matt just made a couple interesting comments:

            “Now maybe the math is incomplete. …. That’s the Bell inequality and its generalizations.” – Matt

            Yet you both act as if the math IS complete and act as if these are NOT generalizations.

  6. Let’s do this mind experiment. We have two radioactive atoms of the same isotope, A and B. Let’s say that A is in your hand and B is in mine (assume that all external forces are exactly the same for each atom). Now, yours (A) decays after 10 seconds and mine (B) does not. Imagine us going back in time 10 seconds and swapping atoms so that A is in my hand and B is in yours. Which one will decay in 10 seconds? I will suggest that the one in my hand will, because the internal workings (arrangements of subparticles in the atom) would result in decay in that particular atom.

    1. Recall however that radioactive decay is a quantum process. Atoms do not have little ‘internal clocks’ that count down to their decay time. As such if we reversed time and swapped the atoms there’s no guarantee that either of them would decay in ten seconds.

    2. The particular experiment you’ve just suggested can’t be done, even logically. [I.e. you can’t go back and do the same experiment with the same atoms after they’d decayed.] You have to use logic if you’re going to do science; once you drop logic you make lots of mistakes.

      How about suggesting an experiment you could actually do? Let’s throw the two atoms at each other. Suppose one comes back to your hand and the other comes back to mine. Did they miss each other, so that you caught mine and I caught yours? or did one bounce off the other, so you caught yours and I caught mine? Well, according to you, we can tell the difference. But experiment shows we can’t tell, because the scattering probability, which can be calculated, would be larger if we could tell the difference. (And you can test this by scattering non-identical atoms off each other and checking that your calculation always gives the correct scattering rate in that case; only when the atoms are of the same type do you get a different answer.)

      There are thousands of other checks.
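The amplitude bookkeeping behind that scattering test can be sketched with a toy model (the function f below is made up purely for illustration; only the structure of add-then-square versus square-then-add matters): for distinguishable particles the "miss" and "bounce" probabilities add, while for identical particles the amplitudes combine first, enhancing the 90-degree rate for bosons and suppressing it for same-spin fermions.

```python
import math

def f(theta):
    """Toy scattering amplitude, invented for illustration only."""
    return complex(math.cos(theta / 2), 0.3)

theta = math.pi / 2  # 90-degree scattering: theta and pi - theta swap roles

# Distinguishable particles: miss vs. bounce are separate outcomes,
# so their probabilities add.
distinguishable = abs(f(theta)) ** 2 + abs(f(math.pi - theta)) ** 2
# Identical bosons: the two outcomes are the same outcome, so the
# amplitudes add before squaring; the 90-degree rate doubles.
bosons = abs(f(theta) + f(math.pi - theta)) ** 2
# Identical fermions in the same spin state: the amplitudes subtract,
# and the 90-degree rate vanishes.
fermions = abs(f(theta) - f(math.pi - theta)) ** 2
```

Whether the identical-particle rate comes out larger or smaller than the distinguishable one depends on boson versus fermion, so measured scattering rates discriminate cleanly between "same particle" and "merely similar particles."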

      1. Matt wrote: “Thank you for this comment, which appears wise and reasonable but is hopelessly naive. You seem to think that this is something that theoretical physicists made up out of their heads, and that obviously it can’t be checked because, gosh, how could you possibly go in and compare two protons?”

        I am not talking about comparing just two atoms of the same isotope or two protons from an external view. I am talking about taking two atoms of the same isotope or protons and looking at the specific details of the inner workings, like the specific configuration of an unstable nucleus at a particular moment or, in the case of a proton, the quarks, gluons, etc. that make up the proton.

        With protons, are the colors, flavors, orientations, etc., exactly identical at any given time? Can gluons split into virtual quarks in one proton while remaining gluons in the other? The idea that every proton is made from two up quarks and one down quark is incorrect. A proton has two more up quarks than up antiquarks, and one more down quark than down antiquarks, and they are moving all over the place. I don’t imagine the configurations of two protons to be exactly the same, but instead that the net particle appears to be the same.

        Matt wrote: Let’s throw the two atoms at each other. Suppose one comes back to your hand and the other comes back to mine. Did they miss each other, so that you caught mine and I caught yours? or did one bounce off the other, so you caught yours and I caught mine? Well, according to you, we can tell the difference. But experiment shows we can’t tell,….

        My point is this, just because WE can’t tell the difference with our methods and measurements at this time … does not mean that ultimately two atoms of the same isotope are exactly alike.

        Of course, I understand that through experiments and things like the diffusion problem, you can get at the movement of nuclei and protons. Yes, I also know that these movements currently may only be well-modeled using quantum mechanics, which assumes random behavior. But I am saying that quantum mechanics may not be the end-all, or pinnacle of our ability to understand how matter behaves. It may be unproductive to assume that randomness is the inherent underpinning of nature.

  7. So are these authors just completely on drugs? http://arxiv.org/abs/1302.6012 “Nonidentical protons” T. Mart, A. Sulaksono (Submitted on 25 Feb 2013) We have calculated the proton charge radius by assuming that the real proton radius is not unique and the radii are randomly distributed in a certain range.

  8. You point out that the chemical activity of different isotopes of the same element is just about identical and so one’s body does not have to pick and choose. However, your choice not to mention basic molecules of an element, oxygen for example, leaves out the possibility of an interesting contrast between different isotopes, which can be processed without problem, versus different allotropes: O3 (ozone) versus O2, which do not affect the body in the same way at all. I realize that this would require a much longer article, but somehow I felt it was missing.

    1. Hmmm. I haven’t thought about where that would fit in my presentation. I’m trying to get to particle physics as quickly as possible and not do too much with molecules. Maybe at some point I’ll be able to add that in.

  9. I’m not sure that sugar is the best example because of the possibility of chirality — the difference between D-glucose and L-glucose is very important for biology!

  10. Would a radioactive atom (Say K-40) be a defective building block?

    If any two given atoms of the same isotope of an element are completely identical, how does this sit with the Pauli exclusion principle? Surely something must differentiate them, or do they obey the maths of bosons?

    1. I’m probably stepping on the professor’s toes here, but atoms aren’t single particles. In two atoms of the same element, each electron sits in its proper orbital within its own atom, obeying the principle, but the two atoms aren’t linked (unless they form a molecule, in which case each electron is in its orbital and the valence electrons are in their hybrid bonding orbitals, all of which obey the exclusion principle).

      1. You are correct in that atoms are composite particles, but so are things like protons. An atom can behave like a single fundamental particle in that I can perform experiments like the double-slit experiment on it. (I believe the largest aggregation this has successfully been performed on is C60 fullerene molecules.) I also know that atomic nuclei can be fermions (odd number of nucleons) or bosons (even number of nucleons), and that this is directly responsible for, say, helium-4’s ability to become a superfluid at higher temperatures than helium-3.
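      The even/odd counting in the comment above can be made precise with a short exchange-statistics argument (a sketch of standard quantum mechanics, not taken from this post; note that for a neutral atom the electrons must be counted along with the nucleons):

      ```latex
      % Swapping two identical composites, each built from n fermions,
      % swaps every constituent pair, multiplying the wavefunction by
      \[
        \psi(2,1) = (-1)^{\,n^2}\,\psi(1,2) = (-1)^{n}\,\psi(1,2),
      \]
      % since n^2 and n have the same parity. So n even gives a boson
      % (e.g. neutral helium-4: 2p + 2n + 2e = 6 fermions), while n odd
      % gives a fermion (e.g. neutral helium-3: 2p + 1n + 2e = 5 fermions).
      ```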

        1. Kudzu is correct, andy; your objection isn’t accurate. Because of quantum mechanics, composite objects in their ground states can still be exactly identical, just as Kudzu describes.

          Kudzu, your point about radioactivity is a good one. Not sure how to bring it in though. Our biology is designed with error-correction mechanisms, to handle the damage from the small amounts of radioactivity that we’re likely to encounter in daily life.

          As for Pauli’s exclusion principle for identical fermions — I am confused about your point. Electrons are identical, in that you can swap one for another and nothing changes. They are indistinguishable. But that doesn’t mean they are *doing* the same thing; one of them could be here on earth and another on the moon, or one could be in the inner shell of a carbon atom with another in the outer shell. (Think about identical twins; you can’t tell them apart, but they don’t have to do the same thing at the same time.) Since electrons are fermions, and identical, they can’t be in the same location, doing exactly the same thing; that’s Pauli exclusion. Exactly the same logic holds for atoms of the same isotope in their ground state. The statement that they are identical isn’t the statement that they are doing identical things; it is the statement that if you swapped two of them, making the first do what the second was doing and vice versa, you wouldn’t be able to tell you’d made a swap. And if they are fermions, they can’t be doing exactly the same thing (since, for experts, the swap would produce a relative minus sign in the wave function, which would be impossible if they were behaving identically.)
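          The “relative minus sign” mentioned for experts can be written out in one line (again, standard quantum mechanics, not specific to this discussion):

          ```latex
          % For two identical fermions the wavefunction is antisymmetric under swap:
          \[
            \psi(x_1, x_2) = -\,\psi(x_2, x_1).
          \]
          % Setting x_1 = x_2 = x (both particles doing exactly the same thing) forces
          \[
            \psi(x, x) = -\,\psi(x, x) \;\Longrightarrow\; \psi(x, x) = 0,
          \]
          % which is the Pauli exclusion principle: identical fermions can never be
          % in exactly the same state, even though swapping them is undetectable.
          ```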

          1. Right. I guess I should have been more specific, and on further thought this question may be better worded not relating primarily to the exclusion principle at all.

            As you know, you cannot force an arbitrary number of identical fermions into the same space; I cannot make a ‘laser’ beam out of fermions. Given that some atoms are fermions, I assume that you cannot place two of them in the same space in their ground state. Does this affect interatomic forces? I have always assumed the primary force keeping atoms from packing closer together than they do was electromagnetic repulsion between electrons in the orbitals of neighboring atoms, though I have recently seen it argued with some force that it is in fact the exclusion principle not allowing one atom’s electrons to occupy the same space as another’s. I am very doubtful of this, but I would like to know whether the exclusion principle has any effect on atoms: does a gas of helium-4 atoms have a higher density (in terms of atoms, of course) than one of helium-3, since He-4 atoms are bosons?

          2. Electrons come in two configurations: 1/2 spin up and 1/2 spin down. This is why two electrons can occupy the same orbital at the same time. To think of it another way, as wave functions: if they overlap at the same location with opposite spin, they form standing waves. If they had the same spin and occupied the same location, they would cancel each other out; that is, if the peak of one wave lined up with the valley of another wave, and they had the same spin, then rather than a standing wave, the function would collapse to zero, a flat line, and they would cease to exist as electrons. But because they can have opposite spin, or angular momentum, they form a stable standing-wave function. Hence the reason only a maximum of 2 electrons can occupy any given orbital: they have 3 out of 4 identical quantum properties, spin being the difference. If they shared all 4 quantum properties, the Pauli exclusion principle would not allow them to occupy the same space. It is kind of like two ghosts trying to stand in the same place: they could not, nor could two solid people. However, if you pair up a ghost and a solid person that are identical in every other respect, the two can easily stand in exactly the same physical space at the same time, because of that one opposite quality.
