The Science of Scale — Greg Egan


The Lightest Mass Sets the Scale

What is it that determines the size of an atom? A hydrogen atom consists of one proton and one electron, with equal and opposite electric charge, but the proton is much heavier than the electron: its mass is about 1836 times greater. Both particles obey the same laws of quantum mechanics, but the quantum-mechanical wave function that lets us calculate the probability of finding a particle at a given location is much more spread out for the electron than it is for the proton. So while the proton contains most of the mass of the atom, the electron takes up most of the space that the atom occupies.

There is a small chance of finding an electron at any distance at all from the proton, so we can’t ask for the radius of a sphere in which the electron is definitely contained. However, there is a simple length scale we can use instead, known as the Bohr radius. In the early history of quantum mechanics, this was conceived of as the radius of a circular orbit in which an electron travelled around the nucleus, but while that picture of atoms was simplistic, the Bohr radius still turned out to be a useful quantity in atomic physics. In modern terms, it is approximately equal to the most likely distance between the electron and the proton.

[Image: Probability of electron's distance from hydrogen nucleus]

The image on the right plots the probability of finding the electron at various distances from the proton in a hydrogen atom, where distance is shown as a fraction of the Bohr radius. Note that the average distance of the electron is 50% greater than the most likely distance, because of the long tail of probability that stretches out to greater distances.

The Bohr radius is usually given the symbol a₀, and its value is:

a₀ = ε₀ h² / (π e² mₑ)

The constant ε₀ here measures an aspect of the electrostatic field known as the permittivity of the vacuum, while h is Planck’s constant, e is the electric charge on a proton or electron, and mₑ is the mass of an electron. If we insert the values for these quantities into this formula, we get:

a₀ ≈ 5.291 × 10⁻¹¹ m
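
As a quick sanity check, this formula can be evaluated in a few lines of Python, using CODATA values for the constants:

```python
import math

# CODATA values (SI units)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
h    = 6.62607015e-34     # Planck's constant, J*s
e    = 1.602176634e-19    # elementary charge, C
m_e  = 9.1093837015e-31   # electron mass, kg

# Bohr radius: a0 = eps0 * h^2 / (pi * e^2 * m_e)
a0 = eps0 * h**2 / (math.pi * e**2 * m_e)
print(f"a0 = {a0:.4e} m")   # a0 = 5.2918e-11 m
```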

If it strikes you as a bit strange that the mass of the proton doesn’t appear at all in this formula, you’re right, because the main reason the Bohr radius is only approximately equal to the most likely distance between the electron and the proton is that the standard definition of a₀ comes from treating the proton as being so much heavier than the electron that it remains completely fixed. But just as the Sun does not stay perfectly fixed while the Earth and other planets orbit around it, neither does an atomic nucleus. And in both the celestial mechanics of two orbiting bodies, and the quantum mechanics of a hydrogen atom, the simplest way to adjust our calculations to account for this is by using what is known as the reduced mass. Whenever two bodies are subject to equal and opposite forces, if we analyse their motion relative to each other — rather than their individual trajectories in space — the mathematics of the problem turns out to be equivalent to that of a single body whose mass is equal to the reduced mass, moving under the influence of the original force around an immovable centre of attraction.

For a hydrogen atom, the reduced mass is:

m_reduced = (mₑ mₚ) / (mₑ + mₚ)

where mₚ is the mass of the proton. Because mₚ/mₑ ≈ 1836, we have:

m_reduced = mₑ / (mₑ/mₚ + 1) ≈ (1836/1837) mₑ ≈ 0.99946 mₑ

So, in the case of a hydrogen atom, the difference we get from using m_reduced in place of mₑ is fairly small. This is not to say that physicists don’t care about the difference; the properties of hydrogen atoms have been calculated and measured with far greater precision than this. But for the sake of making the Bohr radius a kind of general yardstick for atomic phenomena, rather than a specific, precisely measured property of hydrogen atoms themselves, it is defined by the formula we’ve given, with no reference to the mass of any particle other than the electron.

The same formula that gives the Bohr radius also yields useful distances for other atomic-scale systems if we make some simple changes. If, instead of a hydrogen nucleus with a single positive charge, we have a single electron orbiting a larger nucleus that contains Z protons, then making the substitution e² → Z e², which amounts to dividing a₀ by Z, gives the most likely distance of the single electron from that larger nucleus. For example, if we take a helium atom, with two protons and two neutrons in the nucleus and two electrons, and strip away one of the electrons, the remaining electron will be most likely to be found at a distance of approximately a₀/2 from the nucleus. In other words, the positive ion He⁺ is about half the size of a hydrogen atom.

Another change we can make is to replace mₑ with the mass of a different negatively charged particle. There is a particle known as a muon, which is similar to an electron in many respects; it has the same electric charge, but it has a larger mass, mμ ≈ 207 mₑ. Unlike the electron, it can decay into other particles, and it has a mean lifetime of about 2.2 microseconds. However, that is long enough for experimenters to create atoms which contain muons instead of electrons, and to measure some of their properties.

Because the muon is not as light as an electron, the reduced mass:

(mμ mₚ) / (mμ + mₚ)

is substantially different from the mass of the muon itself:

(mμ mₚ) / (mμ + mₚ) ≈ 207 × 1836 mₑ / (207 + 1836) ≈ 186 mₑ

So an atom of muonic hydrogen is about 186 times smaller than an atom of ordinary hydrogen, rather than 207 times smaller.
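
The same arithmetic in a few lines of Python, using the measured mass ratios:

```python
m_e  = 1.0       # electron mass, used as the unit of mass
m_mu = 206.768   # muon mass in electron masses
m_p  = 1836.153  # proton mass in electron masses

def reduced_mass(m1, m2):
    """Reduced mass of a two-body system."""
    return m1 * m2 / (m1 + m2)

print(reduced_mass(m_e, m_p))    # ~0.99946: ordinary hydrogen
print(reduced_mass(m_mu, m_p))   # ~185.8:   muonic hydrogen

# The Bohr-like radius scales as 1/m_reduced, so muonic hydrogen
# is about 186 times smaller than ordinary hydrogen, not 207 times.
```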

Apart from the change in size, how is muonic hydrogen different? Whenever positive and negative electric charges are brought together, the system has a certain potential energy: the amount of energy it would take to separate the individual charges, pulling them apart against the attractive force they experience. For two pointlike particles, this potential energy is proportional to 1/r, where r is the distance between them. It also depends on the amount of electric charge, but since that is exactly the same in both the ordinary hydrogen atom and the muonic hydrogen atom, the energy required to pull a muonic hydrogen atom apart is about 186 times greater than for the ordinary atom.

This factor of 186 also shows up in the spectrum of muonic hydrogen: the set of frequencies of light [and other electromagnetic radiation] that are absorbed or emitted when the atom changes its quantum energy level. Photons of light absorbed or emitted in these transitions have frequencies that are proportional to the difference in energy between the levels, and since all those energies are 186 times greater, all the frequencies are 186 times greater too.

Increasing the frequency of a wave of light by a factor of 186 means making the period of the wave — the time it takes to complete one cycle — smaller by a factor of 186. Any clock that relied on transitions between electron states in an ordinary hydrogen atom would run 186 times faster if it was converted to muonic hydrogen.

This illustrates a very general principle that comes from the basic properties of quantum wave functions. The energy of a particle is proportional to the frequency of its quantum wave function (the number of cycles the wave fits into a given time), and the momentum is proportional to the spatial frequency (the number of cycles the wave fits into a given distance). But the energy and momentum are both proportional to the mass as well, so both the period of the wave, and the wavelength, must be inversely proportional to the mass.

This means that if all the masses in a quantum system were changed by the same factor, then all distances and times related to the size and behaviour of the system would be reduced by exactly the same proportion. Of course when we swap a muon for an electron in an atom, we are not quite doing that: the mass of the nucleus stays the same.

Apart from the electron and the muon, there is another similar particle known as the tau lepton. This is 3477 times heavier than an electron, and has a much shorter lifetime than a muon, around 3 × 10⁻¹³ seconds. The term lepton comes from the Greek for “small”, but tau leptons are more massive than protons. In our universe, there are (almost certainly) just three generations of elementary particles, including these three leptons.

In the novel Scale, however, the premise is that there are eight leptons, with masses that are all powers of two times the mass of the lightest one: m₀, 2 m₀, 4 m₀, 8 m₀, 16 m₀, 32 m₀, 64 m₀, 128 m₀. What’s more, they all have lifetimes much longer than the current age of the universe. We will call these particles e₀, e₁, ... e₇, in order of increasing mass.

What can we say about atoms that contain several different kinds of leptons, with different masses? In our own universe, people have formed atoms that contain both muons and electrons, so this is not just a hypothetical question. In a helium atom with one muon and one electron, the muon is located much closer to the nucleus than the electron, for two reasons: first, because it is more massive, but also because it is exposed to the full charge of the nucleus (+2e), whereas the electron spends most of its time far enough away from both the nucleus and the negatively-charged muon (–e) that the muon effectively “screens” a substantial part of the nucleus’s charge. This leaves the electron feeling something closer to the attraction of the combined charge of 2e – e = e, so while the muon’s location is roughly that of the muonic Bohr radius with the further reduction that comes from dividing by Z = 2, the electron’s location is roughly that of the ordinary Bohr radius for the hydrogen atom.

[Image: Probability of leptons' distance from helium nucleus, for one lepton double the mass of the other]

In the universe of Scale, an electrically neutral atom might contain any mixture of leptons of different masses, so long as their total number, and hence the total negative charge, balances the positive charge of the nucleus. But while an atom like this can be a lot more complicated than the muon/electron helium atom, the same general principle applies. The overall scale of the atom will be determined by the mass of the lightest leptons that the atom contains, the number of such leptons, and the screened charge of the nucleus: its actual positive charge, plus the negative charge of the heavier leptons that sit closer to the nucleus than the lightest ones.

The image on the right shows the probability of finding e₀ and e₁ leptons at various distances from a shared helium nucleus. Because they differ in mass only by a factor of 2, the heavier lepton’s wave function is not as dramatically reduced as a muon’s would be, but the screening of the nuclear charge is still enough that the lighter lepton does not experience the full charge of Z = 2, and so its wave function (blue) is only about 20% smaller than that of hydrogen. On the other hand, the wave function for the heavier lepton (red) is smaller by roughly a factor of four, due to both the attraction of [almost] the full Z = 2 charge of the helium nucleus, and the lepton’s mass being twice as large.

As the number of each kind of lepton in the atom increases, they will fill up the orbitals available to them: these are the different wave functions for particles moving around the atom, which differ in their energy and angular momentum quantum numbers. Leptons are fermions, which means no two of them (of a given kind) can occupy the same quantum state, and each orbital can only be occupied by a maximum of two leptons, which distinguish themselves by having different directions of spin.

The chemical properties of an ordinary atom are largely determined by the number of electrons, and the resulting pattern of filled and partly empty shells. In an atom with several different kinds of leptons, it will be the number of the lightest leptons, and the pattern of shells they occupy, that is the dominant influence on its chemical properties. However, unlike the electron-only case, where the charge of the nucleus fixes the number of electrons, the number of the lightest leptons will be set by both the charge of the nucleus and the number of other leptons.

So while in our universe, each chemical element with a fixed number of electrons will generally come in a fairly small number of isotopes, which differ only by the number of neutrons in the nucleus, in the universe of Scale there will be a much larger set of “meta-isotopes”, where only the screened charge of the nucleus is the same, but the actual number of protons, and the particular mixture of heavier leptons present, are free to vary. For example, a helium nucleus combined with e₀ and e₁ leptons will be a meta-isotope of hydrogen, because although it has a nuclear charge of +2e and two leptons in total to balance that charge, it only has one lightest lepton, making it chemically similar to a heavy isotope of hydrogen.

Because the lightest leptons set the overall scale of an atom, in the novel matter is described as “Scale Zero”, “Scale One”, “Scale Two” ... “Scale Seven” if it lacks 0, 1, 2, ... 7 of the eight possible leptons. In “Scale S” matter, the lightest leptons in the atoms are the ones we call e_S.

  • The lightest leptons present in “Scale S” matter, e_S, have mass 2^S m₀, where m₀ is the mass of the lightest of the eight leptons.
  • The atoms in “Scale Zero” ... “Scale Seven” matter are successively smaller by factors of two. All else being equal, “Scale S” atoms are roughly (½)^S the size of “Scale Zero” atoms.
  • The quantities of energy involved in the chemistry of these atoms are successively greater by factors of two, with the energy stored in a molecule, or required to break a chemical bond, being 2^S times greater in “Scale S” matter than in “Scale Zero” matter.
  • Individual atoms with the same kind of nucleus have roughly the same mass (since the mass of the nucleus is still much greater than that of the leptons), so matter becomes denser mostly due to the smaller size of the atoms. All else being equal, “Scale S” matter is about (2^S)³ = 8^S times denser than “Scale Zero” matter.
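
A few lines of Python tabulate these factors for all eight scales:

```python
# Relative size, bond energy, and density of "Scale S" matter,
# with "Scale Zero" matter as the baseline.
for S in range(8):
    size    = 0.5 ** S    # atoms are (1/2)^S as large
    energy  = 2.0 ** S    # chemical energies are 2^S as large
    density = 8.0 ** S    # density grows as (2^S)^3
    print(f"Scale {S}: size x{size:.5g}, bond energy x{energy:.0f}, "
          f"density x{density:.0f}")
```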

Cosmic Abundance of the Leptons

In the universe of the novel, there are eight different kinds of leptons with different masses, and unlike muons and tau leptons in our own universe, we don’t have to worry about them decaying away. But what would we expect the relative numbers of these different leptons to be, after the universe cooled down in the aftermath of the Big Bang? Will there be more of one kind than another?

You might think that the heavier leptons would be scarcer, since it costs more energy to produce them. However, it doesn’t turn out that way!

In the very early universe, the temperature was so great that particles and antiparticles of all kinds were constantly being produced and then annihilating each other. We do not yet understand why there was slightly more matter than antimatter produced in our universe, but if we assume that the same thing happens in the universe of Scale, and that this particular aspect of the process does not act differently for the different kinds of leptons, then what we need to calculate is the number of leptons and antileptons of each kind that were present just as the universe cooled down enough that they were no longer being produced. So long as the matter-to-antimatter ratio is the same for each kind of lepton, then after all the antileptons of a given kind have annihilated a matching number of leptons, the number of surviving leptons in each case will be the same fraction of the original number.

Of course, if the universe is infinite it will contain an infinite number of leptons, but in that case we can just take some finite region and follow its expansion over time, in line with the overall expansion of the universe. It might seem tricky to ask how much the universe itself has expanded if it is infinite, but cosmologists have no trouble with this; the simplest measure is how much the radiation from an earlier time has increased its wavelength due to cosmic redshift.

When the plasma that fills the early universe is at temperature T, the typical energy that is available for pair-production (the creation of lepton-antilepton pairs) is E ≈ k T, where k is Boltzmann’s constant. The temperature itself is inversely proportional to L, the size of the universe (or the size of the region we are tracking).

The temperature we are interested in, for leptons of mass 2^S m₀, is the temperature such that 2^S m₀ c² ≈ k T, because this is when the universe cools down to the point that pair-production ceases for these particular leptons. Here we have used the famous mass-energy equivalence formula, E = m c².
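
For a sense of the numbers involved: for electrons in our own universe this threshold is about six billion kelvin. Assuming, purely for illustration, that the lightest lepton of the novel has the electron’s mass, the analogous thresholds would be:

```python
m0 = 9.1093837015e-31   # lightest lepton mass, kg (assumed equal to the electron's)
c  = 2.99792458e8       # speed of light, m/s
k  = 1.380649e-23       # Boltzmann's constant, J/K

for S in range(8):
    T = (2**S * m0 * c**2) / k   # temperature where 2^S m0 c^2 = k T
    print(f"e{S}: pair-production ceases near T = {T:.2e} K")
# e0: ~5.93e+09 K, doubling for each heavier lepton
```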

What this means is that the size of the universe, when this happens for each particular kind of lepton, will scale with S as:

L_S = (½)^S L₀

But as discussed in the previous section, the characteristic length scale for leptons of mass 2^S m₀ obeys the same scaling law. So at each transition temperature, when one kind of lepton stops being created by pair-production, everything effectively “looks the same” to each of the leptons in question. The universe is smaller and hotter when this happens for the more massive leptons, but they have proportionately smaller wavelengths, so they will be present in the same number, in each case.

We are making a lot of simplifying assumptions here, but the bulk of the elaborate calculations in statistical mechanics that we could have slogged through, to try to quantify the number of leptons and antileptons in equilibrium at a given temperature in a universe of a given size, end up depending solely on the ratios between the lepton mass and the temperature, and between the lepton wavelength and the size of the universe. Since the temperature T when pair-production ends is proportional to the lepton mass, and since temperature is inversely proportional to the size of the universe, L, the differences in circumstances for the different masses all just cancel out.

So, under our assumptions, we expect all eight leptons to be present in equal numbers.

How Are the Leptons Distributed Between Atoms?

We have seen that the different kinds of leptons will all have the same overall cosmic abundance. But does this mean that they will also be evenly distributed? In every atom with Z protons in the nucleus, will the Z leptons that accompany it all have a 1-in-8 probability of being each of the eight kinds?

The detailed answer to that will be complicated, and will depend on the whole context and history of the particular matter we are talking about. In our own universe, even the subtle difference between isotopes of the same element can lead to their local abundance being different in different astronomical, geological and biological contexts.

One important consideration will be the amount of energy it takes to form or disassemble atoms with different arrangements of leptons. Let’s consider one of the simplest examples. Suppose we have two atoms, formed from two ordinary helium nuclei, along with a total of four leptons: two e₀ leptons of mass m₀, and two e₁ leptons of mass 2 m₀. We could arrange things in either of two ways: we could have two atoms that each have identical leptons, or two atoms that each have dissimilar leptons. Writing He for the helium nucleus, we have either:

(He e₀e₀) + (He e₁e₁)
2 (He e₀e₁)

Which of these arrangements would have the lowest energy?

The energy levels available to a single lepton bound to a nucleus with atomic number Z are given by:

Eₙ = –Z² e⁴ m_reduced / (8 h² ε₀² n²)

where n = 1, 2, 3, ..., and the lowest energy level is given by n=1. For example, for an ordinary hydrogen atom, the lowest energy, or ground state energy, is:

E_H = –2.18 × 10⁻¹⁸ Joules

What this means is that it would take 2.18 × 10⁻¹⁸ Joules to pull the electron away from a hydrogen atom completely, ionising it, so this quantity is also known as the ionisation energy of hydrogen. (There are further refinements to this formula that take account of effects beyond the non-relativistic Schrödinger equation we are using, but for our purposes this approximation is good enough.)
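
Evaluating the n = 1 level numerically (using the electron mass in place of the reduced mass, in keeping with the Bohr-radius convention above):

```python
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
h    = 6.62607015e-34     # Planck's constant, J*s
e    = 1.602176634e-19    # elementary charge, C
m_e  = 9.1093837015e-31   # electron mass, kg

def level(Z, m, n):
    """Energy of level n for one lepton of mass m bound to a charge of Z."""
    return -Z**2 * e**4 * m / (8 * h**2 * eps0**2 * n**2)

E_H = level(Z=1, m=m_e, n=1)
print(f"E_H = {E_H:.3e} J")   # about -2.18e-18 J
```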

For atoms with more than one lepton, there is no simple formula like this, but there are techniques we can use to estimate the energy levels. One way to estimate the ground state energy of a helium atom is known as the variational method. Here, we take a family of possible wave functions, and vary the parameters that describe their shape until we find the one with the lowest energy within the family we are looking at. For the case of a helium atom, we adjust the value of Z, the nuclear charge, as it appears in two separate versions of the lowest-energy hydrogen wave function. This is essentially the same as squeezing or expanding each of the one-particle wave functions, until we reach the minimum energy that can be achieved this way.

For an ordinary helium atom in our universe, the best solution in this family of wave functions is found by setting:

Z = 27/16 = 1.6875

for both electrons. This value of Z, less than the true nuclear charge of 2, tells us that each electron is seeing the nucleus partly screened by the other electron.

The associated ground state energy is:

E_He ≈ –1.24 × 10⁻¹⁷ Joules

This is within about 2% of the measured value.
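
For the equal-mass case, this textbook calculation is easy to reproduce. In Hartree units (1 Hartree ≈ 4.36 × 10⁻¹⁸ Joules, with the lepton taken to have the electron’s mass, as in the tables below), the expected energy of the trial wave function with effective charge Z is E(Z) = Z² – 2 Z_nuc Z + (5/8) Z, which is minimised at Z = Z_nuc – 5/16. A minimal sketch:

```python
hartree = 4.3597447e-18   # J

def helium_variational(Z_nuc=2.0):
    """Variational ground-state estimate for two equal-mass leptons
    sharing one nucleus, using a single effective charge Z."""
    def E(Z):  # expected energy in Hartrees for effective charge Z
        return Z**2 - 2.0 * Z_nuc * Z + (5.0 / 8.0) * Z
    Z_best = Z_nuc - 5.0 / 16.0   # minimises E(Z)
    return Z_best, E(Z_best) * hartree

Z_best, E_ground = helium_variational()
print(f"Z = {Z_best}")            # 1.6875 = 27/16
print(f"E = {E_ground:.3e} J")    # about -1.24e-17 J
```

For two e₁ leptons, every energy in the problem simply scales with the lepton mass, which is where the doubled value in the table’s second row comes from; the unequal-mass case needs two different effective charges, which is why the table lists Z₁ and Z₂ separately.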

If we apply the same method to all three kinds of helium atoms that interest us, the results are:

Atom         Z₁         Z₂         E_ground
(He e₀e₀)    1.6875     1.6875     –1.24 × 10⁻¹⁷ Joules
(He e₁e₁)    1.6875     1.6875     –2.48 × 10⁻¹⁷ Joules
(He e₀e₁)    1.24866    1.95348    –2.00 × 10⁻¹⁷ Joules

In the first two cases, because the two leptons have equal mass, the amount of screening is identical. In the third case, we have Z for the lighter lepton of 1.24866, indicating that the heavier lepton, being closer to the nucleus, is screening even more of the nuclear charge that the lighter lepton sees than when the masses are equal. But Z for the heavier lepton is very nearly 2, because the lighter lepton rarely comes between it and the nucleus. We plotted the probabilities of finding these two leptons at various distances from the nucleus earlier.

As expected, the helium atom where we have two e₁ leptons has about twice the energy of the one with two e₀ leptons, and the energy of (He e₀e₁) lies in between. But the crucial values are the total energies for the two possible arrangements of the ingredients:

Atoms                      Total E_ground
(He e₀e₀) + (He e₁e₁)      –3.72 × 10⁻¹⁷ Joules
2 (He e₀e₁)                –4.00 × 10⁻¹⁷ Joules

So we achieve the lowest energy if we share the heavier leptons equally between both atoms.

But let’s look at another example. Suppose that we have the same four leptons, but instead of two helium nuclei, we have one helium nucleus and two hydrogen nuclei. For the two possible hydrogen atoms, the ground state energies are:

Atom      E_ground
(H e₀)    –2.18 × 10⁻¹⁸ Joules
(H e₁)    –4.36 × 10⁻¹⁸ Joules

The possible arrangements of the leptons, and their total ground state energies, are:

Atoms                          Total E_ground
(He e₀e₀) + 2 (H e₁)           –2.11 × 10⁻¹⁷ Joules
(He e₁e₁) + 2 (H e₀)           –2.92 × 10⁻¹⁷ Joules
(He e₀e₁) + (H e₀) + (H e₁)    –2.66 × 10⁻¹⁷ Joules

This time, the lowest energy comes from putting all the heavier leptons into the helium atom, and leaving all the lighter ones with the hydrogen atoms.
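
A sketch of the bookkeeping behind the last two tables, summing the per-atom energies quoted above (the rounded inputs mean the last digit can differ slightly from the tables):

```python
# Ground-state energies from the tables above, in Joules.
E = {
    "He e0e0": -1.24e-17, "He e1e1": -2.48e-17, "He e0e1": -2.00e-17,
    "H e0":    -2.18e-18, "H e1":    -4.36e-18,
}

def total(*atoms):
    return sum(E[a] for a in atoms)

# Two helium nuclei: sharing the heavy leptons equally wins.
print(total("He e0e0", "He e1e1"))         # -3.72e-17 J
print(total("He e0e1", "He e0e1"))         # -4.00e-17 J (lowest)

# One helium and two hydrogen nuclei: heavy leptons go to helium.
print(total("He e0e0", "H e1", "H e1"))    # -2.11e-17 J
print(total("He e1e1", "H e0", "H e0"))    # -2.92e-17 J (lowest)
print(total("He e0e1", "H e0", "H e1"))    # -2.65e-17 J
```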

In a scenario like this, the heaviest leptons will end up bound to the nuclei with the greatest positive charge. Roughly speaking, when a lepton with mass m is bound to a nucleus with charge Z, there is a factor of –Z²m in the energy. If m₂ > m₁ and Z₂ > Z₁, then:

(m₂ – m₁)(Z₂² – Z₁²) > 0
–Z₂²m₂ – Z₁²m₁ < –Z₁²m₂ – Z₂²m₁

Of course this isn’t the whole story with the energy, which will also depend on what shell each lepton is in, and the complicated effects of the presence of all the other leptons. But with those caveats, we can still make the case for a general heuristic: all else being equal, the configuration where the larger lepton mass is associated with the larger nuclear charge will have the lower energy.

Once we start considering more than a handful of particles, there will be a vast number of possible ways that any collection of leptons and nuclei could be arranged, and just as we generally do not find matter in our own universe having formed the precise set of chemical compounds that minimises its chemical energy, the distribution of leptons between nuclei in any substantial quantity of matter is unlikely to have reached the true minimum energy. Rather, as we noted at the start of this section, we will generally need to consider many more factors than a list of all the particles that are present. How long has the particular collection of atoms and molecules we are considering been together? Is it solid, liquid, gas, or some mixture of all three phases? How concentrated is it? What temperature is it now, and what temperature has it been in the past? Energy and entropy can give us some hints, but the detailed history and circumstances will always have the potential to make a huge difference.

Molecules

Most matter is not composed of single atoms, but of molecules consisting of two or more atoms bound together in some way. When two atoms share a lepton, that is known as a covalent bond. In a covalent bond, the kind of wave function that a lepton can occupy in the vicinity of a single atom is extended into one that binds the lepton to two different nuclei at once. Although the nuclei repel each other because they both have a positive charge, the attraction they both have for the shared lepton (or usually, two leptons) binds them together at a certain distance where the attraction and repulsion balance.

Since the distance between two covalently bound atoms is determined by the lepton wave function, its size will scale in the same fashion as the size of an individual atom, with the same factor of (½)^S. And the energy associated with the leptons forming this bond will scale with 2^S, just like atomic energy levels.

However, there are other ways for a molecule to possess energy. A molecule can vibrate, with the bond lengths changing periodically, a bit like a toy model of the molecule could vibrate if the atoms were solid balls joined together with springs. And a molecule can rotate, turning around its centre of mass, giving it rotational kinetic energy.

When a molecule vibrates, the change in the bond length, say δ, is associated with a potential energy U:

U = ½ k δ²

Here k is a constant, and the formula is modelled on the potential energy of an elastic material that obeys Hooke’s law with a spring constant of k. This is the same as saying that the force restoring the spring towards its relaxed length is proportional to the distance δ by which it has been stretched or compressed. In quantum mechanics, a system where a mass M is subject to such a force (a harmonic oscillator) has energy levels given by:

Eₙ = (n + ½) (h/(2π)) √(k/M)

for n = 0, 1, 2, ... The mass M in this case is related to the mass of the atomic nuclei, not of the leptons, so it is unchanged by the scaling of the lepton mass. But how does k change?

Very roughly, we would expect U to scale like the typical lepton energy when δ is comparable to the size of the molecule. That is:

k ~ U / δ²
  ~ 2^S / ((½)^S)²
  ~ (2^S)³

Our formula for En then gives the result:

E_vib ~ (2^S)^(3/2)

So the vibrational energy levels of a molecule will increase with S, at the 3/2 power of the rate at which the lepton energy increases.

What about rotational energy? Rotational kinetic energy and angular momentum for classical objects are governed by the equations:

I = M r²
L = I ω
K_rot = ½ I ω²

where I is the moment of inertia for a mass M at a distance of r from a centre of rotation, L is the angular momentum, ω is angular velocity and K_rot is rotational kinetic energy.

Because we are dealing with a quantum system, angular momentum is quantised, and for the lowest energy rotational states it is of the order of the reduced Planck’s constant:

L ~ h/(2π)

It follows that:

ω ~ (h/(2π)) / I
K_rot ~ I ω²
      ~ (h/(2π))² / (M r²)
      ~ (2^S)²

So the rotational energy levels of a molecule will increase with S, at the square of the rate at which the lepton energy increases.
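
Gathering the three results together, the electronic, vibrational and rotational energy scales grow as 2^S, (2^S)^(3/2) and (2^S)² respectively:

```python
# How the three kinds of molecular energy level scale with S,
# relative to Scale Zero.
for S in range(8):
    electronic  = 2.0 ** S            # chemical / electronic energies
    vibrational = 2.0 ** (1.5 * S)    # (2^S)^(3/2)
    rotational  = 2.0 ** (2 * S)      # (2^S)^2
    print(f"S={S}: electronic x{electronic:7.0f}  "
          f"vibrational x{vibrational:9.1f}  rotational x{rotational:8.0f}")
```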

Scales as Ecological Niches

Whenever there are mixtures of atoms or molecules that could be recombined into a state with lower energy, but the reactions that would make this happen do not take place spontaneously (or only do so very slowly), then in principle there is an opportunity for life to step in and extract some useful work from the energy difference. Life on Earth does this all the time — often in conditions that would be hostile to most other organisms, and require specific biochemical adaptations for a particular niche (such as hot springs, or hydrothermal vents).

If one of the opportunities in the Scale universe involves moving heavy leptons to higher-charged nuclei — or other energetically favourable rearrangements — then it is possible that this kind of “lepton shuffling” will be exploited in various biological pathways.

On the face of it, there is no reason why this requires any organism to evolve beyond what is known in the novel as “rootlife” (composed primarily of “Scale Zero” matter), and set about actively removing one or more of the lighter leptons from their body. But there are certainly some potential advantages in specialising to a single scale:

  • If there is a large supply of energy waiting to be exploited by any organism that is robust enough to deal with it, then the first innovations that tap into this supply will be followed by selection pressure to adapt further and gain safer, more reliable, more extensive access to the same resource.
  • Dealing with the higher energies involved when manipulating heavier leptons will be less likely to damage the organism if all its molecules have chemical bonds at that energy scale, rather than the more fragile ones due to lighter leptons.
  • Predation by rootlife becomes more difficult if the organism is “tougher”, whether chemically (making it harder to digest) or physically (making its skin harder to rupture).

There are also obstacles: scale-specific organisms are going to need suitable air and water, and it will be easier if these inputs are already of the required scale. But on Earth, bacteria and plants have drastically altered the composition of the atmosphere compared to its original state, and also created countless niches where the waste products of one set of organisms serve as food and energy sources for others. The Earth’s atmosphere contained almost no free oxygen until the Great Oxidation Event, 2.4 billion years ago, when cyanobacteria developed photosynthesis. The 20% of the atmosphere that is now oxygen is almost all due to biological activity.

So the most likely scenario for life creating the scales as separate ecological niches would involve single-celled organisms acting as pioneers, exploring the biochemical options and pumping out their own scale-specific metabolites. Once they had paved the way, larger creatures could co-opt some of the same biochemistry, while exploiting both the pioneers themselves and their waste products for their own purposes.

Biomechanics and Metabolism

The question of how organisms of different sizes vary in their anatomy, lifespan, metabolism, body temperature, running speed, physical strength, etc., is already a complex subject when it is limited to life on Earth. This area of biology, known as allometry, has produced a vast literature and a multitude of “scaling laws”, some reasonably well supported and uncontroversial, some more contentious. But what the existing, real-world subject certainly doesn’t cover is the kind of scaling that lies at the heart of the novel, where one creature is smaller than another because all of their atoms are smaller. So instead of relying on conventional allometry, we will need to go back to first principles and construct our own scaling laws.

Mouse vs. Elephant

Here is one fairly uncontroversial claim, in the context of ordinary biology: if a mouse and an elephant both step off a cliff that is two metres high, the mouse is likely to be harmed much less by the fall than the elephant.

Why? Suppose the linear dimensions of the elephant (its length, breadth and height) are each about 100 times greater than those of the mouse, but both animals have similar densities. Then the elephant will weigh 100³ = one million times more than the mouse, and the kinetic energy it gains by falling the same distance will be one million times greater. Of course, an elephant also has thicker bones, but if their cross-sectional area was only 100² = 10,000 times that of the mouse, then so long as the energy that needs to be dissipated is concentrated in a thin slice of bone, this still leaves about 100 times more energy that each molecule of bone has to cope with.

In fact, the bones of larger animals tend to be proportionately thicker, scaling up more than the overall size of the animal, for precisely this reason. This goes some way towards preventing injuries when the animal suffers the kind of falls they are likely to experience. But even these anatomical changes are not enough to compensate completely for the increased weight, and the mouse would still fare better.

Now, let’s compare people of different scales (in the sense of the novel), again both stepping off a cliff of the same height. This time, both people will weigh roughly the same, so the kinetic energy they need to dissipate when they hit the ground will be similar. And though the smaller person will have smaller bones, there will be roughly the same number of molecules in any cross-section. However, all the chemical bonds in the smaller person’s body will be stronger by a factor of 2^S [where S increases for smaller scales], so they will suffer less harm from an identical fall.

So we have the same result for mouse vs. elephant and Scale Seven person vs. Scale One person, but the reasons are entirely different.

Metabolism and Body Temperature

How would we expect the metabolic rate of a person to change with their scale? That is, how fast would we expect analogous biochemical processes to take place in their bodies?

This will be related in part to their body temperature, but if we naively tried to increase the thermal kinetic energy of every molecule in their body to scale along with the chemical energy, we would have to double their body temperature (on the absolute temperature scale) with every halving in size! Clearly that would be impossible, given that these organisms all live on the same planet with the same ambient temperature. So, although body temperature is likely to increase with S, the thermal energy cannot grow in step with the chemical energy.

At similar temperatures, molecules of similar mass will have similar velocities. The typical distance a molecule needs to travel to complete a process scales with (½)^S, because the whole organism is smaller by that factor, so if a molecule merely has to move somewhere to do its job, the rate will scale with 2^S.

But if the molecule has to react chemically, and if it relies on its thermal energy to supply the activation energy, the rate of the process will depend on a quantity known as the rate constant, given by the Arrhenius equation:

Rate constant = A exp(–Eₐ/(k T))

Here Eₐ is the activation energy, T is the temperature, k is Boltzmann’s constant, and A depends on the details of the reaction. Ordinarily, we would think of the rate constant as being fixed by the temperature and the details of the reaction, and then it would be multiplied by the concentrations of the reactants to find the rate at which the reaction took place. But we cannot expect A to stay the same if we shrink the molecules themselves! Instead, we should imagine a movie showing the same total number of molecules colliding, magnified to look the same whatever the actual scale, with the only difference being that the time between collisions is shorter by the factor of 2^S that comes from the molecules being closer together, while the fraction of collisions that exceed the activation energy and allow the reaction to take place is controlled by the exponential factor in the Arrhenius equation.

If the activation energy Eₐ scaled with 2^S, but the temperature T only increased slightly, the rate of the reaction would be exponentially suppressed. However, in an evolved biochemical system, the activation energies for the reactions need not scale with 2^S just because the overall chemical energy behaves that way. There is no compulsion for different organisms to use molecules that are all completely identical apart from swapping lighter leptons for heavier ones. Rather, evolution will need to identify catalysts that keep the activation energy low enough at each scale, and vary the detailed molecular pathways as needed to ensure that these reactions can proceed at a reasonable rate without an excessive increase in body temperature. If this sounds like a challenge, it certainly is — but our own bodies manage to extract all the energy that comes from burning carbohydrates, without requiring the kind of temperatures needed for inorganic combustion.
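
To see how punishing that exponential factor is, here is a small illustration with an assumed (hypothetical, but biologically plausible) activation energy of 50 kJ/mol and a body temperature of 310 K:

```python
import math

k_B = 1.380649e-23    # Boltzmann's constant, J/K
N_A = 6.02214076e23   # Avogadro's number, 1/mol

T  = 310.0        # body temperature, K (assumed)
Ea = 50e3 / N_A   # 50 kJ/mol per molecule (assumed typical value)

def boltzmann_factor(E, T):
    """Fraction of collisions with energy above E, per Arrhenius."""
    return math.exp(-E / (k_B * T))

base    = boltzmann_factor(Ea, T)
doubled = boltzmann_factor(2 * Ea, T)   # if Ea naively doubled with scale
print(f"suppression when Ea doubles: {doubled / base:.1e}")   # ~3.7e-09
```

A single naive doubling of Eₐ, with no compensating rise in temperature, slows the reaction roughly a billionfold, which is why the catalysts at each scale must keep activation energies low.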

In what follows, then, we will assume that reactions proceed more rapidly by a factor of 2^S, and that the power consumed by an organism scales by (2^S)², since each reaction is both faster and more energetic. [There is no contradiction between the energy liberated by a reaction increasing with 2^S, while the activation energy needed to make the reaction happen grows much more slowly.]

How can a smaller organism avoid overheating, if it is dissipating more power? The only method that scales the right way is evaporative cooling, where a suitable liquid absorbs energy as it turns to a vapour on the skin. The total number of molecules evaporating per unit time will only need to scale with 2^S, the rate at which other processes happen, so long as the energy absorbed per coolant molecule also scales by the typical chemical energy factor of 2^S. Note that the same total number of molecules of sweat will fit on the skin of a similar organism of any scale; there is less surface area, but the molecules themselves are smaller.

Similarly, food intake will occur at a rate that scales with 2^S when expressed as molecules per unit time, with the energy in the food scaling by 2^S per molecule, and total calories consumed per unit time scaling by (2^S)².

Reaction Times

If we adopt the rule that chemical reactions proceed at a rate that scales roughly with 2^S, then it would be consistent to assume the same kind of scaling for most physiological and neurological processes. This means it would be reasonable to expect subjective time to pass more rapidly by a factor of 2^S, and smaller-scale people to be able to respond to stimuli faster, both in terms of identifying an event and physically reacting to it.

Of course, a smaller-scale environment might give rise to shorter time scales, in a sense cancelling out any subjective benefit, even if an objective advantage remained. For example, if people of different scales were all playing the same kind of ball game (with a separate game taking place for each scale), and the speed of the ball was independent of the scale, then the fact that the ball was crossing shorter distances, in shorter times, would make the game proceed objectively more rapidly for the smaller scales, but subjectively the participants would consider everything to be happening at a similar pace, relative to their natural response times, so they would not find the game any easier.

“Water” is Not Water

Life at each scale will need its own universal solvent. Rootlife could still rely on H₂O, but for the other scales, simply replacing all the lightest leptons with heavier ones would be unlikely to result in a liquid at ambient temperatures. Rather, each scale will need a molecule of its own that possesses all the right chemical and physical properties.

Similarly, whatever pigments are used by plants and micro-organisms for photosynthesis, the molecules that worked at one scale could not simply be cloned at a smaller scale with any prospect that they would still function. Sunlight will only be available across a limited range of frequencies, so Scale Two plants could not survive if they needed radiation at double the frequency that Scale One plants used.

This will be true across the entire gamut of biochemical processes: the molecules that perform similar roles at different scales will not be the result of simply swapping in heavier leptons.

Strength and Speed

The force exerted by a spring (or any elastic material) is equal to the rate of change of its potential energy with respect to its length. If we pack all the molecules of the spring together so that its linear dimensions are scaled by (½)^S, then even if the change in energy as it contracts by the same proportion of its length is unchanged, the force it produces would scale with 2^S. This suggests that, even conservatively (allowing for the possibility that some biological processes will not be able to exploit the increased chemical energy that comes from using heavier leptons), the muscles of smaller-scale organisms will be able to exert forces that scale by at least 2^S.

Under the same conservative assumptions, people of all scales ought to be able to jump at least equally high, which means smaller-scale people jumping to a larger fraction of their own body height.

How would we expect the running speed of an organism to change with scale? The crudest estimate would be to take a stride that is shorter by (½)^S but more frequent (more strides per second) by a factor of 2^S, resulting in a velocity independent of the scale. But what if we allow for the possibility of using power that scales as (2^S)^p, where p ranges from a conservative value of 1 (no gain in mechanical energy for each muscle contraction, but more contractions per second) up to a maximum of 2, in line with the overall metabolic rate? Drag force due to air resistance takes the form:

F_D = ½ C_D A ρ v²

where F_D is the drag force, C_D is a dimensionless drag coefficient, A is the cross-sectional area of the body, ρ is the density of air, and v is the velocity of the body. The power expended to overcome this force is given by:

P = F_D v = ½ C_D A ρ v³

If P scales as (2^S)^p and cross-sectional area scales as ((½)^S)², this would allow velocity to scale as (2^S)^((p+2)/3).

However, this is potentially complicated by the fact that C_D can’t always be taken as a constant for objects of a given shape; it can also be affected by the Reynolds number, a parameter associated with the airflow. This is given by:

R = ρ v L / μ

where L is a linear dimension of the object and μ is the dynamic viscosity of air. For high enough values of R, the coefficient C_D can be taken as approximately constant; for low R, C_D becomes proportional to 1/R. In that case, we would have:

P = F_D v ~ L v²

If P scales as (2^S)^p and L scales as (½)^S, this would allow velocity to scale as (2^S)^((p+1)/2).

In either case, the velocity increases for smaller organisms by at least a factor of 2^S.
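
A minimal sketch of the two regimes, showing the velocity-scaling exponent q in v ~ (2^S)^q for the conservative and maximal power assumptions:

```python
# Velocity scaling exponents: v ~ (2^S)^q, where q depends on the
# power assumption p and on the drag regime.
def q_high_reynolds(p):   # P ~ A v^3, with A ~ (2^S)^-2
    return (p + 2) / 3

def q_low_reynolds(p):    # P ~ L v^2, with L ~ (2^S)^-1
    return (p + 1) / 2

for p in (1, 2):
    print(f"p = {p}: v ~ (2^S)^{q_high_reynolds(p):.2f} (high R), "
          f"(2^S)^{q_low_reynolds(p):.2f} (low R)")
# p = 1: exponent 1.00 in both regimes
# p = 2: exponent 1.33 (high R) or 1.50 (low R)
```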

Gravity

The acceleration due to gravity, g, will of course be the same for everyone living on the surface of the same planet. And since people of different scales have a similar total mass, the total force on their body due to gravity — i.e. their weight — will be similar across the scales.

However, when this scale-free acceleration and weight interact with other quantities that do change with scale, the result can certainly lead to different experiences for people of different scales.

For a start, a person’s weight will be supported by the area of the soles of their feet, which will scale as ((½)^S)², so the pressure their weight applies to the ground will scale as (2^S)², making some terrain as difficult to traverse as walking through sand dunes in stiletto-heeled shoes. Evolution might give smaller-scale people proportionately broader feet to lessen this effect, but it would be impossible to compensate for it completely.

We have already noted that a smaller-scale person falling from a given height is potentially less likely to fracture a bone. This advantage is further consolidated if the kind of heights they typically fall from scale like their bodies, as (½)^S. A smaller person who is simply standing on level ground and then loses their balance will be falling a smaller distance, decreasing the risk of injury. Evolution might trade off this advantage to some degree by making their bones proportionately thinner, as is certainly the case for a mouse compared to an elephant.

The reaction times needed to respond to events controlled in part by gravity will be different from those whose time scale is set entirely by the distances involved. The kind of constant-velocity ball game we mentioned previously, played over various distance scales, will be subjectively very similar for people of each scale, because their sense of time will scale in roughly the same way as the distances involved. But if a person drops a cup from, say, a height z that scales as (½)^S, then the time it takes for the cup to hit the ground is given by:

t = √(2 z / g) ~ √((½)^S)

The square root here means that although the time to hit the ground is shorter for smaller scales, it does not shrink as rapidly as the neurological and physical processes of the person who dropped the cup speed up, so a smaller-scale person will find it less demanding to respond to the event and grab the cup before it hits the floor. For any object dropped from a height that scales like (½)^S, the subjective time until it hits the ground scales like √(2^S), or 8 times longer for a Scale Seven person than a Scale One person.
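
A quick check of that factor of 8, with an assumed Scale Zero drop height of one metre:

```python
import math

def subjective_fall_time(S, z0=1.0, g=9.8):
    """Subjective time to react to a dropped cup at scale S, in Scale Zero
    subjective units: drop height z0 * (1/2)^S, senses sped up by 2^S."""
    z = z0 * 0.5 ** S
    t = math.sqrt(2 * z / g)   # objective fall time
    return t * 2 ** S          # subjective fall time

# Ratio between Scale Seven and Scale One: sqrt(2^7)/sqrt(2^1) = 8.
print(subjective_fall_time(7) / subjective_fall_time(1))   # 8.0
```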

If the time scale of an event is set wholly by gravity, there will be an even greater advantage in being small. For example, if an object is thrown straight up into the air at a velocity that is independent of anyone’s scale, the time it will spend in motion will also be constant, so in this case smaller-scale people will get the full advantage of their faster response time, with a subjective time that scales like 2^S.

What about tolerance to increased gravity (or equivalently, the “g-force” due to acceleration in a spacecraft)? A given force will compress a smaller-scale material by a smaller proportion of its length, with a conservative factor of (½)^S just from packing the same change in elastic potential energy into a smaller distance, decreasing to ((½)^S)² if the potential energy also increases. So smaller-scale people can be expected to be more tolerant of higher g-forces.

Suppose two people of different scales wished to travel the same distance through space, and rather than being mostly in free fall (which is the case for current human space flight), they were able to embark on powered flights with constant acceleration, limited only by their tolerance of the g-force. How would their subjective journey times scale?

The time it takes to travel a distance x with an acceleration a is:

t = √(2 x / a)

With the conservative assumption of a ~ 2^S, and subjective time scaling with 2^S, the square root here means:

t ~ √((½)^S)
t_subjective ~ √(2^S)

So, although the journey would be objectively faster for smaller-scale people, it would still seem longer for them. At best, if we assumed a ~ (2^S)², the journey would be subjectively the same for all scales.
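
And the same kind of check for the journey times, under both tolerance assumptions:

```python
import math

def subjective_journey_time(S, p, x=1.0, a0=9.8):
    """Subjective time for a constant-acceleration trip of fixed distance x.
    Tolerated acceleration scales as (2^S)^p; subjective rate as 2^S.
    (x and a0 are arbitrary baselines; only the ratios matter.)"""
    a = a0 * (2 ** S) ** p
    t = math.sqrt(2 * x / a)   # objective travel time
    return t * 2 ** S          # subjective travel time

for p in (1, 2):
    ratio = subjective_journey_time(7, p) / subjective_journey_time(1, p)
    print(f"p = {p}: Scale Seven / Scale One subjective time = {ratio:.1f}")
# p = 1: ratio 8.0 (smaller scales feel the trip as longer)
# p = 2: ratio 1.0 (subjectively the same for all scales)
```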


Scale / The Science of Scale / created Tuesday, 1 November 2022
If you link to this page, please use this URL: https://www.gregegan.net/SCALE/01/ScienceOfScale.html
Copyright © Greg Egan, 2022. All rights reserved.

