By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and in effect increases the mental power of the race.
A.N. Whitehead (via Ken Iverson)
Scientific notation is unnecessarily complicated. It’s difficult to type and speak and read, and it’s easily deformed by a simple cut-and-paste [6.022×1023 from my clipboard just now]. Even the name itself is a seven-syllable mouthful.
The modern number technology is scientific notation. For instance, you may have had to memorize this number in high school:
6.022 × 10^23
This is pronounced “six point oh two two times tentatha twenty-third”.
This “number” actually contains three separate numbers: the coefficient 6.022, the base 10, and the exponent 23. Which of them is the most important?
It’s the “23”. The magnitude of the number. It tells you how long the number is, how many digits it has.
Every other number is less important. Far less important. Even the first digit, the “6” in this case, is 10^23 times less important than the “23”. But the magnitude is tucked away at the very end in a smaller font, like a footnote.
This is the crux of the problem. Scientific notation is perfectly functional as a general solution for precise numeric computation (and floating point basically works this way), but design-wise for humans it is backwards. The most important number should come first.
Also, scientific notation is complicated to compute with: to multiply two numbers, you have to multiply the coefficients and add the exponents in two separate steps.
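To make the contrast concrete, here’s a quick Python sketch; `mag` and `unmag` are names I’m making up for illustration, not any standard library:

```python
import math

def mag(x):
    """Magnitude of a positive number: its base-10 log."""
    return math.log10(x)

def unmag(m):
    """Back from a magnitude to an ordinary number."""
    return 10 ** m

# Scientific notation needs two separate operations:
# (6.022 x 10^23) * (2 x 10^4) = (6.022 * 2) x 10^(23 + 4) = 1.2044 x 10^28

# In mag world, multiplication is just addition:
a = mag(6.022e23)    # ~23.78
b = mag(2e4)         # ~4.30
print(unmag(a + b))  # ~1.2044e+28
```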
In many cases we can just use the exponent: 10^23 (or “tentatha 23”), which removes a lot of the clutter. But if we do want more precision, we don’t actually need a separate coefficient; we can just use a fractional exponent:
10^23.77974
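That exponent is nothing exotic: it’s just the base-10 log of the whole number, coefficient folded in. A one-line check in plain Python:

```python
import math

# Avogadro's number with the coefficient folded into the exponent:
print(math.log10(6.022e23))  # 23.77974...
```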
In computer-land, you can almost imagine “1e23.77974” being valid in any computer language that allows the “6.022e23” notation.
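No mainstream language actually accepts a fractional exponent after the “e” today, but the shim is tiny. Here’s a hypothetical `parse_mag` helper, my own name and design, not a real API:

```python
def parse_mag(s: str) -> float:
    """Parse '1e23.77974'-style literals, allowing a fractional exponent.

    Hypothetical helper: real float literals demand an integer exponent.
    """
    coeff, _, exponent = s.partition("e")
    return float(coeff) * 10 ** float(exponent)

print(parse_mag("1e23.77974"))  # ~6.022e+23
print(parse_mag("6.022e23"))    # 6.022e+23 -- the ordinary case still works
```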
And how many sig figs do you actually need as a layperson, or even an undergraduate? In most cases we can round to 10^23.8 or even 10^24, and either is a lot closer to the actual number than if you misremember it as 6.023 × 10^22 (and how long did it take you to see the difference there?).
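A quick error check backs this up (plain Python; 6.02214076 × 10^23 is exact by definition since the 2019 SI redefinition):

```python
avogadro = 6.02214076e23  # exact by definition (2019 SI)

for label, guess in [
    ("10^23.8", 10 ** 23.8),
    ("10^24", 10 ** 24),
    ("misremembered 6.023 x 10^22", 6.023e22),
]:
    print(f"{label}: off by a factor of {guess / avogadro:.2f}")

# 10^23.8: off by a factor of 1.05
# 10^24: off by a factor of 1.66
# misremembered 6.023 x 10^22: off by a factor of 0.10 (ten times too small)
```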
To simplify the notation even further, we don’t really need to specify the base, since base 10 is all but exclusively the one used for actual quantities:
^23.8
We will need to use a symbol to differentiate mag numbers from real numbers. The ^ (caret) is a common ASCII notation for exponentiation, used in many languages, and available on every keyboard.
In print or more formal writing, it might be nice to use a character like the up-arrow (↑, which is consistent with Knuth’s up-arrow notation, where a single arrow denotes plain exponentiation).
↑23.8
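Going the other way, formatting a number into mag notation, takes only a few lines. A sketch; `to_mag` is my name, not anyone’s API:

```python
import math

def to_mag(x: float, digits: int = 1, arrow: str = "^") -> str:
    """Render a positive number in mag notation, e.g. 6.022e23 -> '^23.8'."""
    return f"{arrow}{round(math.log10(x), digits)}"

print(to_mag(6.022e23))             # ^23.8
print(to_mag(6.022e23, arrow="↑"))  # ↑23.8
```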
I say “mag 24”, short for “magnitude”, because it reads like the inverse of log10, which is exactly what it is. People seem to intuitively get it, and it sounds a little mathpunk.
If you’re going to a scientific conference, you can always code switch and use “tentatha”.
So Avogadro’s number is ↑24. Mag 24. Tentatha 24. A trillion trillion (↑12 × ↑12), but only if you’re showing off. It’s the staggeringly huge number of atoms in 12g of carbon.
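The trillion-trillion bit is just mag addition again; a one-line sanity check, exact in Python’s integer arithmetic:

```python
# mag 12 + mag 12 = mag 24, i.e. ↑12 × ↑12 = ↑24
print(10**12 * 10**12 == 10**24)  # True
```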
Negative magnitudes extend Mag World into the realms of the extremely small. Mag notation can’t represent 0 exactly, but it can get arbitrarily close.
For example, a nanometer is 10^-9 meters, or an ↑-9 length.
We could pronounce this “mag negative nine”, but that’s an awkward mouthful. So we can abbreviate it to “mag neg 9” or even “neg 9” (perhaps short for “negligible”).
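The same log covers the small end, including the can’t-quite-reach-zero behavior (standard Python):

```python
import math

print(math.log10(1e-9))    # -9.0: a nanometer in meters, "neg 9"
print(math.log10(1e-300))  # -300.0: arbitrarily close to zero, never zero
# math.log10(0.0) raises ValueError: there is no mag for 0 itself
```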