Time is fundamentally a duration: a distance along the nonspatial dimension of space-time.
So we can say that the Sun is ↑17.2 (4.6 billion years) old, whereas the Universe is ↑17.638 (13.8 billion years) old. A human, by contrast, generally lives to be ↑9 (32 years) old, and can live to be ↑9.5 (100 years) old.
A time can more specifically be an age: how long something has been in existence, or the duration from its beginning to the current time.
Frequency is the inverse of time. So ↑2 frequency (100 Hz) is ↑-2 time (10 ms) per cycle.
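To make the arithmetic concrete, here is a minimal sketch in Python. It assumes a mag value is simply the base-10 logarithm of the quantity in base units (seconds for time, hertz for frequency); the mag() helper is my own name for it, not established notation.

import math

def mag(value):
    """The ↑ number for a value in base units: just log10."""
    return math.log10(value)

YEAR = 365.25 * 24 * 3600            # seconds in a year

print(round(mag(4.6e9 * YEAR), 1))   # Sun, 4.6 billion years -> 17.2
print(round(mag(13.8e9 * YEAR), 1))  # Universe, 13.8 billion years -> 17.6
print(round(mag(100 * YEAR), 1))     # a long human life -> 9.5

# Frequency is the inverse of time, so the two mags negate each other:
print(mag(100))                      # 100 Hz -> 2.0, i.e. ↑2 frequency
print(mag(1 / 100))                  # 10 ms -> -2.0, i.e. ↑-2 time per cycle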
Here are the levels of mag frequency. ↑1-↑4 are noted for sound waves, whereas ↑5 and greater are on the electromagnetic spectrum. (There are also low-, very-low-, and ultra-low-frequency radio waves below these, but they are not as common.)
↑0: metronome
↑1: bass and sub-bass
↑2: human vocals
↑3: highest audible tones
↑4: ultrasonic (dog-whistle)
↑5: AM radio
↑8: FM radio; VHF
↑9: UHF; 802.11; Bluetooth
↑10: microwaves; radar
↑11-↑13: infrared
↑14: visible light
↑15: ultraviolet
↑17: X-rays
↑19: gamma rays
The wavelength (the peak-to-peak distance of a wave) for radio/EM waves is ↑8.5 (the mag of the speed of light) minus the mag frequency.
So the wavelength of FM radio, for example, is ↑8.5 - ↑8 = ↑.5, or about 3 m.
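The same rule as a sketch in code (mag_wavelength() is a hypothetical helper, continuing the log10 convention from above):

def mag_wavelength(mag_frequency):
    """Wavelength mag = ↑8.5 (the mag of the speed of light, ~3e8 m/s) minus the mag frequency."""
    return 8.5 - mag_frequency

print(mag_wavelength(8))  # FM radio -> 0.5, i.e. ↑.5
print(10 ** 0.5)          # ~3.16, so about 3 m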
A time can also be latency, or the delay between a cause and its effect.
Every so often a post goes around the software engineering community entitled “Numbers every computer/software engineer should know”, which is a list of the latencies of some common computer operations. The idea is that any software person should have internalized, for example, that reading a value from main memory is 200x slower than reading from L1 cache.
It is good practice in general to know the orders of magnitude of common operations. But the information is presented like this:
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                           0.5 ns
Branch mispredict                            5   ns
L2 cache reference                           7   ns                      14x L1 cache
Mutex lock/unlock                           25   ns
Main memory reference                      100   ns                      20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy             3,000   ns        3 us
Send 1K bytes over 1 Gbps network       10,000   ns       10 us
Read 4K randomly from SSD*             150,000   ns      150 us          ~1GB/sec SSD
Read 1 MB sequentially from memory     250,000   ns      250 us
Round trip within same datacenter      500,000   ns      500 us
Read 1 MB sequentially from SSD*     1,000,000   ns    1,000 us    1 ms  ~1GB/sec SSD, 4X memory
Disk seek                           10,000,000   ns   10,000 us   10 ms  20x datacenter roundtrip
Read 1 MB sequentially from disk    20,000,000   ns   20,000 us   20 ms  80x memory, 20X SSD
Send packet CA->Netherlands->CA    150,000,000   ns  150,000 us  150 ms
This is trying to show the order-of-magnitude differences between these operations, but it’s still just a bunch of facts to memorize. The actual timings are highly dependent on the specific computer system and year. A mutex lock took 25 nanoseconds in 2012? Is that still true? Does it matter?
On the other hand, the Mag Latency scale is less precise, but more stable. These numbers have gone down over the decades, but notably some of them not by much: a main memory reference took ↑-6 time (1 µs) on the Apple II in 1977, and ↑-7 (100 ns) in 2012.
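To see what that scale looks like, here is a sketch that collapses a few rows of the table above into mag latencies (using the 2012 figures; the selection of rows is mine):

import math

# A few 2012 latencies from the table above, converted to seconds
latencies = {
    "L1 cache reference":                0.5e-9,
    "Main memory reference":             100e-9,
    "Read 4K randomly from SSD":         150e-6,
    "Round trip within same datacenter": 500e-6,
    "Disk seek":                         10e-3,
    "Send packet CA->Netherlands->CA":   150e-3,
}

for name, seconds in latencies.items():
    print(f"↑{math.log10(seconds):5.1f}  {name}")

# ↑ -9.3  L1 cache reference
# ↑ -7.0  Main memory reference
# ↑ -3.8  Read 4K randomly from SSD
# ↑ -3.3  Round trip within same datacenter
# ↑ -2.0  Disk seek
# ↑ -0.8  Send packet CA->Netherlands->CA

At this resolution the exact nanosecond counts stop mattering; what matters is which mag rung each operation sits on.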
And when these mag numbers do change, especially relative to one another, it’s a big deal.