Entry: Magnitude
URI: https://terra-vocabulary.org/ncl/FAIR-Incubator/earthsciencevariables/c_e3780f98
In seismology, a quantity intended to measure the size of an earthquake independently of the place of observation. The Richter magnitude or local magnitude (ML) was originally defined by Charles F. Richter (1935) as the logarithm of the maximum amplitude, in micrometers, of seismic waves in a seismogram written by a standard Wood-Anderson seismograph at a distance of 100 km from the epicenter. Empirical tables were constructed to reduce measurements to the standard distance of 100 km (see also magnitude calibration function), and the zero of the scale was fixed arbitrarily to fit the smallest earthquake then recorded.

The concept was later extended to magnitude scales based on other data, resulting in many types of magnitude, such as body-wave magnitude (mB and mb), surface-wave magnitude (Ms), moment magnitude (Mw), and energy magnitude (Me). In some cases, magnitudes are estimated from seismic intensity data, tsunami data, or the duration of coda waves. The word “magnitude” or the symbol M, without a subscript, is sometimes used when the specific type of magnitude is clear from the context or is not important.

According to Hagiwara (1964), earthquakes may be classified by magnitude (M) as major if M ≥ 7, moderate if M ranges from 5 to 7, small if M ranges from 3 to 5, micro if M ranges from 1 to 3, and ultra-micro if M < 1. Later usage adds: nano if M < 0, great if M ≥ 8 (or sometimes 7 3/4), and mega if M ≥ 9. In principle, all magnitude scales could be cross-calibrated to yield the same value for any given earthquake, but this expectation has proven only approximately true, hence the need to specify the magnitude type as well as its value.
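For illustration, Richter's definition above is conventionally written as follows, where A is the maximum trace amplitude at epicentral distance Δ and A0 is the empirical magnitude calibration function that reduces the measurement to the standard 100 km distance (the notation is the conventional one, not given explicitly in this entry):

  M_L = \log_{10} A - \log_{10} A_0(\Delta)

Subtracting \log_{10} A_0(\Delta) plays the role of the empirical tables mentioned above: a zero-magnitude reference earthquake yields M_L = 0 at every distance.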
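As a minimal sketch of the Hagiwara (1964) classification with the later extensions noted above, the following Python function maps a magnitude value to a class name. The function name and the handling of boundary values (e.g. whether M = 5.0 counts as moderate) are assumptions for illustration; the entry does not specify them.

  def classify_magnitude(m: float) -> str:
      # Thresholds follow Hagiwara (1964) plus the later usages in the
      # entry; each boundary value is assigned to the higher class.
      if m >= 9:
          return "mega"
      if m >= 8:
          return "great"
      if m >= 7:
          return "major"
      if m >= 5:
          return "moderate"
      if m >= 3:
          return "small"
      if m >= 1:
          return "micro"
      if m >= 0:
          return "ultra-micro"
      return "nano"

  print(classify_magnitude(6.3))  # moderate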