With its central peak and gracefully sloping sides, the Bell Curve is one of the best-known and most important graphs in maths and science. Put simply, it shows the spread of values of anything affected by the cumulative effects of randomness. And there’s no shortage of those: from stock market jitters to human heights and IQ, many phenomena follow at least a rough approximation of the Bell Curve, with the most common value in the centre, and rarer, more extreme values to either side.
Many textbooks refer to it as the Gaussian Curve, reflecting the fact that the brilliant 19th-Century German mathematician Carl Friedrich Gauss deduced the shape of the curve while studying how data are affected by random errors. But a French maths teacher named Abraham de Moivre arrived at the same curve decades earlier while tackling a problem that had baffled mathematicians for years: how to calculate the frequency with which heads or tails appear over the course of many coin-tosses. Most mathematicians refer to the curve simply as the ‘Normal distribution’, while historians often cite the term ‘Gaussian Curve’ as an example of Stigler’s Law of Eponymy, which states that no scientific discovery is named after its original discoverer.
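De Moivre’s discovery is easy to see for yourself with a quick simulation. The sketch below (a minimal illustration, not historical code; the toss counts and trial counts are arbitrary choices) repeats a run of 1,000 fair coin-tosses many times, then compares the observed spread of heads-counts against the Bell Curve de Moivre derived, which has mean n/2 and standard deviation √n/2:

```python
import random
import math

random.seed(0)

n_tosses = 1000   # coin tosses per experiment
n_trials = 20000  # number of repeated experiments

# Count the heads in each run of n_tosses fair-coin flips.
counts = [sum(random.random() < 0.5 for _ in range(n_tosses))
          for _ in range(n_trials)]

# De Moivre's result: for large n, these binomial counts are well
# approximated by a Bell Curve with mean n/2 and std dev sqrt(n)/2.
mu = n_tosses / 2
sigma = math.sqrt(n_tosses) / 2

def normal_cdf(x, mu, sigma):
    """Cumulative probability under the Bell (Normal) Curve."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Compare the simulated and predicted chance of at most 510 heads.
empirical = sum(c <= 510 for c in counts) / n_trials
predicted = normal_cdf(510.5, mu, sigma)  # 0.5 is a continuity correction
print(f"simulated  P(heads <= 510): {empirical:.3f}")
print(f"Bell Curve P(heads <= 510): {predicted:.3f}")
```

The two printed probabilities agree closely, which is exactly the shortcut de Moivre was after: rather than summing thousands of binomial terms, one smooth curve gives the answer.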