## The Logarithm and the Exponential

The motivation is the anti-derivative (the function transformation giving the finite area under a curve, such as the action-at-a-distance model curves of physics) of the hyperbola $1/x=x^{-1}$, a curve described by René Descartes in 1637 and solved by Gottfried Leibniz in 1676, the first to publish a formalism of tangents (the calculus)[1].

The slope of a function at a given point, $x$, is the derivative of that function at $x$, and is what appears in physical laws (Newton's Laws of Mechanics, the Laws of Thermodynamics, etc.). If the function is formulated as $x^n$, or more generally $Ax^n$ ($A$ and $n$ being constant), such as the physical fields we observe, then the derivative is simply $nAx^{n-1}$ (for the exponent $n=0$ the derivative is zero). Likewise the reverse: the anti-derivative of $Ax^n$ is $A(n+1)^{-1}x^{n+1}$, for every $n$ except $n=-1$, where the formula would demand dividing by zero. One can verify that the derivative of the anti-derivative returns the same function of $x$, employing the algebra $(n+1)(n+1)^{-1}=1$. Which is to say it isn't obvious that the slope of the Natural logarithm of $x$ is $x^{-1}$; what is dictated by that Leibnizian formula is the derivatives and anti-derivatives of every other power of $x$.
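The power rule and its single exception can be checked numerically. Below is a minimal Python sketch (the helper name `derivative` is my own, using a simple central-difference slope): it confirms that the slope of $3x^5$ at $x=2$ matches $nAx^{n-1}$, and that the slope of $\ln(x)$ there is $x^{-1} = 1/2$.

```python
import math

# Central-difference approximation of the slope: f'(x) ~ (f(x+h) - f(x-h)) / 2h
def derivative(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

# Power rule: d/dx of A*x^n is n*A*x^(n-1), checked at x = 2 for A = 3, n = 5
A, n, x = 3.0, 5.0, 2.0
print(derivative(lambda t: A * t**n, x))   # numeric slope
print(n * A * x**(n - 1))                  # power-rule prediction: 240.0

# The one slope the power rule never produces: d/dx ln(x) = 1/x
print(derivative(math.log, x))             # ~ 0.5 = 1/2
```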

Before the closed-form discovery of the Natural log, Jacob Bernoulli had proven (1689) that the following series diverges (i.e., the sum grows larger than any number for a sufficiently large count of terms): $$\sum\limits_1^m n^{-1} = 1 + 2^{-1} + 3^{-1} + \cdots + m^{-1}$$ Bernoulli proved the divergence by observing that the Harmonic series can be partitioned into a sequence of sub-sums: $$\sum\limits_{n=1}^\infty n^{-1} = \sum\limits_{n_1=1}^{m_1-1} n_1^{-1} + \sum\limits_{n_2=m_1}^{m_1^2} n_2^{-1} + \cdots + \sum\limits_{n_q=m_q}^{m_q^2} n_q^{-1} + \cdots$$ where each block runs from $m_q$ to its square (the next block starting at $m_{q+1}=m_q^2+1$), and each sub-summation over $n_q$ is greater than one for $m_1=2$.

To illustrate this, I've calculated the following (finite) subsequences, but instead of going from $m$ to $m^2$ I went as far as necessary to exceed one: $$\sum\limits_2^4 n^{-1} = 2^{-1} + 3^{-1} + 4^{-1} = 1.08\overline{3}$$ $$\sum\limits_5^{12} n^{-1} = 5^{-1} + 6^{-1} +\cdots+ 12^{-1} \approx 1.02$$ $$\sum\limits_{13}^{34} n^{-1} = 13^{-1} + 14^{-1} +\cdots+ 34^{-1} \approx 1.02$$ So that in $34$ terms of the series we see the sum is about $4.12$, where I included the first term/subsequence being equal to one.
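A quick check of these block sums in Python, using exact `Fraction` arithmetic (the helper name `block` is my own); note the third block must run through $n=34$ before it exceeds one.

```python
from fractions import Fraction

# Exact partial sums of the Harmonic series over a block of consecutive terms.
def block(a, b):
    return sum(Fraction(1, n) for n in range(a, b + 1))

print(float(block(2, 4)))    # 13/12 ~ 1.0833...
print(float(block(5, 12)))   # ~ 1.02
print(float(block(13, 34)))  # ~ 1.02, the first endpoint past 12 exceeding one
total = 1 + block(2, 4) + block(5, 12) + block(13, 34)
print(float(total))          # ~ 4.12, the partial sum through n = 34
```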

For an arbitrary subsequence starting at $m_q=1{,}000$, to add credence that such a partitioning of the series is a sum of numbers each greater than unity: $$\sum\limits_{10^3}^{10^6} n^{-1} = 10^{-3} + 1001^{-1} +\cdots+ 10^{-6} \approx \int\limits_{10^3}^{10^6} \frac{dx}{x} = \ln(10^6)-\ln(10^3) \approx 6.9$$ Where the limits are a thousand and its square (a million).
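The same comparison can be run by brute force (a Python sketch); summing the million-odd terms directly lands within about $5\times10^{-4}$ of the logarithmic estimate.

```python
import math

# Sum of 1/n from 1,000 to 1,000,000, compared with the area under 1/x,
# i.e., ln(10**6) - ln(10**3) = ln(1000).
s = sum(1.0 / n for n in range(1000, 1000001))
print(s)                 # ~ 6.908
print(math.log(1000))    # ~ 6.908
```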

So, while the hyperbola comes arbitrarily close to zero for sufficiently large $x$, its rate of getting there is slow enough that the sum of the infinite series is also infinite.

If you were wondering how close an approximation the Natural log is to the summation over the same interval: the difference is constrained to be less than its $m=1$ value (which is one), while being greater than its value over the entire interval $m \to \infty$. The difference over the entire interval is defined as the Euler-Mascheroni constant: $$0.57721\ldots=\lim_{m \to \infty} \left( \sum\limits_1^m n^{-1} - \ln(m) \right)$$ The deviation for finite $m$ is a saw-tooth function, geometrically understood as the area of the union of $m$ contiguous unit-width rectangles of heights $1, 2^{-1}, \ldots, m^{-1}$ minus the area under the hyperbola $x^{-1}$ starting at $x=1$. The first unit-width rectangle is positioned adjacent to the ordinate axis (spanning zero to one in both horizontal and vertical sides), so it lies entirely to the left of the part of the hyperbola under evaluation, which is why the $m=1$ case gives exactly one.
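A short Python sketch of this convergence (the helper name `H` is my own): the difference $\sum_1^m n^{-1} - \ln(m)$ starts at one for $m=1$ and creeps down toward $0.57721\ldots$.

```python
import math

# Harmonic partial sum H(m) = 1 + 1/2 + ... + 1/m
def H(m):
    return sum(1.0 / n for n in range(1, m + 1))

# The difference H(m) - ln(m) decreases from 1 toward the
# Euler-Mascheroni constant 0.57721...
for m in (1, 10, 100, 10000):
    print(m, H(m) - math.log(m))
```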

Which we can put into perspective by comparing the two quantities for $m=34$: $\ln(34)-\ln(1) \approx 3.53$, and for the partial sum of the Harmonic series we have $1 + 1.08 + 1.02 + 1.02 = 4.12$, so the deviation is $4.12 - 3.53 = 0.59$, which is between the limiting constant and one.

The logarithm was revealed by Euler to be the inverse of the exponential function (1748). Given a variable $y$ related to $x$ by $x=b^y$ (with $y$ being any number: positive, negative, whole, or in-between, so that $x$ is positive), that is the same statement as $y=\log_b(x)$. For exponential base $2$, it can be stated like so: $$y=\log_2(x) \rightarrow x=2^y$$ Likewise, the Natural logarithm is the inverse of the Natural-based exponential.
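The inverse relationship is easy to exercise with Python's standard library (a minimal sketch): taking the log and then exponentiating recovers the starting number.

```python
import math

# log base 2 and 2**y undo one another: y = log2(x)  <=>  x = 2**y
x = 10.0
y = math.log2(x)
print(2 ** y)                  # recovers 10.0 (up to rounding)

# likewise the Natural log and the Natural exponential
print(math.exp(math.log(x)))   # recovers 10.0
```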

The Natural exponential function is also encountered in Statistical mechanics (pioneered by the Yankee physicist Josiah Gibbs after the Civil War), where it is found with an energy in the exponent, as opposed to $x$ alone (the identity function).

The irrational number $e$ ($2.718282\ldots$) can be calculated in myriad ways ([Dunham]), one of which will be demonstrated with graphical plots. It was posited by Leonhard Euler that the natural base can be found from the equation $b^x=1+x$, where one orients with the equation first by noting that both left and right sides are satisfied for $x=0$. For points very close to $x=0$, regardless of the base $b$, the curve is approximately a line, of varying slope, all passing through the point $(0,1)$: $$b^x \approx 1+kx$$ Because we know $b^0=1$ (regardless of $b$; more at zero-exponent), the curves of different bases all go through the point $(0,1)$, and it turns out they have different slopes, with the "natural" base having a slope of unity ($k=1$).
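The condition $k=1$ pins the base down numerically. A bisection sketch in Python (helper names my own, with a symmetric difference for the slope at zero) recovers $e$ between the bases $2$ and $3$.

```python
# Bisection for the base b whose curve b**x has slope exactly 1 at x = 0.
# The slope at 0 is approximated by the symmetric difference (b**h - b**-h)/(2h).
def slope_at_zero(b, h=1e-6):
    return (b**h - b**(-h)) / (2 * h)

lo, hi = 2.0, 3.0          # slope < 1 at b = 2, slope > 1 at b = 3
for _ in range(60):
    mid = (lo + hi) / 2
    if slope_at_zero(mid) < 1.0:
        lo = mid
    else:
        hi = mid
print(lo)   # ~ 2.718281..., the natural base e
```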

Below are plots of different bases ($b$ constant over the interval), with lines drawn at slopes $0.1$ greater and $0.1$ less than the slope of the curve at the point $(0,1)$, to demonstrate a behavior common to all exponentials regardless of the characteristics of the function, $f(x)$, in the exponent position ($e^{f(x)}$). Looking at the plot of $b^x$ for each constant $b \in \{2, e, 3\}$, and drawing a line in the right place (tangent to the exponential curve at the point where $f(x)=x=0$), exercises one such way. First, inspect a plot of the exponential function with base $b=2$, below.

This plot of the base of $2$ from $-2$ to $2$ (light pistachio color) has a red line drawn through the point $(0,1)$ at a slope of $0.6$ and a similar blue line at slope $0.8$, bracketing $k \approx 0.7$ (to one decimal).

The pistachio curve and the red and blue lines are drawn with the same width, so looking at the triple crossing at the point $(0,1)$ (the exponent of zero), one can see that the red and blue lines have slopes just a little less and a little more than the slope of the exponential curve at that point.

This plot of the "natural base" of $2.718\ldots$, from $-2$ to $2$ (pistachio), has a red line drawn through the point $(0,1)$ at a slope of $0.9$ and a similar blue line at slope $1.1$, bracketing the "natural", ideal, slope of one.

This plot of the base of $3$ from $-2$ to $2$ (pistachio) has a red line drawn through the point $(0,1)$ at a slope of $1.0$ and a similar blue line at slope $1.2$, bracketing $k \approx 1.1$.

This plot of the base of $3$ from $-0.0002$ to $0.0002$ (pistachio) has a red line drawn through the point $(0,1)$ at a slope of $1.0$ and a similar blue line at slope $1.2$, indicating a slope of about $1.1$ for the exponential curve (pistachio) even when $x$ is very close to zero.
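The three bracketed slopes can also be computed directly. A Python sketch (helper name my own, with an assumed step $h=10^{-6}$): the measured slope of $b^x$ at $x=0$ comes out to $\ln(b)$, about $0.69$, exactly $1$, and about $1.10$ for the three plotted bases.

```python
import math

# Numeric slope of b**x at x = 0, by symmetric difference; it agrees
# with ln(b) for each base.
def slope_at_zero(b, h=1e-6):
    return (b**h - b**(-h)) / (2 * h)

for b in (2.0, math.e, 3.0):
    print(b, slope_at_zero(b), math.log(b))
```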

Next step: plots of $\ln(x)$ (the abbreviation for the logarithm with natural base) and $1/x$ (away from zero), with plots showing the slope of $\ln(x)$ on the interval $0.0005 \pm 0.0001$ ($x$ close to zero), around $x=1$ (where the ordinate is negative just below one, because $1 \lt e$), and around $x=e$ (where the ordinate is equal to one). Also, in a subsequent, post-calculus article, the lines drawn will have more terms added to the approximation of the exponential around zero.

1. Scriba, Christoph J. (1963). "The Inverse Method of Tangents: A Dialogue between Leibniz and Newton." *Archive for History of Exact Sciences* 2(2): 113-137.