The motivation is the anti-derivative (a function transformation innately related to the finite area under an action-at-a-distance mathematical model curve) of the hyperbola's simplest formula, referred to as inverse-$x$, $1/x$, or $x^{-1}$: a curve described by René Descartes in his 1637 treatise, and solved by Gottfried Leibniz in 1676, the first publisher of the formalism of tangents (calculus) [1]. Leonhard Euler had to be well familiar with trigonometry, tangents, and logarithms to solve for the total quantity to which inverse-length-squared, accumulated from one onward, accretes.

The slope of a function at a given point $x$ is the derivative of that function at $x$, and that is essentially what is used in physical laws (e.g. Newton's Laws of Mechanics, the Laws of Thermodynamics, Maxwell's Electrodynamics, etc.), be they three-dimensional fields and fluxes, or beyond; so one must be acquainted with the Leibnizian process in one variable. If the function in question is formulated as $x^n$, or more generally $Ax^n$ ($A$ being a constant real number and $n$ being a cardinal number, which is still pretty useful), then the derivative is simply $nAx^{n-1}$. The constant function (exponent $n=0$) has zero slope, and it sits adjacent to the negative powers of $x$ starting with negative one; the negative integer powers likewise have tangent slopes according to the Leibnizian formula, writing $x^{-n}=(x^n)^{-1}$.

The formula is derived as follows, using Leibniz's convention for $\Delta$, which means the change in whatever is contiguous (not separated by $+$ or $+(-)=-$) to its right: given $f(x)=x^n$, we calculate the change in $f$ at $x$ (any variable) for an arbitrary deviation from $x$ by $\Delta x$; the arbitrariness in this case will be such that the distance from $x$ is negligible, by algebraic method. One can think of the derivative of $f_n$ at $x$ as being associated with a right triangle with the tangent as the hypotenuse, which simultaneously adjusts both the vertical and horizontal sides: $$ \Delta f(x) = f(x+\Delta x) - f(x) = (x + \Delta x)^n - x^n $$ This says that if we want to know the difference between $f$ at $x$ and $f$ at some lesser/greater amount $\Delta x$ away from $x$, then we need a couple more tools. First we need the Binomial theorem, for cardinal number $n\geq 0$ (first published by Bhāskara II in 1150 A.D., along with the factorial concept): $$(a + b)^n= n! \sum\limits_{k=0}^n (k!(n-k)!)^{-1} a^{n-k} b^k$$ The factorial notation ($!$) was introduced in 1808 A.D. and is shorthand for the multiplicative sequence $\prod\limits_{q=1}^n q$ (where $ab=ba$, the commutative property of integers and reals, is used: $(1)(2)\cdots(n)=(n)(n-1)\cdots(1)$): $$ q!=(q)(q-1)(q-2)\cdots(q-q+1) $$ where the last term can also be written $(q-(q-1))$. For $q=5$, we have $5!=(5)(4)(3)(2)(1) = 120$. It should be noted in passing that the zero-factorial is defined as unity, so formulas work with $0!$. Applying the theorem to the binomial $x+\Delta x$ yields a summation of $n+1$ terms, each of the form $A_k\, x^{n-k} (\Delta x)^{k}$.
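The expansion can be checked directly (a minimal Python sketch; the standard-library `math.comb` supplies the binomial coefficients $n!\,(k!(n-k)!)^{-1}$, and the function name and sample values are illustrative):

```python
from math import comb

def binomial_terms(x, dx, n):
    """The n+1 terms of (x + dx)^n: comb(n, k) * x^(n-k) * dx^k."""
    return [comb(n, k) * x**(n - k) * dx**k for k in range(n + 1)]

x, dx, n = 2.0, 1e-6, 5
terms = binomial_terms(x, dx, n)

# The full sum reproduces (x + dx)^n ...
assert abs(sum(terms) - (x + dx)**n) < 1e-9
# ... and the leading terms are x^n and n * x^(n-1) * dx
assert terms[0] == x**n
assert abs(terms[1] - n * x**(n - 1) * dx) < 1e-18
```

Each successive term carries another factor of $\Delta x$, which is why only the first two matter when $\Delta x \ll 1$.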

It may seem an intractable formula, but if one remembers that $\Delta x$ is the distance from our point of reference $x$, then it becomes clear that we can neaten up this formula if we designate the deviation as nominal only, so that $(\Delta x)^2 \ll (\Delta x)^1$ (i.e. the square of the infinitesimal is much, much less than a single factor of the infinitesimal), which is true in so far as the infinitesimal is much, much less than unity, $\Delta x \ll 1$. And so it is true that more numerous factors of $\Delta x$ are (even more) negligible (than the squared infinitesimal term). So we keep only the first two terms of the binomial expansion, labelling with the big-o ($\mathcal{O}$) the largest of the terms we are omitting from consideration: $$ \Delta f= n!\left[ (0!\,n!)^{-1}x^n (\Delta x)^0 + (1!\,(n-1)!)^{-1}x^{n-1}(\Delta x)^1 + \mathcal{O}\big((\Delta x)^2\big) \right] -x^n $$ $$ = n!(n!)^{-1}x^n + n!((n-1)!)^{-1}x^{n-1}\,\Delta x + \mathcal{O}\big((\Delta x)^2\big) -x^n $$ And since $q^0=1$, $(n!)(n!)^{-1}$ is just unity, and $(n!)((n-1)!)^{-1}=n$, we have (dropping the negligible terms): $$ \Delta f= x^n + nx^{n-1}\Delta x - x^n = nx^{n-1}\, \Delta x $$ So in the limit $\Delta x\to0$, the derivative gives us the slope of the function precisely at the point $x$. Since the slope-of-the-tangent $=$ rise-over-run, $\Delta f/ \Delta x$, any term of the function which goes like $Ax^n$ in $x$ contributes a term $nAx^{n-1}$, since one can multiply the function in the derivation by $A$ without changing the dependence on $n$.
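The derived slope can be verified numerically with rise-over-run at a nominal deviation (an illustrative Python sketch; the step size $10^{-7}$ and the sample values of $A$, $n$, and $x$ are assumptions standing in for the infinitesimal and the arbitrary point):

```python
def slope(f, x, dx=1e-7):
    """Rise-over-run, delta-f over delta-x, at a nominal deviation dx."""
    return (f(x + dx) - f(x)) / dx

# The slope of A*x^n should be n*A*x^(n-1) at any sample point
A, n, x = 3.0, 4, 1.5
approx = slope(lambda t: A * t**n, x)
exact = n * A * x**(n - 1)
assert abs(approx - exact) / exact < 1e-5
```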

Likewise the reverse: the anti-derivative of $Ax^n$ is $A(n+1)^{-1}x^{n+1}$, for every $n$ except $n=-1$. One can verify that the derivative of the anti-derivative is the same function of $x$, where the algebra $(q+1)(q+1)^{-1}=1$ is employed. Which is to say: it isn't obvious that the slope of the Natural logarithm of $x$ is $x^{-1}$, but the derivatives and anti-derivatives of every other power of $x$ are dictated by that Leibnizian formula.
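Both claims can be probed with the same rise-over-run check (illustrative Python; the sample points, step size, and tolerances are assumptions):

```python
from math import log

def slope(f, x, dx=1e-7):
    # rise-over-run at a nominal deviation dx
    return (f(x + dx) - f(x)) / dx

# The slope of the Natural logarithm at x matches x^-1 ...
for x in (0.5, 1.0, 2.0, 7.0):
    assert abs(slope(log, x) - 1.0 / x) < 1e-5

# ... while the derivative of the anti-derivative A/(n+1) * x^(n+1)
# returns the original A*x^n, per the Leibnizian formula
A, n, x = 2.0, 3, 1.7
antiderivative = lambda t: A / (n + 1) * t**(n + 1)
assert abs(slope(antiderivative, x) - A * x**n) < 1e-4
```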

The logarithm was revealed by Euler to be the inverse of the exponential function (1748). Given a variable $y$ related to $x$ by its logarithm ($x$ being any positive number, whole or in-between, while $y$ may be positive, negative, or zero), $x=b^y$ is the same statement as $y=\log_b(x)$. For exponential-base $2$, it can be stated like so: $$ y=\log_2(x) \rightarrow x=2^y $$ Likewise, the Natural logarithm is the inverse of the Natural-based exponent.
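The inverse relationship is easy to exercise with the standard library (a small Python sketch; `math.log(x, b)` computes $\log_b(x)$, and the sample values are illustrative):

```python
from math import exp, log

# y = log_2(x)  <=>  x = 2^y, shown for x = 8
x = 8.0
y = log(x, 2)            # log base 2 of 8 is 3
assert abs(y - 3.0) < 1e-12
assert abs(2**y - x) < 1e-12

# The Natural logarithm inverts the Natural-based exponent
u = 1.234
assert abs(log(exp(u)) - u) < 1e-12
```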

The Natural exponential function is also encountered in Statistical mechanics (pioneered by Yankee physicist Josiah Willard Gibbs, after the Civil War), where it is found with an exponent of energy, as opposed to $x$ alone, the identity function.

The irrational number $e$ ($2.718282\ldots$) can be calculated in myriad ways ([Dunham]), one of which will be demonstrated with graphical plots. It was posited by Leonhard Euler that the natural base can be found by solving the equation $b^x=1+x$ for small $x$, where one orients with the equation by first noting that both left and right sides are easily satisfied at $x=0$. For points very close to $x=0$, regardless of the base $b$, the curve is approximately a line, of varying slope, all passing through the point $(0,1)$: $$ b^x=1+kx $$ Because we know $b^0=1$ (regardless of $b$; more at zero-exponent), the curves of different bases go through the point $(0,1)$, and it turns out they have different slopes, with the "natural" base having a slope of unity ($k=1$).
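One way to locate the natural base numerically, following the slope-of-unity criterion above, is to bisect on the slope of $b^x$ at $x=0$ (an illustrative Python sketch; the bracketing interval $[2, 3]$, the finite-difference step, and the iteration count are assumptions):

```python
def slope_at_origin(b, h=1e-8):
    """Rise-over-run of b^x at the common crossing point (0, 1)."""
    return (b**h - 1.0) / h

# Bisect for the base whose tangent at (0, 1) has slope k = 1
lo, hi = 2.0, 3.0
for _ in range(50):
    mid = (lo + hi) / 2.0
    if slope_at_origin(mid) < 1.0:
        lo = mid
    else:
        hi = mid

print(round((lo + hi) / 2.0, 6))  # ≈ 2.718282
```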

Below are plots of different bases (values of $b$ constant over the interval), with lines drawn at slopes $+0.1$ greater and $-0.1$ less than the slope of the curve at the point $(0,1)$, to demonstrate a behavior common to all exponentials regardless of the characteristics of the function, $f(x)$, in the exponent position ($e^{f(x)}$). By looking at the plot of $b^x$, for some constant $b \in \{2, e, 3\}$, and drawing a line in the right place (tangent to the exponential curve, at the point where $f(x)=x=0$), we demonstrate one way. First, inspect a plot of the exponential function with base $b=2$, below.
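The differing slopes at the shared crossing $(0,1)$ can be tabulated directly (illustrative Python; the finite-difference step is an assumption, and the slope comes out as $\ln b$ for each base):

```python
from math import e, log

def slope_at_origin(b, h=1e-8):
    # rise-over-run of b^x at the point (0, 1)
    return (b**h - 1.0) / h

# Every base crosses (0, 1), each with its own slope k = ln(b);
# only the "natural" base gives k = 1
for b in (2.0, e, 3.0):
    assert abs(slope_at_origin(b) - log(b)) < 1e-6
```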

The pistachio curve and the lines are the same width, so looking at the triple crossing, drawn at the exponent of zero, point $(0,1)$, one can see that the red and blue lines have slopes just a little less and a little more than the slope of the exponential curve at that point.

Next step: plots of $\ln(x)$ (abbreviation for the logarithm with natural base) and $1/x$ (undefined at zero), that is, plots showing the slope of $\ln(x)$ on the interval $0.0005 \pm 0.0001$ ($x$ close to zero), around $x=1$ (something negative, because $1\lt e$), and around $x=e$ (where the ordinate is equal to one). Also, in a subsequent, post-calculus article, the lines drawn will have more terms added to the approximation of the exponent around zero.
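The values those plots will display can be previewed numerically (illustrative Python; the sample abscissas are taken from the intervals named above, and the step size and tolerance are assumptions):

```python
from math import e, log

def slope(f, x, dx=1e-8):
    # rise-over-run at a nominal deviation dx
    return (f(x + dx) - f(x)) / dx

# Slope of ln(x) versus 1/x: near zero, at x = 1, and at x = e
for x in (0.0005, 1.0, e):
    assert abs(slope(log, x) - 1.0 / x) / (1.0 / x) < 1e-3
```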

1. Scriba, Christoph J. (1963). "The inverse method of tangents: A dialogue between Leibniz and Newton". *Archive for History of Exact Sciences* 2 (2): 113–137.

Copyright © 2019-2020 Gabriel Fernandez