The Taylor Series: a curve's local, differential composition

By local, we mean that we are interested in the behavior of a function, $f(x)$, in the vicinity of a specific point, say $x_0$, in a domain, $X$. We will pin down what vicinity means quantitatively below. Consider a single-valued function, $f(x)$, that nowhere jumps in a step-wise fashion; then the local behavior of $f$ can be described analytically by its Taylor series about any point in its domain, $X$. For example, the hyperbola, $1/x$, blows up at the origin, but if we exclude the origin from the domain then the function is well-defined on each side. We just cannot expand about a point $x_0\lt 0$ and evaluate at a point $x\gt 0$, even if the two appear to lie in the same neighborhood, because the expansion cannot cross the singularity at the origin (infinite ordinate at zero abscissa).

Let the $n$'th derivative of $f(x)$ exist on the interval of interest; we write it as $(\frac{d}{dx})^nf(x)=f^{(n)}(x)$, where the parenthesized exponent denotes the order of the derivative, $n$. The existence of the $n$'th derivative implies that all lower-order derivatives also exist and are continuous there. (Note that a function counts as the zero function only if it vanishes over the whole domain, namely $\{f(x)=0: \forall{}x\in{}\mathbb{R}\}$.) For example, the exponential and trigonometric functions have non-vanishing derivatives of every order (more in the Sine Series article), whereas a polynomial has non-zero derivatives only up to its degree. This is what lets us develop the Taylor series by induction: we can integrate the $n$'th derivative, $G=f^{(n)}(x)$, over any sub-interval of the interval of smooth continuity, say $(x_0, x)$ with $x\lt x_1$, and repeat for a total of $n$ integrations. You might already anticipate that the result contains the $f^{(0)}=f$ term; by applying the rules of calculus carefully we will bring the original function to the fore, together with a quantifiable error incurred when the series is truncated, as will be demonstrated in the Sine Series article. The first of the $n$ integrations gives the following.

$$ I_1 = \int_{x_0}^{x} G dx = \int_{x_0}^{x} f^{(n)} dx = \left. f^{(n-1)} \right|_{x_0}^x = f^{(n-1)}(x) - f^{(n-1)}(x_0) $$

Since the last term is a constant and recurs at every step, for $I_m$ we label the $m$'th integration constant $c_{n-m}=-f^{(n-m)}(x_0)$, that is, minus the $(n-m)$'th derivative of $f(x)$ evaluated at $x_0$. Each of the $c$ terms, of which there will be $n$, arises from one of the $n$ iterations, $1\le{}m\le{}n$, of integrating the function $f^{(n-m+1)}(x)$ over the interval and evaluating the definite integral at its two boundaries, $x_0$ on the left and $x\le{}x_1$ on the right.
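In the notation just introduced, the first integration reads as follows, and it is this form that the next integration acts on.

$$ I_1 = f^{(n-1)}(x) + c_{n-1} $$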

$$ I_2 = \int_{x_0}^{x} I_1 dx = \int_{x_0}^{x} f^{(n-1)}(x) dx + \int_{x_0}^x c_{n-1} dx = f^{(n-2)}(x) + c_{n-2} + c_{n-1} (x-x_0) $$

$$ I_3 = \int_{x_0}^{x} I_2 dx = \int_{x_0}^{x} f^{(n-2)}(x) dx + \int_{x_0}^x [ c_{n-1} ( x - x_0 ) + c_{n-2} ] dx $$

$$ = f^{(n-3)}(x) + c_{n-3} + c_{n-2}(x-x_0) + \frac{1}{2}c_{n-1} (x-x_0)^2 $$

Since

$$\int_{x_0}^{x}(c)dx=c(x-x_0)$$

And,

$$\int_{x_0}^{x}c(x-x_0)dx = \frac{1}{2}c(x-x_0)^2$$

To be sure, this binomial of variable-plus-constant can be treated as a single term: $(x-x_0)$ has the same derivative as the monomial $x$, so the power rule applies to $(x-x_0)^k$ exactly as it does to $x^k$, and it integrates the same way for all integer exponents (except $-1$, which integrates to a logarithm).

$$ \int_{x_0}^{x}\frac{1}{2}c(x-x_0)^2dx = \frac{1}{3!}c(x-x_0)^3 $$

Here the exclamation mark indicates the factorial; $3!=3\cdot 2\cdot 1$ is built up in the denominator by successively integrating a constant three times. This holds for every $c\in{}\{c_{n-1}, c_{n-2},...\}$. The reason we don't have to break the binomial into monomials before integrating is that, as argued above, the derivative of the integrated term really is the $c(x-x_0)$ integrand, so we keep it as a single binomial term. This shows that it is the relative distance $(x-x_0)$ which defines these terms of the series; we will also see that a contextual element must be added to the series, an offset that depends not on the relative distance but on the absolute coordinate, $x_0$.
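The generic step, which is what carries the induction from one integration to the next, is

$$ \int_{x_0}^{x} \frac{c}{(k-1)!}(x-x_0)^{k-1} dx = \frac{c}{k!}(x-x_0)^{k} $$

for any constant $c$ and integer $k\ge 1$.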

Then continuing the integrations, the $m=n$'th step will have the following terms:

$$ I_n = \int_{x_0}^x I_{n-1} dx = f(x) +c_0 + \frac{c_{1}}{1!} (x-x_0) + \frac{c_{2}}{2!} (x-x_0)^2 + \cdots + \frac{c_{n-1}}{(n-1)!} (x-x_0)^{n-1} $$

Where $f(x)=D^0_x f(x)$. With some algebraic rearrangement (moving terms across the equals sign), solving $I_n$ for $f(x)$ puts $f(x)$ on the left and the remainder on the right; substituting each $c$ term back for the quantity it represents, $c_{k}=-f^{(k)}(x_0)$, the minus $k$'th derivative of $f$ evaluated at $x_0$, we obtain the final form of the Taylor series.

\begin{equation}\label{TaylorSeries_eqn} f(x) = f(x_0) + f^{(1)}(x_0) \frac{1}{1!}(x-x_0) + f^{(2)}(x_0) \frac{1}{2!}(x-x_0)^2 + f^{(3)}(x_0) \frac{1}{3!}(x-x_0)^3 +\cdots{}+R_n \end{equation}
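As a quick sanity check (an example of mine, not part of the derivation): take $f(x)=x^2$ and expand about an arbitrary $x_0$; the derivatives beyond the second vanish, so the series terminates and must reproduce the function exactly, with zero remainder.

$$ f(x_0) + f^{(1)}(x_0)(x-x_0) + \frac{f^{(2)}(x_0)}{2!}(x-x_0)^2 = x_0^2 + 2x_0(x-x_0) + (x-x_0)^2 = x^2 $$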

The remainder term, $R_n=\int_{x_0}^x\cdots{}\int_{x_0}^x G(x)dx^n$, can be analyzed by bounding the first integral. That first integral, $\int_{x_0}^x G\,dx$, is at most the length of the interval, $(x-x_0)$ with $x\gt{}x_0$, times the maximum of $G=f^{(n)}(x)$ on the interval; such a maximum exists since we are only talking about functions $f$ which are continuous and smooth on that interval. Likewise, $G$ attains a minimum on the interval, which may sit at either endpoint or anywhere in between where its first derivative goes to zero (the overall peak and pit, not a shallower local dip). So the first integral equals the length of the interval times some intermediate value of $G$, which we write as $G(x_{\text{avg}})$, its average over the interval. Carrying this constant through the remaining integrations, each subsequent step contributes another factor of $(x-x_0)$ and another integer to the denominator, so the remainder is suppressed by $n!$, which grows arbitrarily large for a smooth function, $f$. So, for the error, we have another constant (the $m=0$ term) integrated $n$ times, which gives the following:

$$ R_n = \frac{(x-x_0)^{n}}{n!} f^{(n)}(x_\text{avg}) $$

The constant term here is the $n$'th derivative of $f(x)$, with respect to $x$, evaluated at the fixed point $x_\text{avg}$, which could also be written $f^{(n)}(x_\text{avg})=\langle{}f^{(n)}(x)\rangle{}$, where the angle brackets denote the average (the same as an expectation value for a statistically representative function).
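To make this concrete with an example of my own (not part of the derivation above): for $f(x)=e^x$ expanded about $x_0=0$ and evaluated anywhere in $[0,1]$, every derivative satisfies $f^{(n)}(x)=e^x\le e$ there, so

$$ \lvert R_n \rvert \le \frac{(x-x_0)^{n}}{n!}\,\max_{[0,1]} f^{(n)} \le \frac{e}{n!}, $$

which already drops below $10^{-3}$ by $n=7$, since $7!=5040$.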

Two different arguments appear in the various forms of $f$ in the Taylor Series, eq. \eqref{TaylorSeries_eqn}: on the left-hand side (LHS) the whole function is expressed at the arbitrary point $x$, while on the right-hand side (RHS) the terms $f^{(n)}(x_0)$ are evaluated at the point $x_0$, chosen to be within the neighborhood of $x$. The series gives the most weight to the $n=0$ term, $f(x_0)$, with less weight given to higher-order terms. The local neighborhood of the point is arguably within unity, that is, within the one-dimensional ball of radius one centered at $x_0$, $\{x : \lvert x-x_0 \rvert\lt 1\}$, since the monomial terms are all powers of $(x-x_0)$, resembling the Geometric Series. The other coefficient is monotonically decreasing with $n$: $n^{-1} \gt (n+1)^{-1}\ \forall\ n\gt 0$, therefore $(n!)^{-1} \gt ((n+1)!)^{-1}\ \forall\ n\gt 0$, so the series converges there, and the deviation from the true function is controlled by taking more terms (pushing out the error, $R_n$, by way of the geometric-times-inverse-factorial coefficient).
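As a numerical illustration (my own numbers, independent of any particular function): with $\lvert x-x_0\rvert=0.5$ the monomial-over-factorial factor shrinks quickly,

$$ \frac{(0.5)^1}{1!}=0.5,\quad \frac{(0.5)^2}{2!}=0.125,\quad \frac{(0.5)^3}{3!}\approx 0.021,\quad \frac{(0.5)^4}{4!}\approx 0.0026, $$

so, provided the derivatives $f^{(n)}(x_0)$ do not grow faster than this decay, only a few terms are needed inside the unit neighborhood.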

The Monomials over a local, positive neighborhood

When studying functions (algebraically) to find significant features of position and shape (curve fitting), one starts with monomials ($x^n$, $n$ being a natural number, for the Taylor Series). The first five monomials are plotted below.

Monomials of degrees one through five: $f_1(x)=x$ is the red one (flat), $f_2(x)=x^2$ is the blue one closest to $f_1$, and $f_3(x)=x^3$ is the next power (pistachio), and so on.
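A minimal sketch that reproduces such a plot, assuming Python with matplotlib is available (the interval $[0, 1.5]$ is my own choice, and the colors will not necessarily match the original figure):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 1.5, 200)        # a local, positive neighborhood
for n in range(1, 6):                 # monomials x^1 through x^5
    plt.plot(x, x**n, label=f"$x^{n}$")
plt.xlabel("x")
plt.ylabel("$x^n$")
plt.legend()
plt.title("Monomials of degrees one through five")
plt.show()
```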

The Taylor Series of $f_h(x)$

To recap the hypotenuse function and its derivative,

\begin{equation} \label{f_h_eqn} f_h(x)=\sqrt{(x^2 + a^2)}=(x^2 + a^2)^\frac{1}{2} \end{equation}

The hypotenuse function, eq. \eqref{f_h_eqn}, as a function of $x$, or $f_h(x)$, for $x$ on the interval $[0,10]$.

\begin{equation} \label{f_h_prime} f_h'(x)=f'(g)g'=\frac{1}{2}(x^2 + a^2)^{-\frac{1}{2}}(2x) = \frac{x}{(x^2 + a^2)^{\frac{1}{2}}} \end{equation}

Here we recognize a slope with an asymptote of unity, now quantified: for $x \gt 0$ we can divide numerator and denominator by $x$ to get $f_h'=\frac{1}{(1 + \frac{a^2}{x^2})^{\frac{1}{2}}}$, which clearly approaches 1 as $x$ increases. At the other end, $f_h'(0)=0$, which shapes the Taylor series below: the first non-vanishing term after the constant term is the second-order one.

The slope of the hypotenuse function, as a function of $x$, or $f_h'(x)$, for $x$ bounded from below by the door thickness, $a=1.5$.
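For a concrete check (numbers mine): at one door-width out the slope is still well below its asymptote, while at ten door-widths it is nearly there.

$$ f_h'(a) = \frac{a}{\sqrt{2a^2}} = \frac{1}{\sqrt{2}} \approx 0.707, \qquad f_h'(10a) = \frac{10a}{\sqrt{101\,a^2}} = \frac{10}{\sqrt{101}} \approx 0.995 $$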

\begin{equation} \label{f_h_dprime} f_h''(x)=f''(g)(g')^2 + f'(g)g''=\frac{-1}{4}(x^2 + a^2)^{-\frac{3}{2}}(4x^2) + \frac{1}{2}(x^2 + a^2)^{-\frac{1}{2}}(2) = -\frac{x^2}{(x^2 + a^2)^{\frac{3}{2}}}+ \frac{1}{(x^2 + a^2)^{\frac{1}{2}}} \end{equation}

To quantify the behavior near $x=0$, where the slope is well below one, we keep just the first two non-vanishing terms of the Taylor series of $f_h$: the constant term plus the second-order term (the first-order term drops out since $f_h'(0)=0$, and from eq. \eqref{f_h_dprime}, $f_h''(0)=\frac{1}{a}$).

$$ f_h(x)=\sum_{n=0}^{\infty} \frac{f_h^{(n)}(0)}{n!}(x)^n \approx f_h(0) + \frac{f_h''(0)}{2!}(x^2) = a + \frac{1}{2a}(x^2) $$

The Taylor series of the hypotenuse function, as a function of $x$: $f_h(x)$ vs $1.5 + \frac{1}{3}x^2$. Here the red stroke is the hypotenuse function for a fixed side of $1.5$, and the blue is its Taylor series approximation up to second order (and, since the third-order term vanishes, up to third order too).
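A minimal numerical sketch of this comparison, assuming Python with NumPy is available (the sample points are my own choice):

```python
import numpy as np

a = 1.5                                   # the fixed side (door thickness)
x = np.array([0.0, 0.5, 1.0, 1.5, 3.0])   # sample points near the expansion point x0 = 0

exact = np.sqrt(x**2 + a**2)              # hypotenuse function f_h(x)
approx = a + x**2 / (2 * a)               # two-term Taylor series about 0: a + x^2/(2a)

for xi, e, p in zip(x, exact, approx):
    print(f"x = {xi:4.1f}   f_h = {e:6.4f}   Taylor = {p:6.4f}   error = {e - p:+.4f}")
```

The two-term series always overestimates here, since $(a + \frac{x^2}{2a})^2 = x^2 + a^2 + \frac{x^4}{4a^2}$, and the error grows quickly once $x$ leaves the neighborhood of the expansion point.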


The Taylor Series of $b^x$ and $\log_b(x)$

Recalling the first derivatives of $b^x$ and $\log_b(x)$ from the Calculus of Tangents article, we have: $D_x b^x = b^x \ln(b)$, and $D_x \log_b(x) = \frac{1}{x \ln(b)}$.

The Taylor series expansion of $b^x$ about $x=0$ follows readily, since $f^{(n)}(x)=b^x \ln^n(b)$:

$$ b^x = b^0 + b^0 \ln(b) x + \frac{b^0 \ln^2(b)}{2!} x^2 + \cdots = 1 + \ln(b) x + \frac{\ln^2(b)}{2!} x^2 + \cdots $$

Or for $b=e$, the Taylor series expansion is a little neater, with $f^{(n)}(x)=e^x$:

$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots $$
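Evaluating the partial sums at $x=1$ (arithmetic mine) shows how quickly they approach $e\approx 2.71828$:

$$ 1,\quad 2,\quad 2.5,\quad 2.667,\quad 2.708,\quad 2.717,\ \ldots $$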

And around $x=x_0$, we have:

$$ e^x|_{x_0} = e^{x_0} + e^{x_0}(x-x_0) + \frac{e^{x_0}}{2!}(x-x_0)^2 + \cdots $$

The $n$'th derivative of the arbitrary-base logarithm is $f^{(n)}(x)=D^n_x \log_b(x) = \frac{(-1)^{n-1}(n-1)!}{x^n \ln(b)}$ for $n\ge 1$. So, the Taylor series expansion of $\log_b(x)$ about $x=1$ is given by:

$$ \log_b(x)\vert_{x_0= 1} = \log_b(1) + \frac{1}{\ln(b)}(x-1) - \frac{1}{2\ln(b)}(x-1)^2 + \frac{1}{3\ln(b)}(x-1)^3 - \cdots $$

Or more neatly for $b=e$, with $f^{(n)}(x)=D^n_x \log_e(x) = \frac{(-1)^{n-1}(n-1)!}{x^n}$:

$$ \log_e(x)\vert_{x_0= 1} = (x-1) - \frac{1}{2}(x-1)^2 + \frac{1}{3}(x-1)^3 - \frac{1}{4}(x-1)^4 +\cdots $$
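As a quick numerical check (numbers mine), the first four terms at $x=1.5$ give

$$ 0.5 - \frac{0.25}{2} + \frac{0.125}{3} - \frac{0.0625}{4} \approx 0.5 - 0.125 + 0.0417 - 0.0156 \approx 0.401, $$

close to the true value $\ln(1.5)\approx 0.4055$; the series converges more slowly than the exponential's because its coefficients shrink only like $1/n$, not $1/n!$.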

The logarithm can only be expanded and evaluated for $x, x_0 \gt 0$, since it is so closely related to the hyperbola: its derivative is $\frac{1}{x\ln(b)}$, which blows up at the origin, just as discussed at the top of this article.

