Integrals: The Fundamental Theorem of Calculus

The area under a curve, however the curve may wander, is well defined between the curve and the $x$-axis so long as the curve does not double back on itself. The Riemann Integral is derived much as the derivative was, as follows.

The area between a function and the line $y=0$ (the $x$-axis) is named the integral of that function, $\int_a^b f\, dx = \left. F \right|_a^b$, and we will see that it is the antiderivative of the function. The antiderivative is the inverse operation to the derivative, and we will now construct it.

First we define a new function, labelled capital $F$, as the area between the curve and the $x$-axis. This area is a function of the interval $[x_1, x_2]$ and of $f$ itself: $F = F(x_1, x_2, f)$. The change in $F$ for a given change $b$ in the interval, applied at the upper end ($x_2$), is $\Delta F = F(x_1, x_2 + b, f) - F(x_1, x_2, f)$. Using the Greek letter Delta (for the root of difference, diaphora) to symbolize the difference of the quantity:

$$ \Delta F = (b)\times(f(x_2+b)) $$

This is the formula for the area of a rectangle of width $b$ and height $f(x_2+b)$. As long as the function is continuous, the difference between $f(x_2+b)$ and $f(x_2)$ can be made smaller than any $\epsilon$ by shrinking $b$, so this construction is meaningful.

So the change in the area is proportional to the change in the interval, $\Delta x = b$, and to the value of $f$ at $x = x_2 + b$. In the figures below, the constant area between the function's low point and zero is left out.

Figure 1: A plot of a portion of a well-known function, here a fifth-degree polynomial (a sum of three terms of the form coefficient times $x^n$), on the interval $[x_1, x_2]$.

The area between the function $f$ and the line $y=0$ (the line with zero slope and zero $y$-intercept) over the interval $[x_1, x_2]$ is the filled region.

Figure 2: A plot of the same function, but with a change to the area illustrated as a rectangle $b$-wide and $f(x_2+b)$-high.

The sum of rectangle areas, with more rectangles over the given interval, is more accurate, though with diminishing returns: the neglected part of the change in $F$ per change in $x_2$ shrinks as $b \ll 1$. In the limit of fine partitions, the change in area per change in the upper bound is

$$ \frac{\Delta F(x_1, x_2, f)}{\Delta x_2} = f(x_2) $$

The difference in area acquired by extending the interval's upper boundary by $b$ is simply the height of the function at $x_2+b$ times the width, the differential parameter $b$. The quantity of such area is, in general, a function of the interval endpoints and of $f$, while the fourth bounding line, the $x$-axis, is held constant.

So the area bounded between $f(x)$ and the $x$-axis over the interval $[x_1, x_2]$ satisfies the differential equation $D_{x_2} F = f$, which has the same form as the derivative formula for $f$ (compare to $D_x f(x)$), except the result of the derivative is now $f$ itself. With a change of variable $x_2 \to x$ while keeping $x_1$ constant, we write:

$$ \lim\limits_{b\to 0}\frac{F(x_1, x_2+b)-F(x_1, x_2)}{b} $$

$$ =D_{x_2} F(x_1, x_2) = f(x_2) $$

So for arbitrarily small $b=\Delta x$, the ratio becomes the derivative of $F$ at $x_2$, and is equal to $f(x_2)$. This immediately tells us that the area function, $F$, is the antiderivative of the function $f$.
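This limit can be checked numerically. In the sketch below, the integrand $\sin(x)$, the sample interval, and the rectangle count are illustrative choices, not from the text: the area function $F$ is approximated with a Riemann sum, and its difference quotient at the upper bound reproduces $f(x_2)$.

```python
import math

def f(x):
    # Illustrative integrand; any continuous function works here.
    return math.sin(x)

def area(x1, x2, n=100_000):
    # Left-endpoint Riemann sum approximating F(x1, x2),
    # the area between f and the x-axis over [x1, x2].
    h = (x2 - x1) / n
    return sum(f(x1 + i * h) for i in range(n)) * h

x1, x2, b = 0.0, 1.0, 1e-4
# (F(x1, x2 + b) - F(x1, x2)) / b should approach f(x2) as b shrinks.
rate = (area(x1, x2 + b) - area(x1, x2)) / b
print(rate, f(x2))  # both close to sin(1) ≈ 0.8415
```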

The area can be thought of as a (finite) series: a summation with the function in question as kernel, running from the interval's lower bound to its upper bound, with every term multiplied by some differential parameter $k$, the uniform width of the rectangles.

The $k$-wide rectangles add up to an area approximating the area between the function $f(x)$ and the $x$-axis (the abscissa) over the given interval $[a, b]$. So the change in that total area when the upper bound $b$ is extended by one rectangle width is the height of the curve at $b$ times that width: $F(b+k) - F(b) \approx k\, f(b)$.
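A minimal sketch of this rectangle summation (the integrand $x^2$ and the interval $[0, 1]$ are illustrative choices; the exact area there is $1/3$):

```python
def riemann_sum(f, a, b, n):
    # Sum of n rectangle areas of uniform width k = (b - a) / n,
    # each with height f taken at the rectangle's left edge.
    k = (b - a) / n
    return sum(f(a + i * k) for i in range(n)) * k

# Area under f(x) = x^2 on [0, 1]; the exact value is 1/3.
f = lambda x: x * x
for n in (10, 100, 1000):
    print(n, riemann_sum(f, 0.0, 1.0, n))
# The error shrinks roughly in proportion to the rectangle width 1/n.
```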

For historical placement of this notion: the antiderivative of $x^{-1}$ was found by Gottfried Leibniz in 1676, the first to publish the formalism of calculus [1].

The example polynomial used above, $f$, is the Taylor series of the sine of four times the argument, truncated after the fifth-degree term. The relationship between the integral and the derivative of a function is known as the Fundamental Theorem of Calculus.

Polynomial Integrand

For the example curve, depicted in the above figures, the integral of the polynomial is evaluated term by term using the antiderivative of the monomial.

$$ \int_{x_1}^{x_2} \left(4x - \frac{32}{3}x^3 + \frac{128}{15}x^5\right)dx $$

The integral of a sum of functions (in this case, a trinomial) is the sum of the integrals of those functions. This follows from the linearity of the derivative: since the derivative of a sum is the sum of the derivatives, the antiderivative shares this property.

$$ D_x\sum_i f_i(x)=\sum_i D_x f_i(x) $$

To see that $D_x (f_i + f_{i+1}) = D_x f_i + D_x f_{i+1}$, expand the limit definition:

$$ \frac{d}{dx}(f_i + f_{i+1}) = \lim_{dx\to 0}\frac{f_i(x+dx) + f_{i+1}(x+dx)-f_i(x) - f_{i+1}(x)}{dx} $$

$$ = \frac{df_i}{dx}+\frac{df_{i+1}}{dx} $$

So the integral of the function in the figures is:

$$\begin{equation}\label{integral_f} = 2x^2 \Big|_{x_1}^{x_2} - \frac{8}{3} x^4 \Big|_{x_1}^{x_2} + \frac{64}{45} x^6 \Big|_{x_1}^{x_2} \end{equation} $$

Since the derivative of eq. \eqref{integral_f} is $f$, we verify that the antiderivative is correct.
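That verification can be carried out numerically. The sketch below compares a centered difference quotient of the antiderivative $F(x) = 2x^2 - \frac{8}{3}x^4 + \frac{64}{45}x^6$ against $f$ at a few sample points (the points and the step size are arbitrary choices):

```python
def f(x):
    # The example polynomial from the text.
    return 4*x - (32/3)*x**3 + (128/15)*x**5

def F(x):
    # Its term-by-term antiderivative, eq. (integral_f).
    return 2*x**2 - (8/3)*x**4 + (64/45)*x**6

h = 1e-6
for x in (-0.5, 0.0, 0.3, 0.7):
    dF = (F(x + h) - F(x - h)) / (2 * h)  # centered difference quotient
    print(x, dF, f(x))  # dF matches f(x) at every sample point
```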

Geometric Series Approximation

Integral of $r^x, \forall\, 0 \lt r\lt 1$

The integral of $r^x$ is the continuous summation, via the Riemann Integral, of the exponential with a proper fraction $r$ (rather than $e$) as base.

$$ \sum_{n=0}^\infty r^n \to \int_0^\infty r^x dx $$

With $r^x=e^{\ln (r)\, x}$, we have:

$$ \int_0^\infty r^x dx = \frac{1}{\ln (r)} r^x |_0^\infty = \frac{-1}{\ln (r)} $$
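A numerical check of this result (the integration cutoff and rectangle count below are illustrative choices; the tail of $r^x$ beyond the cutoff is negligible for the sampled ratios):

```python
import math

def integral_r_pow_x(r, upper=200.0, n=200_000):
    # Midpoint-rule approximation of the integral of r^x from 0 to a
    # large cutoff; for 0 < r < 1 the integrand decays to zero.
    h = upper / n
    return sum(r ** ((i + 0.5) * h) for i in range(n)) * h

for r in (0.25, 0.5, 0.9):
    # Numerical integral vs. the closed form -1/ln(r).
    print(r, integral_r_pow_x(r), -1.0 / math.log(r))
```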

This of course should be compared to the series over a range of proper fractions. The comparison is similar to approximating the Harmonic Numbers by the integral of the hyperbola, in that the result is asymptotically accurate.

| $r$ | $\frac{1}{1-r}$ | $\frac{-1}{\ln (r)}$ | $\Delta$ |
| --- | --- | --- | --- |
| $0.25$ | $\frac{4}{3}$ | $0.72$ | $0.61$ |
| $\frac{1}{3}$ | $1.5$ | $0.91$ | $0.59$ |
| $0.5$ | $2$ | $1.44$ | $0.56$ |
| $0.8$ | $5$ | $4.48$ | $0.52$ |
| $0.9$ | $10$ | $9.49$ | $0.51$ |
| $0.9999$ | $10,000$ | $9,999.50$ | $0.50$ |
Table 1. The Geometric Series for the ratios listed, compared to the integral of the exponential with that ratio as base. The pattern demonstrated is a decreasing error with increasing ratio, and an approximation that always undershoots.
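The table's columns can be reproduced in a few lines (the output formatting is my choice; the ratios are those listed above):

```python
import math

# Reproduce Table 1: the geometric series value 1/(1-r), its continuous
# approximation -1/ln(r), and their difference.
for r in (0.25, 1/3, 0.5, 0.8, 0.9, 0.9999):
    series = 1.0 / (1.0 - r)
    approx = -1.0 / math.log(r)
    print(f"{r}\t{series:.2f}\t{approx:.2f}\t{series - approx:.2f}")
# The difference falls from about 0.61 toward 0.50 as r approaches 1.
```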

The approximation is more accurate the closer $r$ is to one, since the deviation stays a nearly fixed amount of undershoot even as the series' value grows. Compare this to the Harmonic Numbers' approximation (related to the Euler constant): the integral of the hyperbola undershoots the Harmonic Number series for the same reason that $-1/\ln(r)$ undershoots $1/(1-r)$, namely that the continuous approximation decreases continuously while the corresponding series term is constant over each unit of abscissa.

Let's look at the set of proper fractions as the domain interval for the two functions, to compare the Geometric Series with our continuous approximation graphically.

Figure 3. The Geometric Series, $f(r)=\frac{1}{1-r}$, is drawn here in red, and its continuous approximation, $f(r) = \frac{-1}{\ln(r)}$, in cobalt; the approximation is the strictly lower curve.
Figure 4. Error function: the difference between the Geometric Series and its continuous approximation, $f(r) = \frac{1}{1-r} + \frac{1}{\ln(r)}$, is drawn here to emphasize the pattern of the error, which has an upper bound of one and a lower bound of one-half.

Integration By Parts

Integrals are everywhere in physics, and over and over again the method employed is Integration By Parts. An integral can be solved by breaking the integrand into two factors, when possible, using the following method.

When the derivative of a product function, $F'=(UV)'$, is integrated, you get the function back plus a constant.

$$ \begin{equation}\label{FTC} \int F' dx = F + \text{constant} \end{equation} $$

The Fundamental Theorem of Calculus, eq. \eqref{FTC}, is all we need for the I.B.P. theorem.

We can look at the two terms of a product function's derivative and use the identity created by applying the Product Rule to the product function (see the earlier section on the Product Rule).

$$ \begin{equation} \label{IBP} \int (UV)' dx = \int U'V dx + \int UV' dx = UV + \text{constant} \end{equation} $$

The Integration By Parts formula, eq. \eqref{IBP}, is used by rearranging it to express one of the two additive integrals in terms of the other, $\int UV'\, dx = UV - \int U'V\, dx$. This makes the job of integration simpler because the decomposition trades one integral for a smaller one, with one factor of the integrand always working best as the differentiated (primed) factor.
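As a concrete sketch, take $U = x$ and $V = e^x$ on $[0, 1]$ (an illustrative choice, not an example from the text) and check both sides of $\int UV'\,dx = UV - \int U'V\,dx$ numerically:

```python
import math

def midpoint(f, a, b, n=100_000):
    # Midpoint-rule numerical integral, used to check both sides.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Integration by parts with U = x, V = e^x on [0, 1]:
#   integral of U V'  =  [U V] at the bounds  -  integral of U' V
lhs = midpoint(lambda x: x * math.exp(x), 0.0, 1.0)
boundary = 1.0 * math.exp(1.0) - 0.0 * math.exp(0.0)
rhs = boundary - midpoint(lambda x: math.exp(x), 0.0, 1.0)
print(lhs, rhs)  # both ≈ 1.0
```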

Integrating Our Simple Function

The antiderivative of a function is generally not obvious, and our example function, the one-half power of a binomial, $f(x)=\sqrt{1+x^2}$, is no exception.

$$ \begin{equation}\label{simple_integral} F = \int \sqrt{1+x^2} dx \end{equation} $$

Since the function has an inner function, $1+x^2$, you can't just use the Power Rule on the outer function, $\sqrt{x}$: the Chain Rule requires us to account for the derivative of the inner function. Next, I.B.P., eq. \eqref{IBP}, can't simply be applied in the current form because $f$ is not a product; it's a single composite function. To solve for $F$, the integral of $f$, eq. \eqref{simple_integral}, we must change its form with a change of variable.

Change of Variable

Since we have a composite function in the integrand, we must use $u$-substitution. The letter chosen is reasonably terse shorthand for the Unknown intermediate function. $u$-substitution is the process of transforming the domain so that the form of the integrand in the new domain's variable is different, and amenable to the by-parts integration method described above. This change of variable is called a Point Transformation (as opposed to function transformations like the Fourier transform).

Figure 5. The tangent function around the origin, on the interval $(-\pi/2 + 0.07, \pi/2 - 0.07)$.

The tangent function is a valid substitution for $x$ because it is continuous and strictly increasing on $(-\pi/2, \pi/2)$, mapping that open interval onto all of $\mathbb{R}$.

This invertibility ensures a unique mapping, or bijection, between the variable $x$ and the angle $\theta$. If two functions map separate domains to the same range via continuous bijections, they are compatible substitutes.

Changing the variable has consequences for the integration beyond the composite function. Recall that the width of the infinitesimal slice of area under the curve in Figure 2 is some width in the variable $x$; under the substitution, $dx$ transforms into a new slice width, $g(\theta)\,d\theta$, determined by the derivative of the point transformation: $dx = \frac{dx}{d\theta}\,d\theta$.
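A numerical sketch of this slice transformation (the interval $[0, 1]$ is an illustrative choice): with $x = \tan\theta$ we have $\sqrt{1+x^2} = \sec\theta$ and $dx = \sec^2(\theta)\,d\theta$, so the integrand in the new variable is $\sec^3\theta$, and both forms give the same area.

```python
import math

def midpoint(f, a, b, n=100_000):
    # Midpoint-rule numerical integral.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Original integral: sqrt(1 + x^2) integrated over x in [0, 1].
original = midpoint(lambda x: math.sqrt(1.0 + x * x), 0.0, 1.0)

# After x = tan(theta): sqrt(1 + tan^2) = sec and dx = sec^2 d(theta),
# so the integrand becomes sec^3 on [arctan(0), arctan(1)] = [0, pi/4].
substituted = midpoint(lambda t: 1.0 / math.cos(t) ** 3, 0.0, math.pi / 4.0)

print(original, substituted)  # the two values agree
```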


© 2026 Gabe Fernandez. All rights reserved.