
The Fundamental Theorem of Calculus: Connecting Derivatives and Integrals

We state and prove both parts of the Fundamental Theorem of Calculus, explore why it is the single most important result in analysis, and trace its historical development from Newton and Leibniz to Riemann and Lebesgue.

The Theorem

The Fundamental Theorem of Calculus (FTC) has two parts, each establishing a deep link between differentiation and integration — two operations that appear, at first glance, to have nothing to do with each other.

Fundamental Theorem of Calculus — Part I

Let $f: [a,b] \to \mathbb{R}$ be continuous. Define $F: [a,b] \to \mathbb{R}$ by

$$F(x) = \int_a^x f(t)\, dt$$

Then $F$ is differentiable on $(a,b)$ and $F'(x) = f(x)$ for all $x \in (a,b)$.

Fundamental Theorem of Calculus — Part II

Let $f: [a,b] \to \mathbb{R}$ be continuous and let $G$ be any antiderivative of $f$ (i.e., $G'(x) = f(x)$). Then:

$$\int_a^b f(x)\, dx = G(b) - G(a)$$


Intuition

Part I: Integration then Differentiation

Think of $f(t)$ as a rate — say, velocity. Then $F(x) = \int_a^x f(t)\,dt$ is the accumulated distance from time $a$ to time $x$. The theorem says: the rate of change of accumulated distance is the velocity itself. This is almost tautological in the physical picture, but proving it rigorously requires care.

Part II: The Evaluation Formula

Part II is the computational workhorse: to evaluate a definite integral, find any function whose derivative is $f$, then evaluate it at the endpoints. This transforms the problem of computing areas (limits of Riemann sums) into the often simpler problem of finding antiderivatives.
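The two-step recipe is mechanical enough to sketch in a few lines. A minimal illustration (using $f(x) = \sin x$ on $[0, \pi]$, an example of our own choosing, not from the theorem statement):

```python
import math

# FTC Part II recipe: to evaluate the definite integral of f over [a, b],
# find any antiderivative G and compute G(b) - G(a).
# Illustrative choice: f(x) = sin(x), with antiderivative G(x) = -cos(x).
def G(x):
    return -math.cos(x)

a, b = 0.0, math.pi
area = G(b) - G(a)
print(area)  # → 2.0, the exact area under sin on [0, pi]
```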


Proof of Part I

Proof.

We must show that $\lim_{h \to 0} \frac{F(x+h) - F(x)}{h} = f(x)$.

Step 1. By definition of $F$:

$$F(x+h) - F(x) = \int_a^{x+h} f(t)\,dt - \int_a^x f(t)\,dt = \int_x^{x+h} f(t)\,dt$$

Step 2. Therefore:

$$\frac{F(x+h) - F(x)}{h} = \frac{1}{h}\int_x^{x+h} f(t)\,dt$$

Step 3. Since ff is continuous at xx, for every ε>0\varepsilon > 0 there exists δ>0\delta > 0 such that tx<δ|t - x| < \delta implies f(t)f(x)<ε|f(t) - f(x)| < \varepsilon.

For h<δ|h| < \delta:

1hxx+hf(t)dtf(x)=1hxx+h[f(t)f(x)]dt1hxx+hf(t)f(x)dt<ε\left|\frac{1}{h}\int_x^{x+h} f(t)\,dt - f(x)\right| = \left|\frac{1}{h}\int_x^{x+h} [f(t) - f(x)]\,dt\right| \leq \frac{1}{|h|}\int_x^{x+h} |f(t) - f(x)|\,dt < \varepsilon

Since ε\varepsilon was arbitrary, F(x)=f(x)F'(x) = f(x). \square
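The limit in the proof can be watched numerically. A small sketch, assuming $f(t) = \cos t$ (our own choice) so the accumulation function's derivative at $x$ should be $\cos x$:

```python
import math

def F(x, n=10_000):
    """Accumulation function F(x) = integral of cos(t) from 0 to x,
    computed by a midpoint Riemann sum so F really is an integral."""
    dx = x / n
    return sum(math.cos((k + 0.5) * dx) for k in range(n)) * dx

# The difference quotient of F at x approaches f(x) = cos(x) as h shrinks.
x = 1.0
for h in [0.1, 0.01, 0.001]:
    quotient = (F(x + h) - F(x)) / h
    print(h, quotient, abs(quotient - math.cos(x)))
```

The error column shrinks roughly in proportion to $h$, as the proof's $\varepsilon$–$\delta$ bound predicts.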


Proof of Part II

Proof.

Let $F(x) = \int_a^x f(t)\,dt$ as in Part I, and let $G$ be any antiderivative of $f$.

By Part I, $F'(x) = f(x) = G'(x)$, so $(F - G)'(x) = 0$ for all $x \in (a,b)$.

By the Mean Value Theorem, $F - G$ is constant on $[a,b]$: there exists $C$ such that $F(x) = G(x) + C$.

Evaluating at $x = a$: $F(a) = \int_a^a f(t)\,dt = 0 = G(a) + C$, so $C = -G(a)$.

Evaluating at $x = b$:

$$\int_a^b f(t)\,dt = F(b) = G(b) + C = G(b) - G(a) \qquad \square$$


Examples

Example 1: A Direct Computation

$$\int_0^1 x^2\, dx$$

An antiderivative of $x^2$ is $G(x) = \frac{x^3}{3}$. By FTC Part II:

$$\int_0^1 x^2\, dx = G(1) - G(0) = \frac{1}{3} - 0 = \frac{1}{3}$$

Example 2: Differentiating an Integral

Let $F(x) = \int_0^x e^{-t^2}\, dt$. By FTC Part I:

$$F'(x) = e^{-x^2}$$

Note: $F(x)$ has no elementary closed form (it is related to the error function $\operatorname{erf}(x)$), yet we can compute its derivative instantly.
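The erf connection makes this checkable: $F(x) = \frac{\sqrt{\pi}}{2}\operatorname{erf}(x)$, and Python's standard library provides `math.erf`. A quick numerical check of Part I's prediction (the point $x = 0.7$ and step $h$ are arbitrary choices):

```python
import math

# F(x) = integral of e^{-t^2} from 0 to x equals (sqrt(pi)/2) * erf(x),
# so math.erf gives F even though no elementary formula exists.
def F(x):
    return math.sqrt(math.pi) / 2 * math.erf(x)

# FTC Part I predicts F'(x) = e^{-x^2}; compare with a central difference.
x, h = 0.7, 1e-5
numeric = (F(x + h) - F(x - h)) / (2 * h)
print(numeric, math.exp(-x * x))  # the two agree to many digits
```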

Example 3: Chain Rule Variant

For $F(x) = \int_0^{x^2} \sin(t)\, dt$, we apply the chain rule:

$$F'(x) = \sin(x^2) \cdot 2x$$
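This example happens to have a closed form, $F(x) = 1 - \cos(x^2)$ (by Part II with antiderivative $-\cos t$), so the chain-rule answer can be verified directly; a small sketch with an arbitrary test point:

```python
import math

# Closed form for this example: F(x) = integral of sin(t) from 0 to x^2
# equals 1 - cos(x^2).  Check the chain-rule formula F'(x) = sin(x^2) * 2x
# against a central difference of the closed form.
def F(x):
    return 1 - math.cos(x * x)

x, h = 1.3, 1e-5
numeric = (F(x + h) - F(x - h)) / (2 * h)
formula = math.sin(x * x) * 2 * x
print(numeric, formula)  # the two agree closely
```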


The Mean Value Theorem for Integrals

A useful corollary of FTC Part I:

Mean Value Theorem for Integrals. If $f: [a,b] \to \mathbb{R}$ is continuous, there exists $c \in (a,b)$ such that:

$$\int_a^b f(x)\, dx = f(c)(b - a)$$

Proof. Apply the Mean Value Theorem to $F(x) = \int_a^x f(t)\,dt$: there exists $c \in (a,b)$ with $F(b) - F(a) = F'(c)(b-a) = f(c)(b-a)$. $\square$

Geometrically, the area under the curve equals the area of a rectangle of width $b-a$ and height $f(c)$ for some well-chosen $c$.
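For a concrete instance (our own choice), take $f(x) = x^2$ on $[0,1]$: the integral is $1/3$, so the guaranteed point satisfies $f(c) = 1/3$, i.e. $c = 1/\sqrt{3}$. Since $f$ is increasing here, bisection locates it:

```python
import math

# MVT for integrals with f(x) = x^2 on [0, 1]: the integral is 1/3 (FTC
# Part II), so we seek c with f(c) * (1 - 0) = 1/3.  f is increasing on
# [0, 1], so bisection on f(c) - 1/3 finds the unique such c.
f = lambda x: x * x
target = 1 / 3  # value of the integral

lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < target:
        lo = mid
    else:
        hi = mid
print(lo, 1 / math.sqrt(3))  # both ≈ 0.57735
```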


Before the FTC: Riemann Sums

Without the FTC, computing $\int_0^1 x^2\, dx$ requires evaluating a limit of Riemann sums:

$$\int_0^1 x^2\, dx = \lim_{n \to \infty} \sum_{k=1}^{n} \left(\frac{k}{n}\right)^2 \cdot \frac{1}{n} = \lim_{n \to \infty} \frac{1}{n^3}\sum_{k=1}^{n} k^2$$

Using $\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}$:

$$= \lim_{n \to \infty} \frac{n(n+1)(2n+1)}{6n^3} = \lim_{n \to \infty} \frac{2n^3 + 3n^2 + n}{6n^3} = \frac{1}{3}$$

The FTC eliminates this laborious computation entirely.
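The contrast is easy to see numerically: the right-endpoint sums above creep toward $1/3$ with error of order $1/(2n)$, while the FTC answer is exact. A quick sketch:

```python
# Right-endpoint Riemann sums for the integral of x^2 over [0, 1]:
# error shrinks like 1/(2n), so even 10,000 rectangles give only a few
# correct digits -- while the FTC gives 1/3 exactly from x^3/3.
exact = 1 / 3
for n in [10, 100, 1000, 10_000]:
    s = sum((k / n) ** 2 for k in range(1, n + 1)) / n
    print(n, s, abs(s - exact))
```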


Historical Development

The connection between tangent problems (differentiation) and area problems (integration) was understood in various forms before Newton and Leibniz:

  • Isaac Barrow (1670) proved a geometric version of FTC in his Lectiones Geometricae.
  • Isaac Newton (c. 1666) and Gottfried Leibniz (c. 1675) independently developed calculus as a systematic method, with the FTC at its core.
  • Augustin-Louis Cauchy (1823) gave the first rigorous proof using limits.
  • Bernhard Riemann (1854) defined the integral precisely, enabling a rigorous statement of FTC.
  • Henri Lebesgue (1902) generalized the integral, leading to more powerful versions of the theorem.

Generalizations

The Lebesgue Version

If $f: [a,b] \to \mathbb{R}$ is Lebesgue integrable and $F(x) = \int_a^x f(t)\,dt$, then $F$ is absolutely continuous and $F'(x) = f(x)$ almost everywhere.

Conversely, if $G$ is absolutely continuous on $[a,b]$, then $G'$ exists a.e., is Lebesgue integrable, and $G(x) - G(a) = \int_a^x G'(t)\,dt$.

Stokes' Theorem

The FTC generalizes to higher dimensions via Stokes' theorem:

$$\int_M d\omega = \int_{\partial M} \omega$$

where $M$ is an oriented manifold with boundary $\partial M$ and $\omega$ is a differential form. When $M = [a,b]$, this reduces to $\int_a^b f'(x)\,dx = f(b) - f(a)$.

The FTC, Green's theorem, the divergence theorem, and the classical Stokes theorem are all special cases of this single formula.


Why It Matters

The Fundamental Theorem of Calculus is arguably the most important theorem in all of mathematics:

  1. Computation. It transforms integral evaluation from infinite summation into simple substitution.
  2. Physics. It underlies conservation laws, work-energy theorems, and the relationship between force and potential energy.
  3. Engineering. Signal processing, control theory, and fluid mechanics all rely on freely converting between derivatives and integrals.
  4. Unification. It reveals that differentiation and integration are inverse operations — a fact that is the starting point for all of analysis.

Summary

$$\begin{aligned} &\textbf{Part I: } \frac{d}{dx}\int_a^x f(t)\,dt = f(x) \\[8pt] &\textbf{Part II: } \int_a^b f(x)\,dx = G(b) - G(a) \quad \text{where } G' = f \\[8pt] &\textbf{Key idea: } \text{Differentiation and integration are inverse operations} \end{aligned}$$
