
Maclaurin Series: Formula, Expansion, and Examples

A practical guide to Maclaurin series covering the core formula, common expansions, convergence rules, and real-world applications in numerical methods, physics, and machine learning.
Apr 9, 2026  · 9 min read

Some functions are too complex to work with directly - so mathematicians figured out how to fake them with polynomials.

That's the basic idea behind a Maclaurin series. It represents a function as an infinite sum of polynomial terms, each built from the function's derivatives at zero. The result is something you can compute with, even when the original function is too complex.

You can think of a Maclaurin series as a special case of the Taylor series, just centered at zero. That constraint makes it simpler to derive and easier to apply.

In this article, I'll cover the Maclaurin series formula, walk through the most common expansions, and show you how to interpret and apply them.

What Is a Maclaurin Series?

A Maclaurin series represents a function as an infinite sum of terms built from its derivatives at zero.

Each term is a polynomial - a power of x scaled by a derivative value. When you combine enough of these terms together, you get a polynomial that behaves just like the original function, at least near zero.

Approximating a complex function with a polynomial is the core idea behind the Maclaurin series. Polynomials are easy to compute, differentiate, and integrate. Most other functions aren't.

Maclaurin Series vs. Taylor Series

A Taylor series approximates a function as an infinite polynomial centered at any point a. You pick the point, build the series around it, and get a polynomial that works well near that point.

A Maclaurin series is just a Taylor series where a = 0. That's the only difference.

Centering at zero simplifies the math because the polynomial terms drop the (x - a) offset and become plain powers of x. Most standard functions you'll work with in calculus, physics, and machine learning have clean, well-known Maclaurin expansions as a result.

Taylor and Maclaurin series comparison

To wrap up: use a Taylor series when you need to approximate a function near a specific point other than zero. Use a Maclaurin series when zero is your starting point - which it often is.
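Here's a quick sketch of that trade-off in Python, using eˣ as the example (the function and the center a = 2 are my choices for illustration, not from any particular library): near x = 2, a four-term Taylor series centered at a = 2 beats a four-term Maclaurin series.

```python
from math import exp, factorial

def taylor_exp(x, a, n_terms):
    # Taylor series of e^x centered at a: every derivative of e^x is e^x,
    # so the nth coefficient is e^a / n!
    return sum(exp(a) * (x - a)**n / factorial(n) for n in range(n_terms))

def maclaurin_exp(x, n_terms):
    # Same series with a = 0: the coefficients collapse to 1 / n!
    return sum(x**n / factorial(n) for n in range(n_terms))

x = 2.1
exact = exp(x)
err_taylor = abs(taylor_exp(x, 2.0, 4) - exact)   # centered near x
err_maclaurin = abs(maclaurin_exp(x, 4) - exact)  # centered at zero
print(err_taylor, err_maclaurin)
```

With the same number of terms, the series centered closer to the evaluation point wins by several orders of magnitude.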

Maclaurin Series Formula

The Maclaurin series formula expresses any function f(x) as an infinite sum:

f(x) = Σₙ₌₀^∞ [f⁽ⁿ⁾(0) / n!] · xⁿ

Expanded, it looks like this:

f(x) = f(0) + f'(0)x + (f''(0)/2!)x² + (f'''(0)/3!)x³ + …

Each term has three parts:

  • f⁽ⁿ⁾(0) - the nth derivative of f, evaluated at zero. This shows how the function behaves at that point

  • n! - the factorial of n, which scales each term down so the series stays well-behaved as n grows

  • xⁿ - the nth power of x, which determines how far from zero each term reaches

The first term f(0) sets the polynomial to the function's value at zero. Each following term adds a correction - adjusting the slope, the curvature, and so on - until the polynomial matches the original function as closely as you need.

In a nutshell, the more terms you include, the better the approximation.

How a Maclaurin Series Works

Building a Maclaurin series comes down to one repeated action: evaluate derivatives at zero, then stack the results into a polynomial.

Here's how it works, step by step.

  1. Evaluate the function at zero: Plug x = 0 into f(x). This gives you the first term - the constant that sets the polynomial's starting value
  2. Take the derivatives: Compute f'(x), f''(x), f'''(x), and so on. At each step, evaluate the result at zero. Each value tells you something about the function's behavior - its slope, its curvature, how fast the curvature changes
  3. Build the polynomial: Take each derivative value, divide it by the corresponding factorial, and multiply by the matching power of x

Now add all the terms together:

f(x) ≈ f(0) + f'(0)x + (f''(0)/2!)x² + (f'''(0)/3!)x³ + …

Each term improves the approximation. The first term gets the value. The second gets the slope. The third gets the curvature. And so on.

You stop this process when the approximation is close enough for your needs - or keep going for more precision.
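Here's a minimal sketch of those three steps in Python, using eˣ as the example since every one of its derivatives evaluated at zero equals 1:

```python
from math import exp, factorial

# Steps 1 and 2: for f(x) = e^x, every derivative is e^x again,
# so each derivative evaluated at zero is simply 1.
derivs_at_zero = [1] * 8

# Step 3: divide each derivative value by n! and attach x^n,
# then sum the terms into one polynomial.
def maclaurin_poly(x, derivs):
    return sum(d / factorial(n) * x**n for n, d in enumerate(derivs))

print(maclaurin_poly(1.0, derivs_at_zero), exp(1.0))
```

Eight terms already match e = 2.71828… to four decimal places; swapping in a different list of derivative values gives you the series for a different function.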

Common Maclaurin Series Expansions

A handful of functions come up so often that their Maclaurin expansions are worth memorizing. Here are the four you'll see most.

eˣ

The exponential function is the simplest case - every derivative of eˣ is still eˣ, which means every derivative evaluated at zero equals 1.

eˣ = 1 + x + x²/2! + x³/3! + x⁴/4! + …

The coefficients are just 1/n!. The series converges for all values of x, which makes it one of the most useful expansions in practice.

sin(x)

The sine function produces a series with only odd powers of x, and the signs alternate between positive and negative.

sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + …

The even-order derivatives of sin(x) at zero are all zero, so those terms drop out. What's left is odd powers, factorial denominators, and alternating signs. Like eˣ, this series converges for all x.

cos(x)

The cosine expansion is the mirror image of sine - only even powers of x appear, with the same alternating sign pattern.

cos(x) = 1 − x²/2! + x⁴/4! − x⁶/6! + …

This makes sense since cos(x) is the derivative of sin(x), and you can get this series by differentiating the sin(x) expansion term by term. The odd-power terms disappear for the same reason even-power terms disappear in sine - the derivatives at zero cancel them out. It converges for all x.

1 / (1 − x)

This one has the simplest pattern of the four: every coefficient is just 1, with no factorials and no alternating signs.

1/(1 − x) = 1 + x + x² + x³ + …

It's a geometric series, which is why the pattern looks so clean. But unlike the three functions above, this series only converges when |x| < 1. If you set x outside that range, the terms grow without bound instead of shrinking toward zero.
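A short sketch makes that convergence boundary concrete - the partial sums settle down for |x| < 1 and blow up outside it (the specific x values below are just illustrative):

```python
def geometric_partial_sum(x, n_terms):
    # Partial sum of 1 + x + x^2 + ..., which should approach 1 / (1 - x)
    # only when |x| < 1
    return sum(x**n for n in range(n_terms))

inside = geometric_partial_sum(0.5, 30)   # |x| < 1: converges toward 2
outside = geometric_partial_sum(1.5, 30)  # |x| > 1: terms keep growing
print(inside, 1 / (1 - 0.5))
print(outside)
```

Thirty terms at x = 0.5 land within a billionth of the exact value 2, while at x = 1.5 the same thirty terms have already grown past a hundred thousand.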

Finally, for the visual learners, here’s a chart comparison of all four series expansions with multiple terms:

Common Maclaurin series

Approximating Functions Using Maclaurin Series

A Maclaurin series rarely needs all its infinite terms to be useful. In practice, you take a partial sum - the first few terms - and use that as your approximation.

The more terms you include, the closer the partial sum tracks the original function. If you cut it at two terms, you get a rough fit near zero. When you add a couple more, the approximation holds further out. Each new term corrects what the previous ones missed.

Take sin(x) as a concrete example. The full series is:

sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + …

Let's approximate sin(0.3) using partial sums and see how each one compares to the exact value.

  • 1 term: 0.3 - error of ~0.0045

  • 2 terms: 0.3 - (0.3³/6) = 0.2955 - error of ~0.0000202

  • 3 terms: adds 0.3⁵/120, giving 0.2955203 - error of ~0.000000043

Three terms get you to about seven decimal places of accuracy. In most cases, you don't need to go further than that.

Here's that same idea in Python:

import numpy as np
from math import factorial

def maclaurin_sin(x, n_terms):
    # Partial sum of the sin(x) Maclaurin series: odd powers,
    # alternating signs, factorial denominators
    return sum(((-1)**n * x**(2*n+1)) / factorial(2*n+1) for n in range(n_terms))

x_val = 0.3

print(f"Approximating sin({x_val}):")
print(f"  Exact value : {np.sin(x_val):.10f}")
for n in [1, 2, 3, 4]:
    approx = maclaurin_sin(x_val, n)
    error = abs(np.sin(x_val) - approx)
    print(f"  {n} term(s)   : {approx:.10f}  |  error: {error:.2e}")

Running this prints the partial sum values and errors at x = 0.3:

Python example of sin(x) Maclaurin approximation

You can also inspect this visually:

Chart of sin(x) Maclaurin approximation

You can see just how well each approximation tracks the sin(x) function.

Convergence of Maclaurin Series

A Maclaurin series doesn't always work for every value of x. For some functions, the series converges to the correct value only within a specific range around zero. Outside that range, the partial sums grow without bound instead.

This range is called the radius of convergence. It tells you how far from zero the series stays reliable.

The behavior varies depending on the function:

  • eˣ, sin(x), cos(x) - converge for all values of x. You can plug in any number and the series will give you the right answer

  • 1/(1-x) - only converges when |x| < 1. At x = 1 the function itself blows up, and the series reflects that by failing to converge near that point

Think of the radius of convergence as a circle of trust centered at zero. The series is a valid approximation only inside it.

You don't always need to compute the radius of convergence. For the standard functions, it's a known quantity. But when you're working with a less familiar function, checking convergence before relying on a Maclaurin approximation is a good habit.
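One informal version of that habit is the ratio test: estimate the radius as |aₙ / aₙ₊₁| for large n. The sketch below is a rough numerical check, not a proof - the coefficient lists are built from the two series covered above:

```python
from math import factorial

def estimated_radius(coeffs):
    # Ratio test sketch: R is approximated by |a_n / a_{n+1}| for large n
    return abs(coeffs[-2] / coeffs[-1])

geo = [1.0] * 20                                # coefficients of 1 / (1 - x)
expo = [1.0 / factorial(n) for n in range(20)]  # coefficients of e^x

print(estimated_radius(geo))   # stays at 1: finite radius of convergence
print(estimated_radius(expo))  # keeps growing with n: infinite radius
```

For the geometric series the ratio is exactly 1 at every n, matching |x| < 1; for eˣ the ratio is n + 1, which grows without bound, matching convergence everywhere.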

Why Maclaurin Series Matter

Maclaurin series show up in real computational work across math, physics, and machine learning.

Numerical methods

Computers can't evaluate most functions symbolically. They evaluate polynomials. When a library computes sin(x) or eˣ, it's often using a polynomial approximation - one derived from the function's Maclaurin or Taylor expansion. The series gives you a form that hardware can actually calculate with, fast and without infinite loops.
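As an illustration (not the actual implementation any particular library uses), here's a degree-7 truncation of the sin(x) series evaluated with Horner's scheme - the nested form that keeps the arithmetic down to a handful of multiplies and adds:

```python
import math

def sin_poly(x):
    # Degree-7 truncation of the sin(x) Maclaurin series, evaluated as a
    # nested polynomial in x^2. Production math libraries use this shape,
    # though they tune the coefficients beyond the raw series values.
    x2 = x * x
    return x * (1.0 + x2 * (-1.0/6.0 + x2 * (1.0/120.0 - x2 / 5040.0)))

for x in (0.1, 0.5, 1.0):
    print(x, abs(sin_poly(x) - math.sin(x)))
```

Even at x = 1, four series terms keep the error below 10⁻⁵, and it shrinks rapidly closer to zero.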

Physics approximations

Physics uses Maclaurin series whenever an exact solution is too complex to work with. The most common example is the small-angle approximation: for small values of θ, sin(θ) ≈ θ. That's just the first term of the sin(x) Maclaurin series. It simplifies pendulum equations, optics calculations, and wave models - turning nonlinear problems into linear ones that are actually solvable.
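You can check how good that first-term truncation is in a few lines of Python - the relative error grows roughly like θ²/6, the size of the first dropped term:

```python
import math

# Small-angle approximation: sin(theta) is replaced by theta itself,
# i.e., the first term of the Maclaurin series.
for theta in (0.01, 0.1, 0.5):
    rel_error = abs(theta - math.sin(theta)) / math.sin(theta)
    print(f"theta = {theta}: relative error {rel_error:.2e}")
```

At θ = 0.1 radians (about 6 degrees) the approximation is already accurate to better than 0.2%, which is why it's the default move in pendulum and optics problems.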

Machine learning and optimization

In machine learning, Taylor and Maclaurin expansions are behind a lot of the math you interact with daily. Gradient descent uses first-order approximations of the loss function to decide which direction to step. Second-order methods like Newton's method use the curvature term. When researchers analyze how a model's loss surface behaves locally, they're often thinking in terms of Taylor expansions around a point.
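A toy example of that first-order idea (the quadratic loss and learning rate below are made up for illustration): each update trusts the linear Taylor model f(x + d) ≈ f(x) + f'(x)·d around the current point and steps against the gradient.

```python
def grad_descent(f_grad, x0, lr=0.1, steps=100):
    # Each step follows the first-order Taylor approximation of the loss,
    # moving a small distance against the gradient.
    x = x0
    for _ in range(steps):
        x = x - lr * f_grad(x)
    return x

# Hypothetical loss f(x) = (x - 3)^2 with gradient 2 * (x - 3); minimum at x = 3.
x_min = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
print(x_min)
```

The small learning rate is exactly the condition that keeps the linear approximation honest: step too far and the Taylor model no longer describes the loss at the new point.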

The Maclaurin series is also how activation functions like sigmoid and tanh get approximated in theoretical analysis. Expanding them as polynomials makes it easier to reason about gradients and saturation behavior.
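For instance, the sigmoid's expansion starts σ(x) = 1/2 + x/4 − x³/48 + …, and a quick check shows the cubic truncation tracks the exact function closely near zero:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_maclaurin(x):
    # First three nonzero terms of the sigmoid's Maclaurin series:
    # sigma(x) = 1/2 + x/4 - x^3/48 + ...
    return 0.5 + x / 4.0 - x**3 / 48.0

for x in (0.1, 0.5, 1.0):
    print(x, abs(sigmoid(x) - sigmoid_maclaurin(x)))
```

The leading x/4 term is also why the sigmoid's gradient near zero is about 1/4 - the kind of fact these polynomial views make easy to read off.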

Conclusion

A Maclaurin series does one thing: it approximates a function as a polynomial centered at zero. That's a simple idea with a long reach.

From numerical computing to physics to machine learning, the pattern is always the same: take a complex function, replace it with a polynomial that's close enough, and get on with the actual problem. The math behind gradient descent, small-angle approximations, and built-in library functions all go back to this same core idea.

The expansions for eˣ, sin(x), cos(x), and 1/(1-x) are worth remembering. They come up often enough that recognizing them on sight saves real time, especially if you’re reading research papers.

Author
Dario Radečić
Senior Data Scientist based in Croatia. Top Tech Writer with over 700 articles published, generating more than 10M views. Book Author of Machine Learning Automation with TPOT.

Maclaurin Series FAQs

What is a Maclaurin series in simple terms?

A Maclaurin series represents a function as an infinite sum of polynomial terms, each built from the function's derivatives evaluated at zero. The more terms you include, the closer the polynomial gets to the original function. It's a way of swapping something complex for something a computer - or a human - can actually compute with.

What's the difference between a Maclaurin series and a Taylor series?

A Taylor series approximates a function as a polynomial centered at any point a. A Maclaurin series is just a Taylor series where that point is fixed at zero.

Where are Maclaurin series used in practice?

Maclaurin series show up in numerical computing, physics, and machine learning. Programming libraries use polynomial approximations to evaluate functions like sin(x) and eˣ efficiently. In ML, Taylor expansions are behind optimization methods like gradient descent and Newton's method.

What does the radius of convergence mean for a Maclaurin series?

The radius of convergence defines how far from zero a Maclaurin series stays accurate. Inside that range, the partial sums close in on the exact function. Outside it, the terms grow instead of shrinking, and the approximation breaks down. Functions like eˣ and sin(x) converge everywhere, but others - like 1/(1-x) - only converge within a limited range.

How many terms do you need for a good Maclaurin approximation?

It depends on how much precision you need and how far from zero you're evaluating the function. Close to zero, just a couple of terms often get you within a small margin of error. Further out, you'll need more terms to maintain accuracy - and if you're outside the radius of convergence, no number of terms will give you a correct result.
