The Magic of Dynamic Programming: Stop Doing the Same Work Twice

Picture this: You write a function that works perfectly. The logic is clean, the code is elegant. You test it with small inputs, and everything runs instantly. Then you try something bigger and… suddenly you're waiting. And waiting. What happened?

Welcome to the sneaky world of exponential time complexity, where small inputs behave like angels until they suddenly morph into chaos. To really see what's going on under the hood, let's look at a tiny, innocent-looking puzzle (the staircase problem) that starts off manageable and quickly becomes a performance nightmare if we don't use dynamic programming.

The Staircase Problem

Imagine you're at the bottom of a staircase. You can climb either one step or two steps at a time. How many different ways can you reach the top?

For a 3-step staircase, the possible paths are:

  • 1 + 1 + 1

  • 1 + 2

  • 2 + 1

That's 3 ways! Not too bad.

Now, here’s a clever way to think about it: work backwards.

On a 5-step staircase, any path that ends on step 5 must come from either step 4 (one step below) or step 3 (two steps below). That means the number of ways to reach step 5 is the number of ways to reach step 4 plus the number of ways to reach step 3.

In other words: ways(5) = ways(4) + ways(3). And the same logic works for any step: ways(n) = ways(n-1) + ways(n-2).

This naturally leads us to write a recursive solution, meaning we define the solution to climb(n) in terms of smaller versions of the same problem:
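
A minimal sketch in Python, assuming the base cases climb(0) = climb(1) = 1 (there's one way to stand still and one way to take a single step):

    def climb(n):
        if n <= 1:
            return 1  # base cases: a 0- or 1-step staircase has exactly one path
        return climb(n - 1) + climb(n - 2)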

Beautiful, right? Just 4 lines of code that perfectly capture the logic. But beauty can be deceptive: as n grows, this function quietly explodes in runtime.

The Hidden Cost: Why the Recursive Solution Has Exponential Time Complexity

Look at the diagram below showing what happens when we call climb(5). See how the tree branches out? Every node represents a function call.
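
In text form, the call tree for climb(5) looks like this (each node is one function call; climb(1) and climb(0) are the base cases):

    climb(5)
    ├── climb(4)
    │   ├── climb(3)
    │   │   ├── climb(2)
    │   │   │   ├── climb(1)
    │   │   │   └── climb(0)
    │   │   └── climb(1)
    │   └── climb(2)
    │       ├── climb(1)
    │       └── climb(0)
    └── climb(3)
        ├── climb(2)
        │   ├── climb(1)
        │   └── climb(0)
        └── climb(1)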

Notice anything suspicious? Look closely at the numbers:

  • climb(3) appears twice

  • climb(2) appears three times

  • climb(1) appears five times

  • climb(0) appears three times

We're doing the exact same calculations over and over! The diagram shows just 5 steps—imagine what a tree for 50 or 500 steps would look like.

The time complexity is O(2^n): exponential growth. If you're not familiar with Big O notation, this is not a direct measurement of the time the function takes, but rather of how that time scales with input size. It means a 10-step staircase takes about 32 times longer than a 5-step staircase (2^10 / 2^5 = 2^5 = 32), even though it's only twice as tall!

This is why your code seems fine at first, but becomes impossibly slow with larger inputs. Try running climb(40), and you might wait longer than it takes to actually climb a real staircase.

Enter Dynamic Programming

The problem isn't our logic—it's that we keep solving the same subproblems over and over. What if we could remember our answers? That's exactly what dynamic programming does. It's just a fancy term for "save your work so you don't have to do it again." But knowing the idea is one thing—putting it into practice is where the magic happens. 

There are two main approaches we can use to solve this problem with dynamic programming:

  1. Memoization 

  2. Tabulation 

Approach 1: Memoization (Top-Down)

We’ll start with our original recursive function and add a small upgrade: a dictionary (called memo) that remembers answers we’ve already calculated.
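
One way to wire this up in Python (a sketch; passing memo along as a parameter is just one of several common styles):

    def climb(n, memo=None):
        # Create the memo dictionary on the outermost call.
        if memo is None:
            memo = {}
        if n <= 1:
            return 1
        # Already solved this subproblem? Just look it up.
        if n in memo:
            return memo[n]
        # Otherwise solve it once and remember the answer.
        memo[n] = climb(n - 1, memo) + climb(n - 2, memo)
        return memo[n]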

Now, when we calculate climb(3), we save it. The next time we need it, we just look it up instead of recalculating.

Look at the diagram below; see how much simpler the tree is?
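
Sketched in text form, only the left-most chain does real work; every repeated call is answered straight from the memo:

    climb(5)
    ├── climb(4)
    │   ├── climb(3)
    │   │   ├── climb(2)
    │   │   │   ├── climb(1)   <- base case
    │   │   │   └── climb(0)   <- base case
    │   │   └── climb(1)       <- base case
    │   └── climb(2)           <- memo hit, no recursion
    └── climb(3)               <- memo hit, no recursion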

Instead of the massive branching tree from before, we now have a clean path. Each value from 0 to 5 is calculated exactly once. When the function tries to call climb(3) a second time, it finds the answer already stored in the memo and returns it immediately, without any additional recursion.

The result? Our time complexity drops from O(2^n) to O(n): linear time! Now, a 500-step staircase is just a bigger but manageable problem, not an exponential explosion. In short, memoization gives us an efficient top-down dynamic programming solution.

Approach 2: Tabulation (Bottom-Up)

If recursion feels confusing, there’s another way: build the answer from the bottom up.

Instead of starting at n and breaking it into smaller problems, we start with the smallest cases and build our way up to n.
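
A sketch of the bottom-up version in Python (the list name ways is my own choice; any name works):

    def climb(n):
        if n <= 1:
            return 1
        # ways[i] holds the number of ways to reach step i.
        ways = [0] * (n + 1)
        ways[0] = 1  # standing at the bottom: one way (do nothing)
        ways[1] = 1  # one way to reach step 1
        # Every step is reachable from one step below or two steps below.
        for i in range(2, n + 1):
            ways[i] = ways[i - 1] + ways[i - 2]
        return ways[n]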

We start at step 0 and work our way up, calculating each step once using the previous two. Same time complexity, but no recursion needed.

One More Trick: Space Optimization

In both dynamic programming versions so far, we use O(n) extra space because we’re storing a whole list of answers for all steps from 0 to n. But stop and think for a second: do we actually need to keep every one of those values?

Look closely at the code. At each step, we only use the previous two values. Once we're past them, we never look back. 
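
So we can throw the list away and keep just two rolling variables. A sketch (the names prev_two and prev_one are my own):

    def climb(n):
        if n <= 1:
            return 1
        prev_two, prev_one = 1, 1  # ways(0) and ways(1)
        for _ in range(2, n + 1):
            # Slide the two-value window up one step.
            prev_two, prev_one = prev_one, prev_two + prev_one
        return prev_one  # ways(n)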

Now we're down to O(1) space complexity: constant space. Whether you have 5 steps or 5 million, you're using the same tiny amount of memory.

The Big Picture

Dynamic programming isn’t about memorizing formulas or mastering a bag of fancy tricks. It’s about noticing when you’re solving the same problem again and again—and being clever enough to save your results so you only do the hard work once.

Next time you write a recursive solution that feels elegant but runs slowly, ask yourself: "Am I doing the same work twice?" If the answer is yes, you've just found a perfect opportunity for dynamic programming.

The staircase problem might feel artificial at first, but the underlying pattern is everywhere: calculating Fibonacci numbers, finding shortest paths, optimizing decisions, and even in machine learning algorithms. Once you learn to spot the pattern, you'll see it all over computer science.

And that's the real magic: turning code that takes years to run into code that finishes in seconds, simply by being smarter about what you choose to remember.

Dynamic programming is just one of many patterns that keep popping up in coding interviews and real-world systems. If you want a structured path through all the core ideas—arrays, trees, graphs, DP, and more—check out Data Structures and Algorithms Essentials You Always Wanted to Know.

You’ll learn how to:

  • Analyze code with Big O notation

  • Use classic data structures like stacks, queues, linked lists, trees, and graphs

  • Write efficient recursive functions

  • Apply greedy strategies and dynamic programming to real problems

Every topic is grounded in practical, relatable examples—like managing a music library, checking palindromes, and caching results for speed. Plus, there are coding tasks and downloadable code in Java, C++, and JavaScript alongside the Python examples, so you can practice in the language you’re most comfortable with.

If you want to write code that not only works but also scales, this book will help you get there—step by step.

This blog is written by Shawn Peters, author of Data Structures and Algorithms Essentials You Always Wanted to Know.
