Why Your Python Code Is Slow And How To Optimize It

You’ve learned Python. You can write functions, use loops, and build small projects. Maybe you’re analyzing data or automating tasks at work. You’ve stopped second-guessing every line of code and started thinking in a more “Pythonic” way. You finally feel worthy of the title programmer.

But then it happens.

You write a script that feels smooth and elegant. No errors, no warnings, no issues. That is, until you run it on real data—and it grinds to a halt.

When Code Works—But Doesn’t Scale

As beginner programmers, we usually test our code with small inputs. Let’s say you’re writing a script to check for duplicate charges in your bank account. A common early solution is a nested for loop that compares each transaction to every other transaction by vendor, date, and amount.
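As a concrete sketch, the nested-loop version might look like this (the transaction data and field layout are illustrative, not from a real account):

```python
# Brute force duplicate check: compare every transaction to every other one.
# Each transaction is a (vendor, date, amount) tuple; the data is made up.
transactions = [
    ("CoffeeShop", "2024-03-01", 4.50),
    ("Grocery", "2024-03-02", 82.10),
    ("CoffeeShop", "2024-03-01", 4.50),  # duplicate charge
]

duplicates = []
for i in range(len(transactions)):
    for j in range(len(transactions)):
        # Skip comparing a transaction to itself, then check all three fields.
        if i != j and transactions[i] == transactions[j]:
            duplicates.append(transactions[i])

# With n transactions, the two loops perform n * n comparisons.
```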

This brute force approach works. But it’s inefficient.

Brute force solutions check every possible combination until the goal is reached, and they tend to be very inefficient.

With 10 transactions, your program performs 100 comparisons. With 20, it jumps to 400. At 1,000? Now you're looking at a million comparisons. Your program hasn’t broken—your approach just doesn’t scale.
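You can see that quadratic growth just by multiplying the loop counts (a quick illustrative check, not a benchmark):

```python
# Each of n transactions is compared against all n transactions: n * n checks.
for n in (10, 20, 1000):
    print(f"{n} transactions -> {n * n} comparisons")
```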

This is a pivotal moment in every programmer’s journey: realizing that correct code isn’t enough. It has to be efficient.

Enter: Algorithmic Complexity

Most beginner tutorials focus on writing code that works, not code that works well. The inefficiencies are hidden by the speed of modern machines—until they aren’t. Once you’re comfortable with basic syntax, it’s time to learn about scalability and performance.

Algorithmic complexity is typically broken into two parts:

  • Time complexity: How the runtime grows as input size increases.
  • Space complexity: How memory usage grows with input size.

Of the two, time complexity is usually the first wall intermediate programmers hit. When analyzing time complexity, we don't actually need to time our programs. We are not concerned with how long the program takes to run, but rather with how the runtime changes as the input grows.

Time complexity is typically written in Big O notation. You may remember function notation from math class, where f(n) denotes a function of n. Big O notation similarly writes O(n), where n is the input size; you may see other variables used to indicate other factors as well. Some common time complexities are:

  • O(1), constant: runtime stays the same no matter how large the input gets.
  • O(log n), logarithmic: runtime grows slowly, even as the input doubles.
  • O(n), linear: runtime grows in direct proportion to the input size.
  • O(n log n), linearithmic: typical of efficient sorting algorithms.
  • O(n²), quadratic: runtime grows with the square of the input size.
  • O(2ⁿ), exponential: runtime doubles with each additional input element.

Understanding how input size impacts performance is key to writing scalable code.

Back to our bank example: the nested loop approach results in O(n²) time complexity. That’s fine for 10 transactions—but terrible for 10,000.

Scaling Smarter: From Brute Force to Sets

So how do we reduce complexity? We rethink the problem. Do we need to compare every transaction to every other one? Absolutely not. What we really want to know is whether the same vendor, date, and amount appear more than once. While there are many different ways to do this, a set handles this task well. Here's the approach:
  • Start with an empty set.
  • Loop through each transaction.
  • Store each as a tuple (vendor, date, amount).
  • If it’s already in the set, it’s a duplicate.
  • If not, add it to the set.
Sets in Python are highly optimized, with the average lookup being constant, or O(1). And because there’s only one loop, the total time complexity drops to O(n). This allows the program to be much more scalable.
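Putting those steps together, a set-based version might look like this (the record layout and the `find_duplicates` name are illustrative):

```python
def find_duplicates(transactions):
    """Return duplicate (vendor, date, amount) records using a set.

    Average set membership tests are O(1), so one pass over the data
    gives O(n) time, at the cost of O(n) extra space for the set.
    """
    seen = set()
    duplicates = []
    for record in transactions:
        key = (record["vendor"], record["date"], record["amount"])
        if key in seen:  # O(1) average-case lookup
            duplicates.append(key)
        else:
            seen.add(key)
    return duplicates

transactions = [
    {"vendor": "CoffeeShop", "date": "2024-03-01", "amount": 4.50},
    {"vendor": "Grocery", "date": "2024-03-02", "amount": 82.10},
    {"vendor": "CoffeeShop", "date": "2024-03-01", "amount": 4.50},
]
print(find_duplicates(transactions))  # flags the repeated coffee charge
```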

The Tradeoff: Time vs. Space

Improving time performance often comes with a tradeoff: increased memory use.

Our nested loop didn’t store any intermediate data, so its space complexity was O(1)—constant. In contrast, our set-based approach stores a unique record for each transaction, so space complexity becomes O(n)—linear.

That might sound like a step backward. But on modern hardware, linear memory usage is rarely a concern. The performance gain in time complexity is far more valuable.

Just be cautious of anything that grows faster than linear in space—quadratic or exponential memory usage can quickly overwhelm any system.

Build Code That Scales

By stepping back and thinking about complexity, you can write code that’s not only correct, but also scalable. That’s what separates scripts that break under pressure from ones that power real-world applications.

So yes, celebrate when your Python code works. But then ask: will it still work when the input grows?

That’s the mindset that turns a programmer into a software engineer.

Want to Go Deeper?

If you found this helpful and want to truly build the foundation for scalable, high-performance code, check out my book: Data Structures and Algorithms Essentials You Always Wanted to Know.

It’s written with clarity for intermediate programmers and packed with practical examples, real-world use cases, and the kind of insights that unlock better problem-solving. Whether you're preparing for interviews or simply ready to level up your coding game, this book will help you understand how and why to write code that performs under pressure.

This blog is written by Shawn Peters, author of Data Structures and Algorithms Essentials You Always Wanted to Know.

Galley cover of Data Structures and Algorithms Essentials You Always Wanted to Know by Vibrant Publishers.
