Returning to the simplest complicated problem: a deep look at Fibonacci and two ways of writing it in Rust and Python
Why are we still looking at Fibonacci in 2025?
Some problems repeat so much in programming that they turn into a kind of mirror—a mirror that reflects our skill, patience, and even our philosophical view of structure. The Fibonacci sequence is one of those mirrors. A simple series of numbers: 0, 1, 1, 2, 3, 5, 8, 13…
Simple, but not meaningless. Fibonacci isn’t really an exercise; it’s a collision point between simplicity and complexity, between recursion and optimization, between raw power and structural intelligence.
If we’re being honest, nobody uses the naive recursive approach to compute the 150th Fibonacci number. But every one of us writes it once, just to face the true shape of time complexity—where the machine doesn’t stay quiet and the CPU starts groaning under the load.
And that experience is exactly what pushes us toward smarter solutions. But before that, we have to pass through the beginner stage: the method that works, but is designed wrong.
The simple path: raw, beautiful recursion… but slow
The naive Fibonacci function is like a poem. Two lines of if, and then a recursive jump.
Strangely simple, strangely human, and strangely impractical.
In Rust it looks like this:
fn fib(num: i32) -> i128 {
    if num <= 0 {
        return 0;
    } else if num == 1 {
        return 1;
    }
    fib(num - 1) + fib(num - 2)
}

fn main() {
    // The naive version cannot realistically reach n = 150;
    // even 40 already takes a visible moment.
    let num = 40;
    println!("fib number of {} is {}", num, fib(num));
}
And in Python it’s just as raw:
def fib(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

# n = 150 would never finish here; even 35 takes several seconds in CPython.
print(fib(35))
This version is slow. Not normal slow: exponentially slow, the roughly O(2ⁿ) growth we fear in interviews (the true base is the golden ratio, about 1.618, but the shape is the same).
Why?
Because to compute fib(150), it solves the same subproblems again and again: fib(148) is computed twice, fib(147) three times, and the repeat counts grow like the Fibonacci numbers themselves, so the small subproblems are recomputed an astronomical number of times.
It has no memory, no long-term awareness.
Every call starts from zero — like someone who wakes up every morning with no memory of yesterday.
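You can watch this amnesia directly by counting calls. A quick Python sketch (the `calls` counter is an addition for illustration, not part of the original code):

```python
calls = 0  # counts how many times fib is invoked

def fib(n):
    global calls
    calls += 1
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(25), calls)  # 75025 computed through 242785 calls
```

For n = 25 the function is entered 242,785 times to produce a five-digit answer, and the call count itself grows like a Fibonacci number as n increases.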
But this version has one important advantage: It reveals the essence of the problem. We learn why recursion can be dangerous, why memoization exists, and why programming is not just “writing code.”
To understand the right solution, we must fully experience the wrong one.
The hard path: memoization and turning a simple idea into a real algorithm
A professional programmer always has one extra thing: memory. Not mental memory—memory in the sense of never letting the computer repeat unnecessary work.
This is where memoization appears. Its idea is simple: “When you compute a result once, keep it.”
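In Python that idea can be packaged as a small decorator. This is a generic sketch, not code from the article, and the name `memoize` is chosen here for illustration:

```python
def memoize(func):
    cache = {}  # results computed so far, keyed by argument

    def wrapper(n):
        if n not in cache:
            cache[n] = func(n)  # compute once...
        return cache[n]         # ...reuse forever
    return wrapper

@memoize
def fib(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(150))  # answers instantly instead of running for ages
```

The recursive function itself stays two lines of if plus a recursive jump; the memory lives entirely in the wrapper.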
In the optimized Rust version, we use a HashMap to store previous results:
use std::collections::HashMap;

fn fib(num: i32) -> i128 {
    if num <= 0 {
        return 0;
    } else if num == 1 {
        return 1;
    }
    let mut m = HashMap::new();
    rec_fib(num - 1, &mut m) + rec_fib(num - 2, &mut m)
}

fn rec_fib(num: i32, memo: &mut HashMap<i32, i128>) -> i128 {
    if num <= 0 {
        return 0;
    } else if num == 1 {
        return 1;
    }
    if let Some(&val) = memo.get(&num) {
        return val;
    }
    let val = rec_fib(num - 1, memo) + rec_fib(num - 2, memo);
    memo.insert(num, val);
    val
}

fn main() {
    let num = 150;
    println!("fib number of {} is {}", num, fib(num));
}
And if you really want everything in a single function, pass the memo in as a parameter instead of hiding it behind a helper:

use std::collections::HashMap;

fn fib(num: i32, memo: &mut HashMap<i32, i128>) -> i128 {
    if num <= 0 {
        return 0;
    } else if num == 1 {
        return 1;
    }
    if let Some(&val) = memo.get(&num) {
        return val;
    }
    let val = fib(num - 1, memo) + fib(num - 2, memo);
    memo.insert(num, val);
    val
}

fn main() {
    let num = 150;
    println!("fib number of {} is {}", num, fib(num, &mut HashMap::new()));
}
Python has the same pattern:
memo = {}

def fib(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    if n in memo:
        return memo[n]
    value = fib(n - 1) + fib(n - 2)
    memo[n] = value
    return value

print(fib(150))
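For the record, Python's standard library ships this exact pattern: `functools.lru_cache` replaces the handwritten dictionary with a one-line decorator.

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache, equivalent to the manual memo dict
def fib(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(150))
```

The behavior is the same as the manual version; the cache bookkeeping is simply handled by the standard library.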
Here something important happens: The time complexity drops from O(2ⁿ) to O(n). It means you move from a path full of pointless repetition to a direct, intentional path.
This is the point where recursion shifts from “dangerous” to “beautiful.”
Programmers figured out long ago that recursion without memoization is like writing a beautiful sentence in the sand: the next wave forces you to write it again.
When memoization is added, the problem finally makes sense: now memory is on your side; now the system finally understands it shouldn’t forget its past.
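Taken one step further, the same "never forget the past" idea collapses into a loop that remembers only the last two values. This bottom-up variant is not from the article, but it shows where memoization naturally leads: O(n) time, O(1) space, and no recursion at all.

```python
def fib(n):
    a, b = 0, 1  # fib(0) and fib(1)
    for _ in range(max(n, 0)):
        a, b = b, a + b  # slide the two-value window one step forward
    return a

print(fib(150))
```

Here memory has shrunk from a whole dictionary to exactly two variables, because each Fibonacci number depends only on its two predecessors.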
Final words — an algorithm is more than code
On the surface, Fibonacci is a simple exercise. But if you look a bit deeper, you’ll see how a small problem becomes an intersection of ideas like:
- exponential growth
- memory
- recursion
- structure
- and even the relationship between humans and complexity
A tiny algorithm is a tiny world. And maybe this is why Fibonacci is so timeless: it starts simple, but forces you to think about how you think.
Whether you burn all your CPU power on the naive method, or shorten the path a thousandfold with a bit of memory… in both cases you’re practicing the same essence of programming: choice.
And Fibonacci is just an excuse to practice that choice.