Error Management in async Rust — from practical experience to architectural depth

By leen 03 Dec 2025 · 13:40

The world that won’t wait: why we even need to run multiple requests at the same time

The world no longer waits. Services operate so fast that if we pause even a little, the result becomes useless. In the architecture of modern systems, concurrency isn’t a luxury — it’s a necessity. And any programming language that claims to survive in the real world must offer a reliable, predictable, and safe way to execute multiple tasks in parallel.

Rust takes this seriously: instead of allowing programmers to “do anything,” it forces them to “do things the right way.”

In the video below, we focused on a single request: one URL, one response, one error, one context — a world that’s easy to understand.

But the moment we try to check three URLs at once… or assign a different timeout to each… or ensure that the main program doesn’t crash if one of the tasks panics… we suddenly enter a layer of reality that many tutorials conveniently ignore.

This is where asynchronous programming stops being a “trick” and becomes an architectural tool. And that’s the goal here: a serious encounter with reality.

We build a standalone project, async-joinset-timeout, to see how to fire off three HTTP requests simultaneously, apply timeouts, and classify the results cleanly and transparently.

This is exactly where the difference between a beginner programmer and a software engineer becomes obvious.


The real world has no place for “one request”: building the project and analyzing its core

Here’s the YouTube video for this small project:

We start simple: a fresh project and a few dependencies.

cargo new async-joinset-timeout
cd async-joinset-timeout

And here’s the Cargo.toml:

[package]
name = "async-joinset-timeout"
version = "0.1.0"
edition = "2021"

[dependencies]
anyhow = "1"
reqwest = { version = "0.11", default-features = false, features = ["rustls-tls"] }
tokio = { version = "1", features = ["macros", "rt-multi-thread", "time"] }

Three libraries, three pillars of the program:

  • tokio: the beating heart of the async runtime
  • reqwest: HTTP requests using rustls
  • anyhow: readable, human-friendly error handling (a quick sketch of its context pattern follows just below)
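
Since anyhow does a lot of quiet work in the code we are about to write, here is a tiny, self-contained sketch first. The read_config function and the file name are made up purely for illustration; the point is the pattern the project relies on: .context() / .with_context() attach a human-readable layer on top of a lower-level error, and the alternate {:#} format prints the whole chain in one line.

use anyhow::{Context, Result};

// Hypothetical helper: read a config file, wrapping the io::Error with context.
fn read_config(path: &str) -> Result<String> {
    std::fs::read_to_string(path).with_context(|| format!("failed to read {path}"))
}

fn main() {
    if let Err(err) = read_config("definitely-missing.toml") {
        // {:#} prints the context plus its cause, e.g.
        // "failed to read definitely-missing.toml: No such file or directory (os error 2)"
        eprintln!("{err:#}");
    }
}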

But the real action happens in main.rs: we create a JoinSet, run three requests in parallel, assign each a timeout, and classify results into four outcomes:

  • success
  • timeout
  • network error
  • panic

Here is the entire code:

use anyhow::{Context, Result};
use reqwest::Client;
use tokio::{
    task::JoinSet,
    time::{timeout, Duration},
};

#[tokio::main]
async fn main() -> Result<()> {
    println!("🚀 Async JoinSet + Timeout demo");

    let urls = vec![
        "https://httpbin.org/status/200".to_string(),
        "https://httpbin.org/delay/5".to_string(),
        "https://httpbin.org/status/503".to_string(),
    ];

    let client = Client::builder()
        .user_agent("drunkleen-joinset-demo/0.1")
        .build()
        .context("failed to build HTTP client")?;

    // A JoinSet owns the spawned tasks and lets us await their results in completion order.
    let mut set = JoinSet::new();

    for url in urls {
        let client = client.clone();
        let url_clone = url.clone();

        // One task per URL; each request gets its own three-second budget.
        set.spawn(async move {
            let result = timeout(Duration::from_secs(3), async {
                let response = client
                    .get(&url_clone)
                    .send()
                    .await
                    .with_context(|| format!("request to {} failed", &url_clone))?;

                Ok::<u16, anyhow::Error>(response.status().as_u16())
            })
            .await
            // The first `?` surfaces a timeout, the second surfaces the request error itself.
            .context("request timed out")??;

            Ok::<(String, u16), anyhow::Error>((url_clone, result))
        });
    }

    // Drain the set: join_next() yields Err(JoinError) only if a task panicked (or was
    // cancelled); otherwise we get the task's own Result back.
    while let Some(join_result) = set.join_next().await {
        match join_result {
            Ok(inner_result) => match inner_result {
                Ok((url, status)) => {
                    println!("✅ {url} -> HTTP {status}");
                }
                Err(err) => {
                    eprintln!("❌ worker error:");
                    eprintln!("{err:#}");
                }
            },
            Err(join_err) => {
                eprintln!("💥 task panicked:");
                eprintln!("{join_err:#}");
            }
        }
    }

    println!("✨ Done.");
    Ok(())
}

It might look simple at first, but it combines several critical concepts:

  • parallel task execution
  • per-task timeout
  • structured error management
  • panic isolation (a concrete sketch follows below)
  • predictable behavior under load

Exactly the type of foundation a real back-end engineer must understand — not from reading, but from experience.
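
Panic isolation in particular deserves a concrete look. Below is a minimal, self-contained sketch (separate from the project above, runnable with the same Cargo.toml) showing that when one task panics, only its entry in the JoinSet fails: the other tasks finish, and main itself never crashes. The default panic hook still prints the panic message to stderr, but the process survives.

use tokio::task::JoinSet;

#[tokio::main]
async fn main() {
    let mut set = JoinSet::new();

    // One well-behaved task and one that panics on purpose.
    set.spawn(async {
        println!("worker A finished cleanly");
    });
    set.spawn(async {
        panic!("worker B blew up");
    });

    // The runtime catches the panic and hands it back as a JoinError;
    // the loop and the rest of the program keep running.
    while let Some(result) = set.join_next().await {
        match result {
            Ok(()) => println!("joined a successful task"),
            Err(err) if err.is_panic() => eprintln!("💥 joined a panicked task: {err}"),
            Err(err) => eprintln!("task was cancelled: {err}"),
        }
    }

    println!("✨ main is still alive");
}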


Under the skin of architecture: what “error” really means in async land

The interesting part is that in async systems there’s no such thing as just “one type of error.” Error is a tree:

  • one branch is timeout
  • another is network failure
  • another is a task panic
  • and the last one is how you present the error to a human

Rust doesn’t just “run things concurrently”; it gives meaning to failure.

For example, the second URL (https://httpbin.org/delay/5) takes longer than the three-second budget, so its task resolves to an error and the result loop prints it through the worker-error branch:

❌ worker error:
request timed out: deadline has elapsed

And if a task panics instead of returning, the runtime catches it and hands it back as a JoinError, which the loop reports on its own branch:

💥 task panicked:
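
In real code, one way to make those branches first-class is to give them a type of their own instead of leaving them buried in error strings. The sketch below uses hypothetical names (Outcome, describe) that are not part of the project above. It also hints at a subtlety of the demo: reqwest only turns a non-2xx status into an error if you call error_for_status(), so the 503 URL is actually printed as a ✅ line with HTTP 503; an explicit outcome type is one place to make such distinctions visible.

use std::time::Duration;

// Hypothetical names: one variant per branch of the error tree.
#[derive(Debug)]
enum Outcome {
    Success { url: String, status: u16 },
    TimedOut { url: String, after: Duration },
    NetworkError { url: String, message: String },
    Panicked { url: String },
}

// Presentation is its own branch: how a failure is shown to a human
// stays separate from how it was detected.
fn describe(outcome: &Outcome) -> String {
    match outcome {
        Outcome::Success { url, status } => format!("✅ {url} -> HTTP {status}"),
        Outcome::TimedOut { url, after } => format!("⏱ {url} gave up after {after:?}"),
        Outcome::NetworkError { url, message } => format!("❌ {url} failed: {message}"),
        Outcome::Panicked { url } => format!("💥 task for {url} panicked"),
    }
}

fn main() {
    let slow = Outcome::TimedOut {
        url: "https://httpbin.org/delay/5".to_string(),
        after: Duration::from_secs(3),
    };
    println!("{}", describe(&slow));
}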

This separation is where the system becomes observable. A system that explains its errors instead of hiding them is a system you can trust.

In real projects, an architecture that categorizes errors in a human-readable way is often more valuable than one that is merely fast. Speed matters, but meaning matters more.

This is exactly the philosophy that ties Rust and async together: raw speed + human structure.

And this project is a small-scale version of a huge world — a compressed model of real systems that must operate under pressure and still behave logically.


In the end, this text isn’t an introduction; it’s a mental map for anyone ready to move beyond “writing async code” and into “understanding async.”

The rule is simple:

When running multiple tasks, don’t chase only speed — chase meaning too. And Rust demands exactly that from us.
