Python idioms · 2025-02-20 · 5 min read

Python idioms I reach for daily, part 1: decorators that earn their keep

#python #decorators #idioms #series-python-idioms

Part 1 of a 3-part series on Python idioms I reach for daily in AI / data engineering work.

  1. Part 1 (this post): decorators that earn their keep
  2. Part 2: context managers beyond with open()
  3. Part 3: generators for streaming and composition

Decorators are over-explained at the language level and under-explained at the production level. Most tutorials show you @my_decorator wrapping a function, you nod, and then you don’t actually reach for them in real code. This post covers the five shapes I genuinely use weekly (four core patterns plus a bonus), with the production-grade code I copy and paste.

A brief refresher

A decorator is a function that takes a function and returns a function. The @ syntax is sugar:

@cache
def expensive(n): ...
# is exactly the same as
def expensive(n): ...
expensive = cache(expensive)

That’s it. Once you internalise that, every “decorator factory” or “decorator with arguments” pattern is just a function returning a function returning a function. No magic.
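Spelled out with hypothetical names, a "decorator with arguments" really is just three nested functions, one per layer (a minimal sketch):

```python
from functools import wraps

def repeat(times):                      # layer 1: takes the decorator's arguments
    def decorator(fn):                  # layer 2: takes the function
        @wraps(fn)
        def wrapper(*args, **kwargs):   # layer 3: the replacement callable
            result = None
            for _ in range(times):
                result = fn(*args, **kwargs)
            return result
        return wrapper
    return decorator

calls = []

@repeat(times=3)
def greet(name):
    calls.append(name)
    return f"hello {name}"

assert greet("ada") == "hello ada"
assert len(calls) == 3   # the body really ran three times
```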

Two practical rules that prevent every common bug:

  1. Always use functools.wraps to preserve the wrapped function’s metadata.
  2. Use *args, **kwargs in the wrapper so it works on any signature.

from functools import wraps

def my_decorator(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        # do something before
        result = fn(*args, **kwargs)
        # do something after
        return result
    return wrapper

Memorise this template. The five shapes below are all variations on it.
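Rule 1 is easy to verify for yourself: without wraps, the wrapper's own identity clobbers the wrapped function's metadata (a quick self-contained check):

```python
from functools import wraps

def bare(fn):                      # no @wraps: metadata is lost
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

def careful(fn):                   # with @wraps: metadata survives
    @wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@bare
def f():
    """docstring"""

@careful
def g():
    """docstring"""

assert f.__name__ == "wrapper" and f.__doc__ is None
assert g.__name__ == "g" and g.__doc__ == "docstring"
```

This matters in practice because debuggers, profilers, and documentation tools all read `__name__` and `__doc__`.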

Shape 1: retry on failure

The single most-used decorator in any code that talks to a network. A flaky API, a rate-limited endpoint, a transient database error: rather than wrap every call site in try/except, decorate the function once.

from functools import wraps
import time
import random

def retry(attempts: int = 3, backoff_seconds: float = 1.0,
          exceptions: tuple = (Exception,)):
    """Retry the wrapped function up to `attempts` times with exponential backoff + jitter."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except exceptions as e:
                    last_exc = e
                    if attempt == attempts - 1:
                        raise
                    sleep = backoff_seconds * (2 ** attempt) + random.uniform(0, 0.1)
                    time.sleep(sleep)
            raise last_exc
        return wrapper
    return decorator

Use:

@retry(attempts=5, backoff_seconds=0.5, exceptions=(httpx.HTTPError, asyncio.TimeoutError))
def fetch_user(user_id: int) -> dict:
    return httpx.get(f"https://api/users/{user_id}").json()

Three things this template gets right that most “retry” gists do not:

  1. It re-raises the original exception on the final attempt instead of swallowing it or wrapping it in something generic.
  2. It adds jitter to the exponential backoff, so a fleet of clients retrying at once doesn’t hammer the server in lockstep.
  3. It lets you scope retries to specific exception types, so a genuine bug (a TypeError, say) fails fast instead of being retried.

For an async equivalent, replace time.sleep with await asyncio.sleep. Same shape.
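For illustration, here is the same shape adapted for coroutines, with a hypothetical flaky function to exercise it (the `attempts`/`backoff_seconds` parameters mirror the sync version above):

```python
import asyncio
import random
from functools import wraps

def retry_async(attempts: int = 3, backoff_seconds: float = 1.0,
                exceptions: tuple = (Exception,)):
    """Async twin of `retry`: same loop, but awaits the function and the sleep."""
    def decorator(fn):
        @wraps(fn)
        async def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return await fn(*args, **kwargs)
                except exceptions:
                    if attempt == attempts - 1:
                        raise
                    await asyncio.sleep(backoff_seconds * (2 ** attempt)
                                        + random.uniform(0, 0.1))
        return wrapper
    return decorator

calls = 0

@retry_async(attempts=3, backoff_seconds=0.01)
async def flaky():
    # fails twice, then succeeds: exercises the retry loop
    global calls
    calls += 1
    if calls < 3:
        raise ConnectionError("transient")
    return "ok"

result = asyncio.run(flaky())
assert result == "ok" and calls == 3
```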

Shape 2: timing instrumentation

In production AI/data code, knowing where time is going is half the battle. A timing decorator that logs to your structured logger:

from functools import wraps
import time
import logging

log = logging.getLogger(__name__)

def timed(label: str | None = None):
    """Log how long the wrapped function took, in ms."""
    def decorator(fn):
        actual_label = label or fn.__qualname__
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                ms = (time.perf_counter() - start) * 1000
                log.info("timed", extra={"label": actual_label, "ms": round(ms, 2)})
        return wrapper
    return decorator

Use:

@timed("user_lookup")
def fetch_user(user_id: int) -> dict:
    ...

Three lessons learned the hard way:

  1. Use time.perf_counter, not time.time: wall-clock time can jump under NTP adjustment and has coarser resolution.
  2. Measure in a finally block so failed calls get timed too; slow failures are exactly the ones you want to see.
  3. Log the label and duration as structured extra fields rather than interpolating them into the message, so your log pipeline can aggregate them.

Shape 3: feature-flag gating

You’re rolling out a new behaviour. You want to flip the new path on/off without redeploying, by config. A decorator that swaps the implementation based on a flag:

from functools import wraps

def feature_flag(flag_name: str, fallback_fn):
    """If the named flag is on, run the wrapped function; else run the fallback."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if is_flag_enabled(flag_name):  # your flag backend: config, env var, flag service
                return fn(*args, **kwargs)
            return fallback_fn(*args, **kwargs)
        return wrapper
    return decorator

Use:

def legacy_search(query):
    return old_es_search(query)

@feature_flag("vector_search_v2", fallback_fn=legacy_search)
def search(query):
    return new_pgvector_search(query)

Now search(query) calls the new implementation when the flag is on, the old one when off. You ship the code with the flag default-off, enable for 5% of traffic, watch metrics, ramp up. The call site never changes.
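With a hypothetical dict-backed is_flag_enabled standing in for your real config, the flip looks like this end to end:

```python
from functools import wraps

FLAGS = {"vector_search_v2": False}     # stand-in for your real flag store

def is_flag_enabled(name: str) -> bool:
    return FLAGS.get(name, False)

def feature_flag(flag_name: str, fallback_fn):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if is_flag_enabled(flag_name):
                return fn(*args, **kwargs)
            return fallback_fn(*args, **kwargs)
        return wrapper
    return decorator

def legacy_search(query):
    return f"legacy:{query}"

@feature_flag("vector_search_v2", fallback_fn=legacy_search)
def search(query):
    return f"v2:{query}"

assert search("tea") == "legacy:tea"   # flag off: fallback runs
FLAGS["vector_search_v2"] = True       # flip at runtime, no redeploy
assert search("tea") == "v2:tea"       # flag on: new path runs
```

Note the flag is checked on every call, not at decoration time, which is what makes the runtime flip work.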

Variant: A/B by user-id hash:

from functools import wraps
import hashlib

def feature_ramp(flag_name: str, fallback_fn, ramp_arg: str = "user_id"):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            user_id = kwargs.get(ramp_arg) or args[0]
            ramp_pct = get_ramp_percentage(flag_name)
            # built-in hash() is salted per-process in Python 3, so use a stable
            # hash to keep each user in the same bucket across processes and deploys
            bucket = int(hashlib.sha256(str(user_id).encode()).hexdigest(), 16) % 100
            if bucket < ramp_pct:
                return fn(*args, **kwargs)
            return fallback_fn(*args, **kwargs)
        return wrapper
    return decorator
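The bucketing step deserves a note: built-in hash() is salted per Python process, so a user could flip between the old and new path on every deploy. A stable digest keeps each user in one bucket forever (a small check, with hashlib sha256 as the stand-in for whatever your flag system uses):

```python
import hashlib

def stable_bucket(user_id) -> int:
    """Deterministic 0-99 bucket. Unlike built-in hash(), a hashlib digest
    gives the same answer in every process and on every deploy."""
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    return int(digest, 16) % 100

b = stable_bucket(12345)
assert b == stable_bucket(12345)          # same user, same bucket, every time
assert 0 <= b < 100
buckets = {stable_bucket(i) for i in range(1000)}
assert len(buckets) > 50                  # users spread across many buckets
```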

Shape 4: lightweight memoisation with TTL

functools.lru_cache is the standard answer for “cache this function”. But it has no TTL — entries live forever (or until LRU eviction). For real-time-changing data (current traffic, live prices), you want time-bounded caching.

from functools import wraps
import time

def cached_with_ttl(seconds: float):
    """Cache the wrapped function's output for `seconds`, then re-evaluate."""
    def decorator(fn):
        cache: dict = {}
        @wraps(fn)
        def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            now = time.monotonic()
            if key in cache:
                value, expires_at = cache[key]
                if now < expires_at:
                    return value
            result = fn(*args, **kwargs)
            cache[key] = (result, now + seconds)
            return result
        return wrapper
    return decorator

Use:

@cached_with_ttl(seconds=30)
def get_traffic_state(road_id: str) -> dict:
    return tomtom_api.get_flow(road_id)

Now get_traffic_state hits the upstream at most once per 30 seconds per road id. Reasonable for a UI that polls every few seconds.
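A quick check of the hit/miss behaviour, using a hypothetical call counter in place of a real upstream and a short TTL so the expiry is observable:

```python
import time
from functools import wraps

def cached_with_ttl(seconds: float):
    def decorator(fn):
        cache: dict = {}
        @wraps(fn)
        def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            now = time.monotonic()
            if key in cache:
                value, expires_at = cache[key]
                if now < expires_at:
                    return value            # fresh entry: no upstream call
            result = fn(*args, **kwargs)
            cache[key] = (result, now + seconds)
            return result
        return wrapper
    return decorator

upstream_calls = 0

@cached_with_ttl(seconds=0.2)
def get_state(road_id: str) -> str:
    global upstream_calls
    upstream_calls += 1
    return f"state-{road_id}"

assert get_state("a4") == get_state("a4")  # second call is a cache hit
assert upstream_calls == 1
time.sleep(0.25)                           # wait past the TTL
get_state("a4")                            # entry expired: upstream hit again
assert upstream_calls == 2
```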

Caveats with this template:

  1. Expired entries are never evicted, only overwritten, so the cache grows with every distinct key. Fine for a bounded key space like road ids; not fine for unbounded user input.
  2. It is not thread-safe: two threads can miss simultaneously and both call the upstream.
  3. Arguments must be hashable, and the same call made positionally vs by keyword produces two different cache keys.

For production-grade caching with TTL + LRU + thread-safety, just use cachetools.TTLCache. The decorator above is for code where pulling in a dependency is overkill.

Shape 5 (bonus): context-aware logging

Pass per-request context (request_id, user_id) into a logger automatically without threading it through every function:

from functools import wraps
import logging
from contextvars import ContextVar

current_request_id: ContextVar[str | None] = ContextVar("request_id", default=None)

class ContextFilter(logging.Filter):
    def filter(self, record):
        record.request_id = current_request_id.get()
        return True

def with_request_context(fn):
    @wraps(fn)
    def wrapper(request_id: str, *args, **kwargs):
        token = current_request_id.set(request_id)
        try:
            return fn(*args, **kwargs)
        finally:
            current_request_id.reset(token)
    return wrapper

Use:

@with_request_context
def handle_request(*args, **kwargs):
    log.info("starting work")  # log line includes request_id automatically
    do_thing()                 # any log.info() inside also gets the request_id

Combines contextvars (thread-safe, async-safe) with the decorator pattern. Saves you from passing request_id as the first arg of every function in your codebase.

When NOT to use a decorator

Some patterns get reached for as decorators where they shouldn’t:

  1. Logic that changes the function’s signature or return type: callers can no longer trust the annotations they read at the definition.
  2. One-off behaviour used at a single call site: a plain wrapper call is clearer than a new decorator.
  3. Heavy business logic: decorators are for cross-cutting plumbing, not for the actual work.

Closing

Decorators are a tool, not a goal. Five shapes that earn their keep: retry, timing, feature-flag, TTL-cache, request-context. Each one removes a specific class of repetitive code from every call site. None of them require advanced metaprogramming.

Next post in the series: context managers beyond with open() — how to use __enter__/__exit__ for transactions, sessions, ExitStack, and the async equivalents.