v1.0 · MIT · Zero dependencies

pytrackio

Zero-dependency Python performance tracker. Decorate, time, count — then report. No servers. No config. Just Python.

pip install pytrackio
Setup time: 30s · Dependencies: zero · Python: 3.10+ · Thread safe: yes · License: MIT
Demo

See it in action

Watch pytrackio track a real Python pipeline from zero to report.

Quick start

From zero to metrics in 60 seconds

Copy this, run it, see your first performance report.

hello_pytrackio.py
from pytrackio import track, timer, counter, report

# 1. Decorate any function
@track
def fetch_user(user_id: int) -> dict:
    return {"id": user_id, "name": f"User-{user_id}"}

# 2. Time any block of code
with timer("data_processing"):
    users = [fetch_user(i) for i in range(20)]

# 3. Named counters
counter("api_calls").increment(20)

# 4. See everything
report()
╔══════════════════════════════════════════════════════════════╗
║ pytrackio — Performance Report                uptime: 0.03s  ║
╠══════════════════════════════════════════════════════════════╣
║ Function / Block       Calls    Avg(ms)    Min(ms)   Max(ms) ║
╠══════════════════════════════════════════════════════════════╣
║ fetch_user                20       0.01       0.01      0.04 ║
║ data_processing            1       0.31       0.31      0.31 ║
╠══════════════════════════════════════════════════════════════╣
║ Counters                                                     ║
╠──────────────────────────────────────────────────────────────╣
║ api_calls: 20                                                ║
╚══════════════════════════════════════════════════════════════╝
API reference

Four primitives, infinite visibility

Everything pytrackio exposes — no hidden complexity.

@track (decorator)
Wrap any function to auto-record call count, avg/min/max timing, and error rate. Exceptions always propagate. Supports @track(name="custom") for clean metric names.

timer() (context manager)
Measure any code block. Perfect for DB queries, API calls, file I/O, or any multi-line operation. Nested timers are fully supported.

counter() (named object)
Thread-safe integer counters, created on first use with no declaration needed. Methods: .increment(n), .decrement(n), .reset(), .value.

report() (formatter)
Print a formatted table of all metrics; use report(colour=False) for log files. Also returns the report as a string for custom logging or storage.

get_registry() (accessor)
Direct access to the MetricsRegistry singleton. Iterate all summaries, build alerting pipelines, or call .reset() between test runs.
Examples

Real-world use cases

Complete, runnable examples across five domains.

ecommerce_pipeline.py
from pytrackio import track, timer, counter, report
import time, random

@track
def validate_cart(cart_id: str) -> bool:
    time.sleep(random.uniform(0.01, 0.05))
    return True

@track(name="payment_gateway")
def charge_card(amount: float, token: str) -> dict:
    time.sleep(random.uniform(0.08, 0.20))
    if random.random() < 0.05:  # 5% failure rate
        raise ValueError("Card declined")
    counter("revenue_cents").increment(int(amount * 100))
    return {"status": "captured"}

@track
def send_confirmation_email(email: str) -> None:
    time.sleep(random.uniform(0.02, 0.04))
    counter("emails_sent").increment()

for i in range(100):
    with timer("full_order_pipeline"):
        try:
            validate_cart(f"CART-{i:04d}")
            charge_card(49.99, f"tok_{i}")
            send_confirmation_email(f"customer{i}@example.com")
            counter("orders_completed").increment()
        except ValueError:
            counter("orders_failed").increment()

report()
# error_rate on payment_gateway shows ~5%
# avg_ms on full_order_pipeline = end-to-end latency
ml_pipeline.py
from pytrackio import track, timer, counter, report
import time, random

@track
def load_dataset(path: str) -> list:
    time.sleep(0.12)
    counter("rows_ingested").increment(50_000)
    return list(range(50_000))

@track
def engineer_features(data: list) -> list:
    time.sleep(0.09)  # often the slowest step
    return [x / len(data) for x in data]

@track(name="model_training")
def train_model(features: list, labels: list) -> dict:
    time.sleep(0.45)
    counter("models_trained").increment()
    return {"accuracy": 0.923, "f1_score": 0.918}

with timer("end_to_end_pipeline"):
    raw = load_dataset("/data/train.csv")
    features = engineer_features(raw)
    labels = [i % 2 for i in range(len(features))]  # placeholder labels
    metrics = train_model(features, labels)

report()
# model_training is ~45% of total pipeline time
fastapi_app.py
from fastapi import FastAPI
from pytrackio import track, counter, get_registry

app = FastAPI()

# Use sync helpers — @track doesn't support async def yet
@track
def _get_user_logic(user_id: int) -> dict:
    counter("user_reads").increment()
    return {"id": user_id, "name": f"User-{user_id}"}

@app.get("/users/{user_id}")
async def get_user(user_id: int):
    return _get_user_logic(user_id)

@app.get("/metrics")
def get_metrics():
    # Expose live metrics — add auth before production!
    registry = get_registry()
    return {
        "uptime_seconds": registry.uptime_seconds(),
        "functions": [
            {"name": s.name, "calls": s.calls,
             "avg_ms": round(s.avg_ms, 2),
             "error_rate": round(s.error_rate, 2)}
            for s in registry.all_summaries()
        ]
    }
# uvicorn fastapi_app:app --reload
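Until async support lands (it's the top item on the contribution list below), a hypothetical async-aware decorator could dispatch on whether the target is a coroutine function. This is a sketch of one possible approach, not part of pytrackio's current API; track_any and the last_ms attribute are invented names.

```python
import asyncio
import inspect
import time
from functools import wraps

def track_any(func):
    """Hypothetical decorator that times both sync and async callables."""
    if inspect.iscoroutinefunction(func):
        @wraps(func)
        async def async_wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return await func(*args, **kwargs)
            finally:
                # Record duration even if the awaited call raised
                async_wrapper.last_ms = (time.perf_counter() - start) * 1000
        return async_wrapper

    @wraps(func)
    def sync_wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            sync_wrapper.last_ms = (time.perf_counter() - start) * 1000
    return sync_wrapper

@track_any
async def fetch(user_id: int) -> dict:
    await asyncio.sleep(0.01)
    return {"id": user_id}

result = asyncio.run(fetch(7))
```

The dispatch-on-iscoroutinefunction shape is what would let a FastAPI handler be decorated directly instead of delegating to a sync helper as above.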
kaggle_pipeline.py
from pytrackio import track, timer, counter, report
import time, random

@track
def load_train_csv(path: str) -> dict:
    time.sleep(0.08)
    counter("train_rows").increment(10_000)
    return {"shape": (10_000, 47)}

@track
def feature_engineering(train, test) -> tuple:
    time.sleep(0.15)  # often the slowest step
    return train, test

@track(name="xgboost_cv")
def cross_validate(data, folds: int = 5) -> float:
    time.sleep(0.6 * folds / 5)
    counter("cv_folds_run").increment(folds)
    return 0.82 + random.uniform(-0.02, 0.02)

BASE = "/kaggle/input/competition"
with timer("full_competition_pipeline"):
    train = load_train_csv(f"{BASE}/train.csv")
    test = load_train_csv(f"{BASE}/test.csv")
    train, test = feature_engineering(train, test)
    score = cross_validate(train, folds=5)
    print(f"CV Score: {score:.4f}")

report()
# Immediately see: feature_engineering + xgboost_cv are bottlenecks
production_alerting.py
from pytrackio import track, get_registry, counter
import threading, time, random

ERROR_RATE_THRESHOLD = 5.0   # percent
LATENCY_THRESHOLD_MS = 500   # milliseconds

def continuous_monitor() -> None:
    registry = get_registry()
    while True:
        for s in registry.all_summaries():
            if s.error_rate > ERROR_RATE_THRESHOLD:
                print(f"[CRITICAL] {s.name}: {s.error_rate:.1f}% errors")
            if s.avg_ms > LATENCY_THRESHOLD_MS:
                print(f"[WARNING] {s.name}: {s.avg_ms:.0f}ms latency")
        time.sleep(30)

# Start as daemon — dies with the main process
threading.Thread(target=continuous_monitor, daemon=True).start()

@track
def process_payment(txn_id: str, amount: float) -> bool:
    time.sleep(random.uniform(0.05, 0.6))
    if random.random() < 0.08:  # 8% — will trigger alert
        raise RuntimeError("Payment processor timeout")
    return True

# Drive some traffic so the monitor has data to inspect
for i in range(200):
    try:
        process_payment(f"TXN-{i:05d}", 49.99)
    except RuntimeError:
        pass
Design principles

Built to stay out of your way

Every technical decision is deliberate.

🔒 Thread safe
threading.Lock guards every write. Works with gunicorn, uvicorn, and any multi-threaded server.

🌐 Zero network
All data stays in-process. No HTTP, no disk, no sockets. Nothing can fail or add latency.

Exception transparency
pytrackio records the error and re-raises it. It never hides your bugs; it counts them.

💡 Memory efficient
A single in-process dict. 10M calls across 1,000 functions stay well under 5 MB.
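The thread-safety principle is easy to demonstrate in isolation. The standalone demo below (not pytrackio code) hammers a lock-guarded counter from eight threads; with the lock, the final total is always exact.

```python
import threading

class SafeCounter:
    """Lock-guarded integer counter, mirroring the design described above."""
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._value = 0

    def increment(self, n: int = 1) -> None:
        with self._lock:
            self._value += n

    @property
    def value(self) -> int:
        with self._lock:
            return self._value

counter = SafeCounter()
threads = [
    threading.Thread(target=lambda: [counter.increment() for _ in range(10_000)])
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock held on every write, the total is exactly 8 * 10_000
print(counter.value)
```

Without the lock, interleaved read-modify-write cycles can drop increments under free-threaded or heavily contended workloads, which is why every write path is guarded.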
Comparison

pytrackio vs the alternatives

Choose the right tool for your context.

Feature               pytrackio    Prometheus   StatsD   Datadog   cProfile
Setup time            30 seconds   Hours        30 min   Hours     0 sec
pip packages needed   1            2+           1+       3+        0
External server       None         Yes          Yes      Yes       None
Zero dependencies     Yes          No           No       No        Yes
Error tracking        Yes                                          Limited
Production ready      Yes
Thread safe           Yes
Contribute

Good first issues — open now

Pick a scope that matches your experience level. Every PR welcome.

Easy · @track for class methods: OOP projects can't use it today; a well-scoped first PR
Easy · JSON / CSV export: expose report() as structured data for logging pipelines
Easy · Better docstrings: improve IDE autocomplete and type hints
Easy · More real-world examples: help newcomers understand use cases fast
Medium · Async @track support: the #1 open gap; modern Python is async
Medium · p95 / p99 latency: critical for SLA monitoring in production
Medium · HTML dashboard report: beautiful, shareable reports for stakeholders
Hard · Prometheus adapter: bridge to existing monitoring stacks
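For the p95/p99 issue, one possible starting point is a nearest-rank percentile over the recorded latency samples, using only the standard library. This is an illustration of the math, not a committed design; percentile() and samples_ms are invented names.

```python
def percentile(samples_ms: list[float], pct: int) -> float:
    """Nearest-rank percentile over recorded latencies (pct in 1..100)."""
    if not samples_ms:
        raise ValueError("no samples recorded")
    ordered = sorted(samples_ms)
    # Nearest-rank: ceil(pct/100 * n), then convert to a 0-based index
    rank = max(1, -(-len(ordered) * pct // 100))  # ceiling division
    return ordered[int(rank) - 1]

# One slow outlier among otherwise fast calls
latencies = [12.0, 15.0, 11.0, 400.0, 13.0, 14.0, 16.0, 12.5, 13.5, 15.5]
p95 = percentile(latencies, 95)  # -> 400.0, surfaces the outlier avg() would hide
p50 = percentile(latencies, 50)  # -> 13.5, the typical call
```

Nearest-rank is simple and allocation-free per query; a real implementation would also need to bound memory per function (e.g. a reservoir or ring buffer) rather than keeping every sample.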
Contribute in 3 steps
# 1. Fork & clone
git clone https://github.com/YOUR-USERNAME/pytrackio.git
cd pytrackio && pip install -e . && pip install pytest pytest-cov

# 2. Create a branch & build
git checkout -b feature/async-track-support
# make your changes in pytrackio/ and tests/
python -m pytest tests/ -v   # all must pass

# 3. Submit your PR
git commit -m "feat: add async support for @track decorator"
git push origin feature/async-track-support
# then open a PR at github.com/danshu3007-lang/pytrackio

Star it. Fork it. Ship it.

Built by Deepanshu — Data Analyst & open source developer.
Contributions, issues, and PRs are always welcome.