ML System Design: Build a Fraud Detection System

Fraud detection is one of the highest-stakes ML applications — a false negative costs money, a false positive costs a customer. Companies like Stripe, PayPal, Square, and every major bank run sophisticated fraud detection systems. This is a frequent ML system design question at fintech companies and major tech firms.

Step 1: Problem Scoping

Clarifying questions:

  • What type of fraud? Payment fraud (card-not-present), account takeover, promo abuse, identity fraud?
  • When is the decision made? Before authorization (synchronous, <100ms), or post-hoc review (asynchronous)?
  • What’s the cost asymmetry? Fraud loss vs. false-positive friction cost for legitimate users.
  • What regulatory requirements apply? PCI DSS, BSA/AML, GDPR — affect data retention and explainability.

Assume: real-time payment fraud detection, <100ms decision required, 10M transactions/day.

Step 2: What Makes Fraud Hard

  • Extreme class imbalance: <0.1% of transactions are fraudulent. Accuracy is useless — a model that always predicts “not fraud” is 99.9% accurate.
  • Adversarial adaptation: Fraudsters observe model behavior and adapt. Static models degrade within weeks.
  • Delayed labels: Chargebacks arrive 30-90 days after the transaction, so the model trains on labels that are weeks to months stale.
  • Distribution shift: Holiday spending, new merchant categories, international expansion all shift feature distributions.
  • Explainability requirements: Regulation requires you to tell a customer why their transaction was declined.
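The delayed-label problem has a direct consequence for training-set construction: only transactions old enough for a chargeback to have arrived carry trustworthy labels. A minimal sketch (the 90-day maturation window and field names are illustrative assumptions):

```python
from datetime import datetime, timedelta

def select_training_rows(transactions, as_of, maturation_days=90):
    """Keep only transactions old enough that a chargeback would
    already have arrived, so their labels can be trusted."""
    cutoff = as_of - timedelta(days=maturation_days)
    return [t for t in transactions if t["timestamp"] <= cutoff]

txns = [
    {"id": 1, "timestamp": datetime(2024, 1, 1), "label": 0},
    {"id": 2, "timestamp": datetime(2024, 5, 1), "label": 0},  # label not yet mature
]
mature = select_training_rows(txns, as_of=datetime(2024, 6, 1))
# only transaction 1 falls outside the 90-day maturation window
```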

Step 3: Feature Engineering

Transaction-level features:

  • Amount, merchant category, time of day, device type, IP geolocation
  • Distance between billing address and IP geolocation
  • Is this a new merchant for this cardholder?
  • Velocity: number of transactions in last 1h / 24h / 7d

Account-level features (computed features, not raw data):

  • Average transaction amount in last 30 days
  • Deviation of current amount from personal baseline (z-score)
  • International transaction ratio for this account
  • Time since last transaction
  • Device fingerprint: new device? Seen on other fraud accounts?
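The personal-baseline deviation is just a z-score against the account's stored statistics. A sketch, assuming the baseline hash holds a 30-day mean and standard deviation (field names are illustrative):

```python
def amount_zscore(amount: float, baseline: dict) -> float:
    """Deviation of the current amount from the account's 30-day baseline."""
    mean = baseline.get("amount_mean_30d", 0.0)
    std = baseline.get("amount_std_30d", 0.0)
    if std == 0:
        return 0.0  # no observed variation (or new account); treat as baseline
    return (amount - mean) / std

z = amount_zscore(500.0, {"amount_mean_30d": 50.0, "amount_std_30d": 30.0})
# z = 15.0 — a $500 charge on an account averaging $50 is a strong signal
```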

Graph features:

  • Is this merchant IP seen in other fraud cases?
  • Is this card connected to other fraud-flagged accounts via shared device/email?
  • Graph Neural Networks can embed the transaction graph for rich features
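A full GNN is rarely the first step. The shared-device signal can be approximated with a plain index from device fingerprint to accounts; this sketch (class and field names are illustrative) flags an account that shares a device with a known-fraud account:

```python
from collections import defaultdict

class DeviceGraph:
    """Index device fingerprints to the accounts seen on them."""

    def __init__(self):
        self.device_accounts = defaultdict(set)
        self.fraud_accounts = set()

    def record(self, device_id: str, account_id: str, is_fraud: bool = False):
        self.device_accounts[device_id].add(account_id)
        if is_fraud:
            self.fraud_accounts.add(account_id)

    def linked_to_fraud(self, device_id: str, account_id: str) -> bool:
        """True if another account on this device is fraud-flagged."""
        neighbors = self.device_accounts[device_id] - {account_id}
        return bool(neighbors & self.fraud_accounts)

g = DeviceGraph()
g.record("dev-1", "acct-A", is_fraud=True)
g.record("dev-1", "acct-B")
g.linked_to_fraud("dev-1", "acct-B")  # True: shares a device with acct-A
```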

Feature computation pipeline:

import redis
from datetime import datetime

class FeatureStore:
    def __init__(self):
        self.redis = redis.Redis(host='feature-store-redis')

    def get_velocity_features(self, account_id: str, transaction_time: datetime) -> dict:
        """Compute velocity features from real-time event stream."""
        pipe = self.redis.pipeline()

        # Count transactions in sliding windows
        now_ts = int(transaction_time.timestamp())
        windows = {'1h': 3600, '24h': 86400, '7d': 604800}

        for window_name, window_seconds in windows.items():
            key = f"txn_count:{account_id}"
            cutoff = now_ts - window_seconds
            pipe.zcount(key, cutoff, now_ts)

            # members encode "txn_id:amount" (unique per transaction);
            # the score is the timestamp, so the same window filter applies
            amount_key = f"txn_amount:{account_id}"
            pipe.zrangebyscore(amount_key, cutoff, now_ts)

        results = pipe.execute()

        features = {}
        for i, (window_name, _) in enumerate(windows.items()):
            count = results[i * 2]
            amount_data = results[i * 2 + 1]
            # parse the amount back out of each "txn_id:amount" member
            amounts = [float(m.decode().rsplit(':', 1)[1]) for m in amount_data]

            features[f'txn_count_{window_name}'] = count
            features[f'txn_amount_sum_{window_name}'] = sum(amounts) if amounts else 0
            features[f'txn_amount_mean_{window_name}'] = (
                sum(amounts) / len(amounts) if amounts else 0
            )

        return features

    def get_account_baseline(self, account_id: str) -> dict:
        """Retrieve pre-computed account statistics."""
        baseline = self.redis.hgetall(f"baseline:{account_id}")
        return {k.decode(): float(v) for k, v in baseline.items()}

Step 4: Model Architecture

Three-layer decision system:

Layer 1 — Hard rules (block/allow, <1ms):

  • Deny: card on blocklist, transaction from OFAC-sanctioned country
  • Allow: verified recurring subscription, whitelisted merchant
  • Handles ~30% of volume with zero ML cost
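Layer 1 reduces to a handful of ordered checks that short-circuit before any model call. A sketch (the set contents and transaction fields are placeholders, not real data):

```python
from typing import Optional

BLOCKLIST = {"4111111111111111"}           # hypothetical blocklisted card numbers
SANCTIONED = {"KP", "IR"}                  # hypothetical sanctioned country codes
ALLOWLISTED_MERCHANTS = {"m_subscriptions"}

def hard_rules(txn: dict) -> Optional[str]:
    """Return 'deny'/'allow' if a rule fires, or None to fall through to the ML layer."""
    if txn["card"] in BLOCKLIST or txn["country"] in SANCTIONED:
        return "deny"
    if txn["merchant_id"] in ALLOWLISTED_MERCHANTS and txn.get("recurring"):
        return "allow"
    return None  # remaining ~70% of traffic continues to Layer 2

hard_rules({"card": "4242", "country": "US", "merchant_id": "m_x"})  # None
```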

Layer 2 — Real-time ML model (<50ms):

  • LightGBM or XGBoost on 100-200 engineered features
  • Outputs a score in [0, 1]; thresholds map it to approve, decline, or review
  • Model served via ONNX runtime for fast inference

Layer 3 — Async review queue:

  • Transactions with medium scores (e.g., 0.3-0.6) go to human review
  • Deep learning model (LSTM on transaction sequence, or Transformer) runs asynchronously
  • Result updates account standing for future transactions
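The layers compose into a single routing function on the Layer 2 score. The thresholds below are the example values from above; in practice they would be tuned against the false-positive cost budget:

```python
def route(score: float, review_low: float = 0.3, review_high: float = 0.6) -> str:
    """Map a Layer-2 model score to a decision."""
    if score < review_low:
        return "approve"
    if score <= review_high:
        return "review"   # queued for human review + async deep model
    return "decline"

[route(s) for s in (0.05, 0.45, 0.9)]  # ['approve', 'review', 'decline']
```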

Step 5: Handling Class Imbalance

import lightgbm as lgb
from sklearn.model_selection import StratifiedKFold
import numpy as np

def train_fraud_model(X_train, y_train):
    # Calculate class imbalance ratio
    fraud_rate = y_train.mean()
    scale_pos_weight = (1 - fraud_rate) / fraud_rate  # ~1000 for 0.1% fraud rate

    params = {
        'objective': 'binary',
        'metric': ['auc', 'binary_logloss'],
        'scale_pos_weight': scale_pos_weight,  # Weight positive class
        'learning_rate': 0.05,
        'num_leaves': 63,
        'min_child_samples': 50,  # Prevent overfit on rare fraud cases
        'feature_fraction': 0.8,
        'bagging_fraction': 0.8,
        'bagging_freq': 5,
        'verbose': -1
    }

    # Use stratified k-fold to preserve fraud ratio in each fold
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    models = []

    for fold, (train_idx, val_idx) in enumerate(skf.split(X_train, y_train)):
        X_fold_train, X_fold_val = X_train[train_idx], X_train[val_idx]
        y_fold_train, y_fold_val = y_train[train_idx], y_train[val_idx]

        dtrain = lgb.Dataset(X_fold_train, label=y_fold_train)
        dval = lgb.Dataset(X_fold_val, label=y_fold_val, reference=dtrain)

        model = lgb.train(
            params,
            dtrain,
            num_boost_round=1000,
            valid_sets=[dval],
            callbacks=[lgb.early_stopping(50), lgb.log_evaluation(100)]
        )
        models.append(model)

    return models  # Ensemble predictions for robustness

def predict_ensemble(models, X):
    """Average predictions from all fold models."""
    predictions = np.array([m.predict(X) for m in models])
    return predictions.mean(axis=0)

Step 6: Explainability

Regulatory and customer service requirements demand explanations:

import shap

def explain_fraud_decision(model, transaction_features, feature_names):
    """Generate human-readable explanation for fraud decision."""
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(transaction_features)
    # older shap versions return a per-class list for binary models;
    # normalize to the positive-class contributions
    if isinstance(shap_values, list):
        shap_values = shap_values[1]

    # Rank features by absolute contribution for this transaction
    feature_impacts = list(zip(feature_names, shap_values[0]))
    feature_impacts.sort(key=lambda x: abs(x[1]), reverse=True)

    top_factors = feature_impacts[:5]

    # Map to customer-facing messages
    explanations = []
    for feature, impact in top_factors:
        if impact > 0:  # Increases fraud probability
            if 'velocity' in feature:
                explanations.append("Unusual number of recent transactions")
            elif 'amount' in feature:
                explanations.append("Transaction amount higher than your typical spending")
            elif 'location' in feature:
                explanations.append("Transaction location differs from usual patterns")

    return {
        'fraud_score': float(model.predict(transaction_features)[0]),
        'top_factors': top_factors,
        'customer_explanation': explanations
    }

Step 7: Monitoring and Retraining

Key metrics to monitor:

  • Fraud detection rate (recall on labeled fraud) — primary effectiveness metric
  • False positive rate — legitimate transactions declined; direct customer impact
  • Chargeback rate — lagging indicator of missed fraud
  • Score distribution PSI — early warning of distribution shift
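PSI compares binned frequencies of the score distribution against a baseline sample. A minimal numpy sketch (decile bins and the 0.2 alert threshold are conventional choices, not from this system):

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline score sample and a current one."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover out-of-range scores
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    eps = 1e-6  # avoid log(0) for empty bins
    e_frac, a_frac = e_frac + eps, a_frac + eps
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.beta(1, 20, 50_000)  # fraud scores skewed toward 0
psi(baseline, baseline)             # 0.0 for identical samples
# common rule of thumb: PSI > 0.2 signals a meaningful shift
```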

Retraining cadence: Weekly with a sliding 90-day window. Use champion/challenger setup — new model runs in shadow mode for 1 week before promotion.

Depth Levels

Junior: Choose features, explain class imbalance handling, describe precision/recall trade-off.

Senior: Design three-layer system, velocity features via Redis, SHAP for explainability, retraining pipeline.

Staff: Graph neural networks for connected fraud ring detection, online learning for real-time adaptation, regulatory compliance (AML, PCI DSS), false positive cost modeling for threshold optimization.

Related ML Topics

  • Handling Imbalanced Datasets — fraud rates of 0.01-0.1 percent require scale_pos_weight, SMOTE, or focal loss; standard accuracy is meaningless at this imbalance
  • Classification Metrics — fraud detection operates at a fixed false positive rate budget; find the threshold that maximizes recall under that constraint
  • How to Detect Model Drift in Production — fraud models degrade faster than most due to adversarial adaptation; PSI on score distribution and weekly retraining are standard practice
  • Design a Payment System — fraud detection is a core component of payment system design; the synchronous pre-authorization decision maps to the ML inference path
  • ML System Design: Build a Spam Classifier — spam and fraud share the same three-layer architecture: hard rules, real-time ML, async deep model; compare the different latency constraints