
How Scoring Works

Time v1.0

Your error is measured as a ratio, not a gap.

The short version

You hear a duration, then try to recreate it. We don't measure how many milliseconds you were off — we measure how proportionally off you were. Being 200ms late on a 500ms target is a bigger error than being 200ms late on a 3000ms target. Scoring uses a log-ratio curve that mirrors how time perception actually works: your brain tracks relative durations, not absolute ones. Five rounds, 0–10 per round, max 50.

Why not just measure milliseconds?

Raw millisecond error fails the fairness test. Consider two guesses that are both 200ms off: one on a 400ms target, one on a 3000ms target. The first guess is 50% wrong. The second is less than 7% wrong. They feel completely different — and they should score differently.
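The arithmetic behind that comparison, as a quick sketch (the function name `pct_error` is just illustrative):

```python
def pct_error(gap_ms: float, target_ms: float) -> float:
    """Proportional size of a miss, as a percentage of the target."""
    return 100 * gap_ms / target_ms

print(pct_error(200, 400))   # 50.0  -> feels badly wrong
print(pct_error(200, 3000))  # ≈ 6.67 -> barely noticeable
```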

This isn't just a game design decision. It reflects how human time perception actually works. The “Weber-Fechner law” describes a pattern across many senses: we detect differences proportionally, not absolutely. A just-noticeable difference in duration is roughly a fixed percentage of the duration, regardless of whether the interval is short or long. Scoring on a ratio scale means a 10% miss costs the same points whether the target was 1 second or 4 seconds.
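To see the ratio scale in action: a 10% overshoot produces exactly the same proportional error on a 1-second target as on a 4-second target. A minimal sketch (the helper name `log_error` is illustrative):

```python
import math

def log_error(guess_ms: float, target_ms: float) -> float:
    """Proportional error: absolute log of the guess/target ratio."""
    return abs(math.log(guess_ms / target_ms))

# Same 10% overshoot, very different targets, identical error:
print(log_error(1100, 1000))  # ≈ 0.0953
print(log_error(4400, 4000))  # ≈ 0.0953
```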

~7%: Just-noticeable difference for time. Most people can detect a duration difference of about 7% under ideal conditions. Below that threshold, guesses feel exact.

Log: Symmetric over/under. Being 20% too long costs the same as being 20% too short. Log-ratio scoring is symmetric by definition — |log(1.2)| equals |log(0.833)|.

The Formula

Four steps: compute the ratio, take the log to get a proportional error, apply any mode-specific grace, then use a Gaussian curve to turn that adjusted error into a score.

ratio = guess ÷ target
logError = |log(ratio)|
adjustedLogError = max(0, logError − grace)
score = 10 × exp(−k × adjustedLogError²)

Easy uses grace = 0 and k = 12. Hard uses grace = 0 and k = 16.
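The four steps above can be sketched directly in code. This is a minimal illustration, not the game's actual implementation; the names `score_round`, `EASY`, and `HARD` are made up here, but the constants match the modes above:

```python
import math

EASY = {"grace": 0.0, "k": 12.0}
HARD = {"grace": 0.0, "k": 16.0}

def score_round(guess_ms: float, target_ms: float, mode: dict) -> float:
    """Score one round on the 0-10 scale."""
    ratio = guess_ms / target_ms                      # step 1: ratio
    log_error = abs(math.log(ratio))                  # step 2: proportional error
    adjusted = max(0.0, log_error - mode["grace"])    # step 3: apply grace
    return 10.0 * math.exp(-mode["k"] * adjusted**2)  # step 4: Gaussian curve

print(round(score_round(1500, 1500, EASY), 2))  # 10.0 (perfect match)
print(round(score_round(1650, 1500, EASY), 2))  # 8.97 (10% long on Easy)
```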

k
Steepness. Higher k = sharper dropoff, so scores fall faster as the error grows. Easy uses k = 12; Hard uses the same no-grace curve with a steeper k = 16 and a wider duration range.

Because log(ratio) is used instead of the raw ratio, the curve is symmetric: 1.5× too long costs the same as 0.67× too short. Neither direction has an advantage.
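The symmetry is easy to check numerically:

```python
import math

# 1.5x too long and 1/1.5 ≈ 0.67x too short give identical log errors,
# so the formula maps them to identical scores.
long_err = abs(math.log(1.5))
short_err = abs(math.log(1 / 1.5))
print(long_err, short_err)  # both ≈ 0.4055
```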

The Curve

Score vs. ratio. The peak is at 1× (perfect match). The curve falls symmetrically in both directions on the log scale. Toggle the demo below to see Easy vs. Hard.

Score (0–10) vs. ratio (guess ÷ target). White dot tracks your position in the demo below. The x-axis is logarithmic — equal distance means equal proportional error.

Try It

Set a target duration and a guess. See how the score changes in each mode.

[Interactive demo. Example state: target 1.5s, guess 1.5s gives ratio 1.000×, 0% off, 0ms gap, score 10.00/10.]

Score Reference

What different percentage errors translate to, regardless of the target duration.

Error     Easy    Hard    Read as
Perfect   10.00   10.00   Near-perfect
±5%       9.72    9.63    Near-perfect
±10%      8.97    8.65    Strong
±15%      7.91    7.32    Decent
±20%      6.71    5.88    Rough
±25%      5.50    4.51    Rough
±33%      3.77    2.72    Very rough
±50%      1.39    0.72    Essentially wrong
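The reference values can be regenerated directly from the formula. A sketch (constants as above, grace = 0 in both modes):

```python
import math

def score(pct_off: float, k: float) -> float:
    """Score for a guess that is pct_off percent too long, with grace = 0."""
    log_error = abs(math.log(1 + pct_off / 100))
    return 10 * math.exp(-k * log_error**2)

for pct in (5, 10, 15, 20, 25, 33, 50):
    print(f"±{pct}%  Easy {score(pct, 12):.2f}  Hard {score(pct, 16):.2f}")
```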

Why a Gaussian?

The Gaussian (bell curve) shape has a flat top near perfect matches — small errors near zero don't cost much. This rewards precision without punishing the tiny timing jitter that comes from releasing a button. The curve then drops steeply, compressing all bad guesses toward zero rather than spreading them out linearly. This makes the high end of the score range feel like a real achievement.

Contrast this with a linear system: if 50ms off = 1 point deducted, then a 500ms error = 10 points gone. That's too forgiving for short targets, where a small absolute gap is a large proportional miss, and too harsh for long ones, where even a big absolute gap can be proportionally tiny. The log-Gaussian avoids both problems by tying the error to the magnitude of the target.
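The contrast is easy to see concretely. This sketch compares the hypothetical linear rule (1 point per 50ms) against the log-Gaussian Easy curve on the same 200ms miss:

```python
import math

def linear_score(gap_ms: float) -> float:
    """Hypothetical linear rule: 1 point deducted per 50 ms of error."""
    return max(0.0, 10 - gap_ms / 50)

def log_gauss_score(guess_ms: float, target_ms: float, k: float = 12) -> float:
    """Easy-mode log-Gaussian score (grace = 0)."""
    return 10 * math.exp(-k * math.log(guess_ms / target_ms) ** 2)

# Same 200 ms miss on two very different targets:
print(linear_score(200))                      # 6.0 regardless of target
print(round(log_gauss_score(600, 400), 2))    # 1.39 -- 50% long is a real miss
print(round(log_gauss_score(3200, 3000), 2))  # 9.51 -- ~6.7% long is a near-hit
```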