April 20, 2026
I was playing around in the JavaScript console the other day when I stumbled on something that made me do a double-take.
Try this in your browser's DevTools:
```javascript
Math.floor(49.9)               // → 49 ✓
Math.floor(49.99)              // → 49 ✓
Math.floor(49.999)             // → 49 ✓
Math.floor(49.9999999999999)   // → 49 ✓ (13 nines)
Math.floor(49.99999999999999)  // → 49 ✓ (14 nines)
Math.floor(49.999999999999999) // → 50 ← WHAT?!
```

Add just one more nine, the 15th, and suddenly Math.floor() snaps to 50. But why?
The Real Culprit: IEEE 754 Double Precision
Here's the thing: it's not Math.floor() that's broken. It's the number itself, or more precisely, how computers store floating-point numbers.
How JavaScript Actually Stores Numbers
JavaScript (like Python, Java, C++, and virtually all modern languages) stores all numbers as IEEE 754 double-precision binary64 values. Every number occupies exactly 64 bits, divided into three parts:
- 1 bit for the sign (positive or negative)
- 11 bits for the exponent (biased by 1023)
- 52 bits for the significand (also called mantissa or fraction)
Here's the crucial detail that catches most people off guard: there's an implicit leading 1. The 52 stored bits represent the fractional part after a leading 1 that the computer assumes is always there (for normal numbers). This gives you effectively 53 bits of precision, not 52.
The value of any normal double-precision float is calculated as:
$\text{value} = (-1)^{\text{sign}} \times 1.\text{fraction} \times 2^{\text{exponent} - 1023}$
According to Wikipedia's IEEE 754 double-precision specification, those 53 bits give you approximately 15 to 17 significant decimal digits of precision. The exact math: $53 \times \log_{10}(2) \approx 15.95$ decimal digits.
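That digit-count arithmetic is easy to confirm in the same console where we started:

```javascript
// 53 bits of binary precision correspond to just under 16 decimal digits:
console.log(53 * Math.log10(2)); // ≈ 15.95
```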
Why Decimal Fractions Fail
Here's where the real problem begins: computers work in binary (base-2), but we write numbers in decimal (base-10). Many decimal fractions have no exact binary representation—they become infinite repeating binary fractions, just like $1/3 = 0.333...$ in decimal.
For example, the decimal number 0.1 in binary is:
$0.1_{10} = 0.0001100110011..._2$
Those 0011 digits repeat forever. But your computer only has 52 bits to store the fraction. It must cut off—or round—the infinite sequence somewhere.
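You can see the result of that cutoff directly by asking JavaScript for more digits than it normally prints:

```javascript
// 0.1 looks exact at default precision, but extra digits reveal the rounding:
console.log((0.1).toString());  // "0.1" (the shortest string that round-trips)
console.log((0.1).toFixed(20)); // "0.10000000000000000555"
```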
The Phenomenon: When Precision Runs Out
Machine Epsilon and the Gap Between Numbers
As explained in Wikipedia's article on machine epsilon, there is a fundamental limit to how precisely we can represent numbers. Machine epsilon (ε) is defined as the gap between 1.0 and the next larger representable floating-point number.
For double-precision floats:
$\varepsilon = 2^{-52} \approx 2.220446049250313 \times 10^{-16}$
That corresponds to roughly 16 significant decimal digits. This isn't a coincidence: it's exactly why things fall apart around the 15th nine.
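JavaScript exposes this constant as `Number.EPSILON`, and you can watch additions below the threshold vanish:

```javascript
// Machine epsilon is a built-in constant:
console.log(Number.EPSILON === 2 ** -52);  // true

// A full epsilon added to 1 is representable:
console.log(1 + Number.EPSILON === 1);     // false

// Well under half the gap, the addition rounds away to nothing:
console.log(1 + Number.EPSILON / 4 === 1); // true
```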
ULPs: Units in the Last Place
As detailed in this guide on machine epsilon and ULPs, a ULP (Unit in the Last Place) measures the distance between two adjacent representable floating-point numbers at a specific magnitude.
The key insight: the spacing between representable numbers is not constant. It depends on the exponent:
- Near 1.0, the gap is $2^{-52} \approx 2.22 \times 10^{-16}$
- Near 50 (anywhere in $[32, 64)$), the gap is $2^{-47} \approx 7.1 \times 10^{-15}$
- Near 1,000,000 (in $[2^{19}, 2^{20})$), the gap grows to $2^{-33} \approx 1.16 \times 10^{-10}$
Floating-point numbers are logarithmically spaced—the larger the value, the wider the gap between representable neighbors.
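A small sketch makes those widening gaps concrete. The helper name `ulp` is my own, not a built-in, and it assumes a positive, finite, normal argument:

```javascript
// Spacing between adjacent doubles at magnitude x: 2^(binary exponent - 52).
function ulp(x) {
  const exponent = Math.floor(Math.log2(x)); // unbiased binary exponent of x
  return 2 ** (exponent - 52);
}

console.log(ulp(1));       // 2.220446049250313e-16  (Number.EPSILON)
console.log(ulp(50));      // 7.105427357601002e-15  (2^-47)
console.log(ulp(1000000)); // 1.1641532182693481e-10 (2^-33)
```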
The Exact Breakdown at 15 Nines
Let's trace through what happens when JavaScript parses 49.999999999999999:
- The parser reads 15 nines after the decimal point
- It attempts to convert this decimal string to binary64 format
- The true mathematical value is $50 - 10^{-15}$, i.e. exactly 49.999999999999999
- The gap (ULP) between adjacent doubles around 50 is $2^{-47} \approx 7.1 \times 10^{-15}$
- Our number differs from 50 by only $10^{-15}$, well under half a ULP
- IEEE 754's "round to nearest, ties to even" rule kicks in
- The number rounds to the nearest representable float: exactly 50.0
At 14 nines (49.99999999999999), the difference from 50 is $10^{-14}$, comfortably more than half a ULP, so the parsed value stays below 50. At the 15th nine we cross under the half-ULP threshold, and the literal rounds up to exactly 50.
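Both sides of the threshold are easy to verify in the console (the comparisons run on the parsed values, not the strings):

```javascript
console.log(49.99999999999999 === 50);  // false, 14 nines still parse below 50
console.log(49.999999999999999 === 50); // true, 15 nines parse as exactly 50

console.log(Math.floor(49.99999999999999));  // 49
console.log(Math.floor(49.999999999999999)); // 50
```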
Why 15-17 Digits?
As explained in this Stack Overflow answer on floating-point precision, the 15-17 digit range comes from the mathematical properties of base conversion:
- 15 digits: Any decimal string with ≤15 significant digits, converted to double-precision and back, will match the original
- 17 digits: Any double-precision value, converted to a 17-digit decimal string and back, will match the original
The gap exists because some 16-digit numbers can't be represented exactly, but round-trip conversion with 17 digits guarantees preservation.
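Both round-trip guarantees can be exercised with `toPrecision`; the sample values below are arbitrary illustrative choices:

```javascript
// 17 significant digits are enough to round-trip any double:
const x = 0.1 + 0.2;                          // 0.30000000000000004
console.log(Number(x.toPrecision(17)) === x); // true

// Any decimal string with 15 significant digits survives a trip through binary64:
const s = "49.9999999999999";                 // 15 significant digits (13 nines)
console.log(Number(s).toPrecision(15) === s); // true
```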
What Math.floor() Actually Receives
As MDN's Math.floor() documentation notes, floor() returns the largest integer ≤ x. But by the time floor() executes, the damage is already done.
Here's the sequence:
```javascript
// You write:
Math.floor(49.999999999999999)

// JavaScript parses the literal and rounds it to the nearest representable float:
// → 50.0 (exactly, as a binary64 value)

// Math.floor receives exactly 50.0:
Math.floor(50.0) // → 50
```

The quirk isn't in the function; it's in the literal parsing that happens before any of your code executes. The IEEE 754 format simply cannot distinguish 49.999999999999999 from 50.0, so the parser rounds to the nearest representable value.
The Binary Reality
Let's look at what the actual bit patterns look like. According to Cornell's CS 357 floating-point notes, the representation of 50.0 is:
```
50.0 in binary scientific notation:
= 1.5625 × 2^5
= 1.1001 × 2^5 (binary)

Sign: 0 (positive)
Exponent: 5 + 1023 = 1028 = 10000000100 (binary)
Mantissa: .1001 (the leading 1. is implicit)

Full 64-bit representation:
0 10000000100 1001000000000000000000000000000000000000000000000000
```

In proper notation: $50 = 1.5625 \times 2^5 = 1.1001_2 \times 2^5$, with biased exponent $5 + 1023 = 1028$ (bias of $1023$).
The number 49.99999999999999 (14 nines) rounds to a slightly different bit pattern that's still less than 50. But add one more nine, and the rounding algorithm decides 50.0 is closer.
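You don't have to take these bit patterns on faith. A small sketch using a `DataView` (the helper name `bits64` is mine) dumps the raw 64 bits of any double:

```javascript
// Reinterpret a double's 8 bytes as a 64-character bit string (big-endian).
function bits64(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // big-endian byte order by default
  let bits = "";
  for (let i = 0; i < 8; i++) {
    bits += view.getUint8(i).toString(2).padStart(8, "0");
  }
  return bits;
}

const b = bits64(50.0);
console.log(b.slice(0, 1));  // "0"            sign
console.log(b.slice(1, 12)); // "10000000100"  biased exponent (1028)
console.log(b.slice(12));    // "1001" followed by 48 zeros: the mantissa
```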
Why This Matters Beyond JavaScript
This isn't a JavaScript quirk—it's universal to all IEEE 754-compliant systems. You can reproduce this in:
- Python: `import math; math.floor(49.999999999999999)` → `50`
- C/C++: `floor(49.999999999999999)` → `50.0`
- Java: `Math.floor(49.999999999999999)` → `50.0`
The phenomenon extends to other "almost integers." Try these:
```javascript
// Precision limits at work in other "almost" comparisons:
0.1 + 0.2 === 0.3                      // false (the left side is 0.30000000000000004)
1.0000000000000001 === 1               // true (the difference is too small to represent)
9999999999999999 === 10000000000000000 // true (!!)
```

The Takeaway
If you're ever debugging why a calculation "should" be 49.999... but rounds to 50, remember: you're not actually passing 49.999999999999999 to the function. You're passing the closest binary64 approximation that IEEE 754 can represent, and at the limit of precision, that approximation is sometimes exactly 50.
The 15th nine is where the illusion shatters. It's not a bug in your code, or in JavaScript—it's a fundamental property of finite binary arithmetic trying to approximate infinite decimal precision.
Practical Implications
- Never compare floating-point numbers with `===` for equality
- Use epsilon-based comparisons instead, e.g. `Math.abs(a - b) < 1e-10`
- Be suspicious of any decimal literal with more than 15 significant digits
- Financial calculations should use decimal arithmetic libraries, not binary floats
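A fixed cutoff like `1e-10` only makes sense for values near 1. A relative variant scales with the operands; the name `nearlyEqual` and the tolerance factor here are illustrative choices, not a standard API:

```javascript
// Relative comparison: the tolerance scales with the larger operand's magnitude.
function nearlyEqual(a, b, relTol = 8 * Number.EPSILON) {
  return Math.abs(a - b) <= relTol * Math.max(Math.abs(a), Math.abs(b));
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```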
Further Reading
- Double-precision floating-point format - Wikipedia
- Machine epsilon - Wikipedia
- Why IEEE 754 double precision is only accurate to ~15 digits - Stack Overflow
- Math.floor() - MDN Web Docs
- IEEE 754 Floating Point Converter — see the bit patterns for yourself
- Machine Epsilon, Rounding, and ULPs Explained
- Floating Point Representation - Cornell CS 357