Floating Point Numbers
Floating-Point Representation: Computers store all values in binary, and many decimal fractions have no exact binary equivalent. The format used to approximate real numbers is called floating-point arithmetic. Because of this, values such as 0.1, 0.2, and 0.3 cannot be represented precisely in base-2 floating point.

IEEE 754 Standard: JavaScript, for instance, always stores numbers as double-precision floating-point values following the international IEEE 754 standard. This format uses 64 bits: the fraction (mantissa) occupies bits 0 to 51, the exponent bits 52 to 62, and the sign bit 63.

The Case of 0.1: Let's represent 0.1 in 64-bit binary following the IEEE 754 standard. First, convert 0.1 from base 10 to its binary equivalent (base 2). The result is a repeating binary fraction: 0.000110011001100110011... (the pattern 0011 repeats forever). Since the expansion never terminates, the mantissa (fractional part) must be rounded off to 52 bits, so the stored value is only a close approximation of 0.1.

Adding 0.1 and 0.2: When we add 0.1 and 0.2, the result is 0.30000000000000004 rather than exactly 0.3. This tiny discrepancy comes from the rounding errors already present in the floating-point representations of the two operands.
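The base-10 to base-2 conversion described above can be sketched with the standard repeated-doubling method: multiply the fraction by 2, take the integer part as the next bit, and repeat. The function name here is our own, not from any library.

```javascript
// Convert a decimal fraction to a binary string by repeated doubling:
// at each step, the integer part of (x * 2) is the next binary digit.
function fractionToBinary(x, bits) {
  let out = "0.";
  for (let i = 0; i < bits; i++) {
    x *= 2;                  // shift one binary place to the left
    const bit = Math.floor(x); // the digit that crossed the binary point
    out += bit;
    x -= bit;                // keep only the remaining fraction
  }
  return out;
}

// The familiar repeating 0011 pattern appears immediately:
console.log(fractionToBinary(0.1, 20)); // "0.00011001100110011001"
```

Because doubling and subtracting an integer part are exact operations in binary floating point, this prints the exact leading bits of the double closest to 0.1.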
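The sign/exponent/fraction layout can be inspected directly by viewing a number's raw bytes as a 64-bit integer. This is a small sketch using typed arrays; the helper name is ours.

```javascript
// Return the 64 raw bits of a double as a binary string:
// bit 63 = sign, bits 62..52 = exponent, bits 51..0 = fraction.
function toBits(x) {
  const buf = new ArrayBuffer(8);
  new Float64Array(buf)[0] = x;               // write the double
  const raw = new BigUint64Array(buf)[0];     // reread the same bytes as u64
  return raw.toString(2).padStart(64, "0");
}

const b = toBits(0.1);
console.log(b.slice(0, 1));  // sign: "0" (positive)
console.log(b.slice(1, 12)); // exponent: "01111111011" (1019, i.e. 2^(1019-1023) = 2^-4)
console.log(b.slice(12));    // 52-bit fraction: repeating 1001, rounded up at the end
```

Note the final fraction bits end in ...1010 rather than ...1001: the infinite 0011 pattern was rounded to the nearest 52-bit value, which is exactly the approximation error the text describes.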
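The discrepancy is easy to observe, and the usual remedy is to compare with a small tolerance instead of exact equality:

```javascript
// 0.1 and 0.2 are both stored as slightly rounded values,
// so their sum carries a tiny error.
const sum = 0.1 + 0.2;
console.log(sum);          // 0.30000000000000004
console.log(sum === 0.3);  // false

// Common workaround: compare within a tolerance (epsilon) instead.
const closeEnough = Math.abs(sum - 0.3) < Number.EPSILON;
console.log(closeEnough);  // true
```

Number.EPSILON (about 2.22e-16) is the gap between 1 and the next representable double, which comfortably covers the error of roughly 5.5e-17 here.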