One of the reasons I love computers so much is because I hate math. Math and I do not get along, so having a magic device that can do most any math problem I throw at it instantly and accurately has a great deal of appeal for me.
Except, well, sometimes that accuracy piece isn’t all it’s cracked up to be.
Let’s take a look at an extremely simple bit of code:
var result = 0.1 + 0.2;
After running this code you’d expect the result variable to contain the value 0.3, but that’s not what happens. When this code is run, the value stored in the result variable will, in fact, be 0.30000000000000004.
The reason for this behavior has to do with how computers translate back and forth between the base 10 numbering system we humans are used to and the base 2 numbering system they use internally. If you’re curious, check out the aptly named 0.30000000000000004.com website for the technical details (and links to even more technical details).
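You can actually see the base 2 problem from JavaScript itself. A sketch: 0.1 has no finite binary representation (its binary digits repeat forever), so the computer stores a rounded approximation, and toString(2) exposes that.

```javascript
// 0.1 cannot be written exactly in base 2 -- its binary expansion
// repeats 0011 forever, so the stored double is a rounded-off
// approximation. toString(2) shows the truncated binary digits.
console.log((0.1).toString(2)); // starts with 0.000110011001100110011...

// Adding two such approximations gives a result that is itself
// slightly off from the "real" answer of 0.3.
console.log(0.1 + 0.2); // 0.30000000000000004
```

Both 0.1 and 0.2 are already a tiny bit off before the addition even happens; the sum just makes the error big enough to show up when the result is printed.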
Suffice it to say, you should be careful when doing floating point math. Run some tests and make sure you’re getting the results you expect. If you’re not, make changes to how you do math based on what you now know about how computers handle floating point numbers.