Double precision is not magic - consider using fixed-point in many cases.
Context: this is part of a series of reposts of some of my more popular blog posts. It was originally posted in May 2006: http://tomforsyth1000.github.io/blog.wiki.html#%5B%5BA%20matter%20of%20precision%5D%5D. This version has been edited slightly to update for the intervening 17 years!
In retrospect it's really two separate posts in one. It starts as a rant against double precision, and then turns into a love-letter about fixed point. Even if you don't hate double precision with quite the burning ferocity that I do, please stay for the talk about the merits of fixed precision, because it keeps being useful in all sorts of places. Anyway, to the post...
Floating-point numbers are brilliant - they have decent precision and a large range. But they still only have 32 bits, so there's still only 4 billion different ones (actually fewer than that if you ignore all the varieties of NaNs and infs). So they have tradeoffs and weaknesses just like everything else. You need to know what they are, and what happens for example when you subtract two nearly-equal numbers - most of the significant bits cancel out, and you're left with imprecision.