
tomforsyth
@tomforsyth

Double precision is not magic - consider using fixed-point in many cases.

Context: this is part of a series of reposts of some of my more popular blog posts. It was originally posted in May 2006: http://tomforsyth1000.github.io/blog.wiki.html#%5B%5BA%20matter%20of%20precision%5D%5D. This version has been edited slightly to update for the intervening 17 years!

In retrospect it's really two separate posts in one. It starts as a rant against double precision, and then turns into a love-letter to fixed point. Even if you don't hate double precision with quite the burning ferocity that I do, please stay for the talk about the merits of fixed point, because it keeps being useful in all sorts of places. Anyway, to the post...

The problem with floats

Floating-point numbers are brilliant - they have decent precision and a large range. But they still only have 32 bits, so there are still only 4 billion different ones (actually fewer than that if you ignore all the varieties of NaNs and infs). So they have tradeoffs and weaknesses just like everything else. You need to know what they are, and what happens, for example, when you subtract one number from another nearly equal one - most of the significant bits cancel, and you get imprecision.




in reply to @tomforsyth's post:

Yes this is a known-bad use of floats. The financial folks absolutely shun storing actual money using anything but fixed-point (and still BCD in some cases!), because of course you're constantly adding and subtracting small sums from big ones, and dropping LSBs is... bad.

People will not goddamn shut up about posits - they're pretty much a meme now. They're marginally better in some contexts, worse in others, and quite a bit more expensive to implement than IEEE. I don't see any interesting reason to switch.

oki dokie

e: i meant that as in i read you, i didn't want to argue with you but ok i guess i got your opinion. which i wanted. :host-stare: didn't expect that though.

+: when i get my grubby fingers on an fpga, i'll build a coprocessor just out of spite

I'm not sure what "move your origin" would accomplish, though? If the problem is that "at X distance from the origin, the problems begin", then moving your origin will let you encounter THOSE problems more easily, but you'll just get NEW problems instead when resolution drops at X units away from the NEW origin... and in fact it makes those harder to encounter, because half of the directions you can travel in will actually be INCREASING your resolution. (Which is arguably a good thing for making fewer problems occur, but I thought the idea here was to make problems easier to trigger so you can catch them, right?)

but the NEW problems that you'll encounter when THAT resolution drops at X units away from the NEW origin

No, we're not "moving the origin". Fixed-point has constant precision. There is no special direction or position. You get the same precision at (0,0) as at (10000,10000). You will discover your precision problems instantly, and know that you either need to deal with the precision you have, or you need to use more bits.

No, I mean when in your TLDR you said "Don't start your times at zero. Start them at something big - a couple of hours at least. Ideally do the same with position - by default set your origin a long way away." That will reveal the old origin's precision problems, but you'll still have new ones instead. Or did you mean just move it temporarily to find the issues, then move it back?

Oh I see what you mean. The reason to move the origin is that it shows up precision problems in bits of code that you had not previously considered. The fix for those problems should be an actual FIX, not just fudging the epsilons so it kinda works again at this new specific origin. Like... be a professional, don't do that :-)

We don't really need them. Add-with-carry scales linearly with number of bits, so all that building actual hardware does is reduce instruction cache pressure, which isn't usually a big factor.

Multiplies and divides scale badly (roughly O(n^2), though there are tricks you can play in some cases) - but so does building actual hardware. And of course the trouble with building native hardware support is that when you're not using it, it's still costing area.

I do think there's some scope for building hardware to support arbitrary-sized integer math better (the BIGNUM stuff) - just little tweaks here and there about how we deal with carries and suchlike. It doesn't help that most programming languages absolutely do not understand what a "carry flag" is, and it makes rolling your own libraries a real hassle.

Am I correct in understanding that you think 64-bit fixed-point is faster than (or roughly as fast as) doubles on a modern PC? That would be cool!

A downside of fixed point is that it needs a special implementation, and you have to jump through a few hoops to make it as debuggable as floats.

Also, multiplying positions actually makes total sense - distance calculations require multiplying the elements :) Of course sometimes Manhattan distance is good enough, but sometimes it's not...

Oh and it's more severe than one might initially think. We used (I think) 16.15 (one bit for sign) on Hammerting and that meant that if you tried to compare distances further away than 180m in any direction you'd overflow.

You never multiply a POSITION by another POSITION. First you take the difference between them, and there's no problem having THAT as a float32 - because zero actually DOES have a special meaning there: it means the two things are in the same place - so it's extremely useful to have more precision when two objects are close to each other.

Ah, fair enough! We couldn't do that because we wanted to be 100% deterministic, and, well, floats are a bit scary in that respect. But we kept it 'simple', and luckily Manhattan distance was good enough whenever distances could be long.