
in reply to @0xabad1dea's post:

Every time I see something like this, I get angry that, on my Commodore 64 in the early 1980s, the BASIC interpreter would just...quietly fix that. They called it "de-fuzzing," and apparently everybody has since agreed that was a terrible idea and it's a much better use of time to require programmers to explicitly round things or store money as pennies and manually insert decimal points, instead of fixing the problem once.
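For anyone who hasn't hit this: here's a minimal Python sketch (not BASIC, obviously) of the workarounds I'm grumbling about, namely rounding explicitly every time you display a value, or keeping money as integer pennies and inserting the decimal point only when you format it.

```python
# The classic surprise: 0.1 and 0.2 have no exact binary representation.
subtotal = 0.1 + 0.2
print(subtotal)              # 0.30000000000000004

# Workaround 1: remember to round explicitly everywhere you display a value.
print(round(subtotal, 2))    # 0.3

# Workaround 2: store money as integer pennies and insert the decimal
# point yourself when formatting.
total_cents = 10 + 20        # exact integer arithmetic
print(f"${total_cents // 100}.{total_cents % 100:02d}")   # $0.30
```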

How does "de-fuzzing" work? Is the BASIC interpreter just assuming nobody actually ever wants the value 0.30000000000000004, or is it quietly switching to a decimal type in the background, because when a human types 0.1 they mean one-tenth, which can't be exactly represented with a finite number of digits in base 2? The former behaviour is wrong (the whole point of floating point is being able to handle very small numbers with high precision and very large numbers with lower precision at the same time), and the latter is the kind of thing you should probably invoke deliberately rather than have the machine arbitrarily and magically pick a data type for you based on what it thinks you want.
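To make the distinction concrete, here's roughly what I mean by the two behaviours, sketched in Python rather than anything the C64 actually did:

```python
from decimal import Decimal

x = 0.1 + 0.2                 # stored as 0.30000000000000004

# (a) Keep binary floats but print fewer significant digits than the
#     format carries, so the usual representation error never shows.
print(f"{x:.9g}")             # 0.3
print(x == 0.3)               # False: only the display changed, not the bits

# (b) Deliberately use a decimal type, so one-tenth really is one-tenth.
y = Decimal("0.1") + Decimal("0.2")
print(y)                      # 0.3
print(y == Decimal("0.3"))    # True
```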

I can't find a real reference right now, unfortunately, but my old programming-languages lecture notes suggest that the PRINT routine checked for bit patterns (in the machine's own floating-point format) indicating that integer values went into a floating-point operation but a non-integer came out. And yes, I remember seeing articles about how you can see it in action by asking it to print the results of floating-point arithmetic and watching it round the printed number, even though that rounding is strictly incorrect: the stored value still carries the error.
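If that recollection is right, the heuristic would look something like this. This is a Python sketch of what I think the notes describe, not the actual ROM routine, and defuzz_print is just a name I made up:

```python
import math

def defuzz_print(result: float, *operands: float) -> str:
    """Rumoured heuristic: if every operand was an integer-valued float
    but the result came out non-integral, treat the fractional part as
    accumulated error and round it away before printing."""
    if all(op.is_integer() for op in operands) and not result.is_integer():
        return str(round(result))
    return repr(result)

# Integer inputs, mathematically integer answer, non-integer float result:
fuzzy = math.sqrt(2.0) ** 2                     # 2.0000000000000004
print(defuzz_print(fuzzy, 2.0, 2.0))            # '2'   (fuzz rounded away)

# Non-integer inputs fall outside the heuristic, so the error is shown:
print(defuzz_print(0.1 + 0.2, 0.1, 0.2))        # '0.30000000000000004'
```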

But at the time, I think the idea was that, if you needed the extra control, it was time to learn assembly language or invest in a compiler for a different language.