arbitrandomuser 9 minutes ago

Julia gets this right: casting to both double and int, it does both a floating-point compare and an integer compare, then ANDs them and returns the result.
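For illustration, a minimal Python sketch of the same idea (the function name is made up; Python's own `==` already does an exact mixed-type comparison under the hood):

```python
import math

def mixed_eq(i: int, f: float) -> bool:
    """Hypothetical exact int/float equality: the float must be finite
    and integral, and converting it back to int must reproduce i."""
    if not math.isfinite(f):
        return False
    if f != math.floor(f):      # a non-integral float can't equal an int
        return False
    return int(f) == i          # exact bigint comparison, no rounding

# 2**53 + 1 doesn't fit in a double's 53-bit mantissa, so casting the
# int to float rounds it to 2.0**53 and a float-only compare is fooled.
print(float(2**53 + 1) == 2.0**53)   # True: the cast lost precision
print(mixed_eq(2**53 + 1, 2.0**53))  # False: the exact compare catches it
```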

gopalv 6 hours ago

> For double/bigint joins that leads to observable differences between joins and plain comparisons, which is very bad.

This was one of the bigger hidden performance issues when I was working on Hive - the default coercion goes to Double, which has a bad hash code implementation [1] that causes joins to cluster and chain, so every miss on the hashtable had to probe that many slots away from the original index.

The hashCode itself was smeared to make values within machine epsilon hash to the same bucket so that .equals could do its join, but all of this really messed up the folks who needed 22-digit numeric keys (eventually the Decimal implementation handled it by adding a big fixed integer).
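The clustering is easy to reproduce: Java's `Double.hashCode` XORs the high and low 32 bits of the IEEE-754 bit pattern, and for small whole-number doubles the low bits of the result are all zero, so a power-of-two hashtable masks every key into the same slot. A rough Python re-creation of that hash (not Hive's actual smeared version):

```python
import struct

def java_double_hash(x: float) -> int:
    """Approximation of Java's Double.hashCode: XOR of the high and
    low 32 bits of the raw IEEE-754 representation of x."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    return (bits ^ (bits >> 32)) & 0xFFFFFFFF

# Small whole numbers have zero low mantissa bits, so masking with a
# power-of-two table size collapses all of them into one bucket.
table_size = 1024
slots = {java_double_hash(float(k)) & (table_size - 1) for k in range(1, 100)}
print(slots)  # every key lands in the same slot
```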

Databases and Double join keys: they were one of the red flags in a SQL query, mostly because if you see them, someone messed something up.

[1] - https://issues.apache.org/jira/browse/HADOOP-12217

zokier 4 hours ago

One simple solution would be to convert both operands to an 80- or 128-bit float, which should avoid any precision loss, and compare those?
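That works as long as the wide format really covers the integer side (x87's 80-bit format has a 64-bit significand; IEEE binary128 has 113 bits). A portable way to sketch the same "widen, then compare exactly" idea in Python, using exact rationals instead of a wider float:

```python
from fractions import Fraction

def exact_eq(a, b) -> bool:
    """Widen both operands to an exact rational before comparing;
    every finite double and every (big)int converts losslessly."""
    return Fraction(a) == Fraction(b)

big = 2**63 - 1              # needs 63 significand bits; a double has 53
lossy = float(big)           # rounds up to 2.0**63
print(lossy == float(big))   # True: a double-only compare can't tell them apart
print(exact_eq(big, lossy))  # False: nothing was lost in the widening
```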

millipede 7 hours ago

Both ints and floats represent real, rational values, but their operations match the underlying math in almost no way. Associative? No. Commutative? No. Partially ordered? No. Weakly ordered? No. Symmetric? No. Reflexive? No. Antisymmetric? No. Nothing.

The only reasonable way to compare rationals is the decimal expansion of the string.
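Two of those failures are quick to check in any IEEE-754 language; in Python, for instance:

```python
# Addition is not associative: rounding happens at each intermediate step.
lhs = (0.1 + 0.2) + 0.3
rhs = 0.1 + (0.2 + 0.3)
print(lhs == rhs)   # False
print(lhs, rhs)     # 0.6000000000000001 0.6

# Equality is not reflexive: NaN compares unequal to itself.
nan = float("nan")
print(nan == nan)   # False
```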

  • tadfisher 6 hours ago

    > The only reasonable way to compare rationals is the decimal expansion of the string.

    Careful, someone is liable to throw this in an LLM prompt and get back code expanding the ASCII characters for string values like "1/346".

  • layer8 4 hours ago

    It’s not straightforward to compare numerical ordering using the decimal expansion.
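For example, plain lexicographic comparison of decimal strings gets the ordering wrong as soon as the integer parts differ in length:

```python
# Lexicographic string order disagrees with numeric order.
print("10.1" < "9.9")   # True: '1' sorts before '9'
print(10.1 < 9.9)       # False numerically
# You'd need to pad/align the integer parts (and handle signs) first.
```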

pestatije 7 days ago

or you could learn how to do comparisons with floating point numbers

  • stronglikedan 7 hours ago

    like multiplying them by the precision that you'd like to compare and comparing them as integers? /s

    • thaumasiotes 4 hours ago

That won't work; truncated to integers, 100.02 and 99.997 are unequal, but 1.0002 and 0.99997 are equal at 0.01 precision. (And indeed also equal at 0.001 precision!) You'd need to round.

      I had the impression that the usual way to compare floats is to define a precision and check for -p < (a - b) < p. In this case 0.99997 - 1.0002 = -0.00023, which correctly tells us that the two numbers are equal at 0.001 precision and unequal at 0.0001.
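That tolerance check is a one-liner, and the standard library ships a version of it in `math.isclose`:

```python
import math

def approx_eq(a: float, b: float, p: float) -> bool:
    """Absolute-tolerance comparison: -p < (a - b) < p."""
    return abs(a - b) < p

print(approx_eq(1.0002, 0.99997, 0.001))   # True:  |diff| = 0.00023 < 0.001
print(approx_eq(1.0002, 0.99997, 0.0001))  # False: 0.00023 >= 0.0001

# math.isclose defaults to a *relative* tolerance; pass abs_tol (and
# zero out rel_tol) to get the fixed-precision behavior described above.
print(math.isclose(1.0002, 0.99997, rel_tol=0.0, abs_tol=0.001))  # True
```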

      • wiml 3 hours ago

Rounding won't work either, at least if you're trying to find a way to do a hash join on float-comparison-within-epsilon. You would need a function f such that |a-b| < p implies f(a) = f(b), and there is none, except the useless trivial (constant) one.

        You can do it if you produce two hash values for each key (and clean up your duplicates later), but not if you produce only one.
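A sketch of that two-hash-value trick, under the assumption that the predicate is |a - b| < p (names hypothetical): insert each build-side value into grid cells floor(a/p) and floor(a/p)+1, probe the same pair of cells for each probe-side value, then filter with the exact predicate and deduplicate.

```python
import math
from collections import defaultdict

def eps_hash_join(left, right, p):
    """Hash join on |a - b| < p using two buckets per key (a sketch).
    Values within p of each other differ by at most one grid cell, so
    build-inserting into (cell, cell+1) and probing (cell, cell+1)
    guarantees every matching pair meets in at least one bucket."""
    buckets = defaultdict(list)
    for a in left:
        cell = math.floor(a / p)
        buckets[cell].append(a)
        buckets[cell + 1].append(a)
    out = set()                     # set-dedupe: a pair can meet twice
    for b in right:
        cell = math.floor(b / p)
        for c in (cell, cell + 1):
            for a in buckets[c]:
                if abs(a - b) < p:  # exact predicate filters candidates
                    out.add((a, b))
    return out

print(eps_hash_join([1.0002], [0.99997], 0.001))  # {(1.0002, 0.99997)}
```

Note that the within-epsilon relation isn't transitive, so unlike a real equi-join the output doesn't partition keys into equivalence classes.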

Of course, most of the time, if you are doing equality comparisons on floats you have a fundamental conceptual problem with your code.