Walking around the compiler

(bernsteinbear.com)

52 points | by chunkles 5 days ago

11 comments

  • mtklein 3 days ago

    I don't understand why this article invents and explains a phony ranged-float fix when the real fix from the footnotes would have been just as simple to explain. The deception needlessly undermines the main point of the article, which I completely agree with.

    • tekknolagi 3 days ago

      The real fix felt more complicated when I drafted this. Seems like it isn't; I'll think about updating the post

    • stassats 3 days ago

      That fix has limited applicability. x * x is also a non-negative float. But abs(x * x) is not optimized. Or abs(abs(x)+1). GCC, for example, does know that.
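      A toy sketch of that limitation, using made-up helper names (not PyPy's actual IR): if the optimizer also attached a non-negative range to the result of multiplying a value by itself, the same "abs of a non-negative value is a no-op" rule would cover abs(x * x) too.

```python
# Illustrative only: hypothetical range helpers, not PyPy's IR.
# The point: a range annotation on abs alone doesn't help abs(x * x);
# the optimizer would also need to know x * x is non-negative.

def range_of_mul_self():
    """Range of x * x for any float x: non-negative (NaN aside)."""
    return (0.0, None)  # (low, high); None means unbounded above

def abs_is_noop(rng):
    """abs folds away when the operand's low bound is already >= 0."""
    low, _high = rng
    return low is not None and low >= 0

assert abs_is_noop(range_of_mul_self())  # abs(x * x) could fold
assert not abs_is_noop((None, None))     # abs(x) alone cannot
```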

  • Someone 3 days ago ago

    I know next to nothing about PyPy internals, but

      case float_abs(_):
      -          return float
      +          return float.with_range(low=0, high=None)
    
    to me, looks like a risky change. I would fear this could introduce a bug when computing

      isNaN(abs(NaN))
    
    That should return true, but with that change, I fear it could return false because it informs the optimizer that abs never returns a NaN.

    • stassats 3 days ago

      That shouldn't mean it never returns a NaN. Things are generally not optimized away because of NaNs. E.g. in GCC, abs(c) > -1.0 is not folded unless building with -ffast-math.

    • oasisaimlessly 3 days ago

      Where'd you get that patch from? I can't find it in the blog post.

      EDIT: Ah, other comment mentions article was edited.

  • etyp 3 days ago

    Random note since Godbolt was mentioned: It's also fun to hop on play.rust-lang.org and see what the different IRs look like via the "..." next to "RUN." Just look at how simple the HIR is for "Hello world" - then check out the MIR ;)

  • zabzonk 3 days ago

    > surely the optimizer can reason that the float_abs operation produces a positive number!

    how?

    • pm215 3 days ago

      By having knowledge baked into it about its properties (which it can validly do because the behaviour of the float_abs operation is specified by the language; it's not calling an arbitrary external function). The blog post sketches out one approach to this: as the optimizer is working on a tree of expressions, it has attached to the nodes extra information about the values in it (e.g. "this is a constant X" or "this value is definitely in the range A..B"); then as it simplifies the tree, it can say "abs of a value that's already definitely not negative is a no-op, don't do anything", in the same way it can say "multiplying X by 1 is a no-op".
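      The range-annotation idea above can be sketched in a few lines. Everything here is made up for illustration (the class and method names are not PyPy's actual IR): each value node carries a known range, and the abs simplification fires only when the operand is already known non-negative.

```python
# Minimal sketch of range-annotated IR values, assuming invented
# names (Range, Value, simplify_abs) -- not PyPy's real internals.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Range:
    low: Optional[float]   # None means unbounded below
    high: Optional[float]  # None means unbounded above

@dataclass
class Value:
    name: str
    range: Range

def simplify_abs(operand: Value) -> Optional[Value]:
    """abs of a value that's already definitely not negative is a no-op."""
    if operand.range.low is not None and operand.range.low >= 0:
        return operand  # reuse the operand; the abs op disappears
    return None         # can't simplify; keep the abs op

x = Value("x", Range(low=None, high=None))      # any float
abs_x = Value("abs_x", Range(low=0, high=None)) # result of float_abs(x)

assert simplify_abs(x) is None       # abs(x): must stay
assert simplify_abs(abs_x) is abs_x  # abs(abs(x)): folds away
```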

      The actual bug-fix linked to in the footnote does it in a slightly different way: it says "as I'm walking through optimizing my tree of expressions, if I see 'abs(abs(X))' then simplify the tree to just 'abs(X)'".
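      That pattern-matching version is even shorter to sketch. Again the node types here are invented for illustration, not PyPy's IR: walk the tree, and whenever abs is applied directly to another abs, drop the outer one.

```python
# Toy peephole rewrite, assuming invented node types (Var, Abs):
# while simplifying the tree, collapse abs(abs(X)) to abs(X).
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Abs:
    operand: object

def simplify(node):
    if isinstance(node, Abs):
        inner = simplify(node.operand)
        if isinstance(inner, Abs):
            return inner        # abs(abs(X)) -> abs(X)
        return Abs(inner)
    return node

assert simplify(Abs(Abs(Var("x")))) == Abs(Var("x"))
assert simplify(Abs(Var("x"))) == Abs(Var("x"))  # single abs kept
```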

      • zabzonk 3 days ago

        > the behaviour of the float_abs operation is specified by the language

        Which language? Where? I just googled for float_abs and got nothing.

        • pm215 3 days ago

          PyPy intermediate representation. I assume this is what PyPy turns Python "abs(x)" into in codepaths where it knows x is a float.