Currently the result of certain floating-point casts is Undefined (as in, can cause Undefined Behaviour):
- https://github.com/rust-lang/rust/issues/15536: `f64 as f32` will produce an Undefined result if the input cannot be represented by the output. From discussion on the #llvm IRC, my understanding is that this generally means the input is finite but exceeds the minimum or maximum finite value of the output type, e.g. `1e300f64 as f32`.
- https://github.com/rust-lang/rust/issues/10184: `f* as i*`/`f* as u*` will produce an Undefined result if the input cannot be represented by the output when rounded towards zero to the nearest integer (signed or unsigned as appropriate), e.g. `1e10f32 as u8`. Note that e.g. `-0.5f32 as u8` is defined as `0`, since it rounds towards zero to `0`.
This is an annoying wart on Rust’s current implementation, and we should fix it. Note that, at least on x86_64 Linux, the example `f64 as f32` cast just produces `inf` (which is pretty reasonable IMHO), while the `f32` to `u8` example seems to produce completely random results (I’m not sure whether actual undefs are being produced, but that seems believable).
I’m happy with these “nonsense” casts having unspecified behaviour so that we can e.g. inherit whatever the platform decides to do, as long as it doesn’t violate memory safety like the current design can. A solution that doesn’t add overhead seems ideal to me. Having to specify that e.g. `1000.0 as u8 == u8::MAX` may be too cumbersome, although note that this has a complex interaction with cross-compilation and const evaluation.
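For concreteness, if we did pin down e.g. saturating semantics along the lines of `1000.0 as u8 == u8::MAX`, they could be sketched roughly as follows (the helper and the NaN-to-zero choice are illustrative assumptions, not a concrete proposal):

```rust
/// Hypothetical saturating f64 -> u8 conversion: clamp out-of-range
/// inputs to the nearest representable bound instead of being UB.
fn saturating_f64_to_u8(x: f64) -> u8 {
    if x.is_nan() {
        0 // NaN has no obvious integer value; 0 is one possible choice
    } else if x < 0.0 {
        0 // saturate at the lower bound
    } else if x > u8::MAX as f64 {
        u8::MAX // saturate at the upper bound, e.g. 1000.0 -> 255
    } else {
        x as u8 // in range, so the cast is well-defined
    }
}

fn main() {
    assert_eq!(saturating_f64_to_u8(1000.0), u8::MAX);
    assert_eq!(saturating_f64_to_u8(-10.0), 0);
    assert_eq!(saturating_f64_to_u8(42.9), 42);
    assert_eq!(saturating_f64_to_u8(f64::NAN), 0);
}
```

The cross-compilation and const-evaluation concern is visible here too: whatever rule we pick has to produce the same answer at compile time as on every target.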
I lack the requisite familiarity with LLVM to know what the best way forward is, though. I’d also be interested to hear if there are use cases for these casts having specified behaviour.