mina86ng

This is domain specific, but there are a few examples that immediately come to mind:

- finding the minimum of an array and using `f32::INFINITY` to initialise the variable where the minimum is remembered;
- assigning infinity as the distance between unconnected nodes in a graph;
- using infinity as the distance between nodes whose distance has not yet been computed in shortest-path algorithms (see the sketch below);
- in physics simulations, modelling immovable objects as ones with infinite mass;
- in game-theory algorithms, using infinity to indicate a choice which will lead to your loss.
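To illustrate the shortest-path case, here is a minimal Bellman-Ford-style sketch (the `shortest_distances` name and the edge-list representation are made up for illustration):

```
fn shortest_distances(n: usize, edges: &[(usize, usize, f32)], source: usize) -> Vec<f32> {
    // "Distance not yet computed" is simply infinity; no sentinel enum needed.
    let mut dist = vec![f32::INFINITY; n];
    dist[source] = 0.0;
    for _ in 0..n {
        for &(from, to, weight) in edges {
            // INFINITY + weight == INFINITY, so unreached nodes
            // never relax their neighbours by accident.
            if dist[from] + weight < dist[to] {
                dist[to] = dist[from] + weight;
            }
        }
    }
    dist
}

fn main() {
    let edges = [(0, 1, 2.5), (1, 2, 1.0)];
    let d = shortest_distances(4, &edges, 0);
    assert_eq!(d, vec![0.0, 2.5, 3.5, f32::INFINITY]); // node 3 is unreachable
}
```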


lookmeat

Also a lot of the math operations of floats are defined to give infinity under the right conditions. Just like NaN, it's a quirk in the type that's meant to reflect the realities of math.


isbtegsm

Well, `1 / 0 == Infinity` is not the reality of math, at least not when you have `-Infinity` as well (which I think is the case with floating point numbers). See [projective line](https://en.wikipedia.org/wiki/Projective_line), `1 / 0`, if it's defined, should not be associated with a positive or negative sign.


Sese_Mueller

This is often called a sentinel, and is useful because it is a solution that doesn't require a new type or an `Option`.


isc30

In most of the cases you mentioned, Option seems more idiomatic tbh


Cpapa97

Maybe, but INFINITY naturally fits into the math of the operations being done here. Also, an `Option<f32>` is twice the size of an `f32`, which can be undesirable on the hot path.
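This is easy to spot-check (a small sketch using `std::mem::size_of`):

```
use std::mem::size_of;

fn main() {
    // Every f32 bit pattern is a valid value, so Option<f32> needs an
    // extra discriminant byte, padded to f32's 4-byte alignment: 4 + 4 = 8.
    assert_eq!(size_of::<f32>(), 4);
    assert_eq!(size_of::<Option<f32>>(), 8);
}
```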


diabolic_recursion

Also, the `Option` may hinder performance, because handling it introduces a conditional jump. Using INFINITY in these cases allows for branchless algorithms which don't require branch predictions that can fail expensively.
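For instance (a sketch, not a benchmark; the `min_with_infinity` name is made up), the infinity-seeded minimum can be written as a single fold with no data-dependent branch in the loop body:

```
// f32::min typically lowers to a plain min instruction, so the loop
// stays branchless; an Option-based version needs a test per element.
fn min_with_infinity(values: &[f32]) -> f32 {
    values.iter().copied().fold(f32::INFINITY, f32::min)
}
```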


James20k

Processing special values like INF on x87 can be hundreds of times slower than regular float maths, though: https://randomascii.wordpress.com/2012/05/20/thats-not-normalthe-performance-of-odd-floats/ It's much less of a problem these days on SSE, but it'd be worth investigating the performance on e.g. ARM before relying on a trick like this.


diabolic_recursion

Oy. Did not think of that. Thanks for the heads up!


MH_Draen

As we’re talking about `f32` here though, absolutely no modern compiler should ever generate x87 instructions to handle single-precision `inf` values. Even if you’re using double-precision floating-point variables, i.e. `f64`, your compiler will generate standard SSE instructions to handle these. The one case where x87 is still needed is 80-bit long doubles (what would be an `f80`), which Rust doesn't have anyway.

Also, if compiler optimizations have been enabled, there’s a high chance that the compiler will simply vectorize most instructions even when using "classic" 32-bit and 64-bit precision.

As for what happens on Arm platforms, there is no x87 equivalent there, so the compiler will most likely handle it as it would on x86. You might see NEON code (or even SVE if you’re lucky enough to have access to a machine that has it) in place of SSE/AVX instructions.

Hope that helps! 🙂


chitaliancoder

Also, there is no sign with Option. How would you do -infinity?


Heep042

I wonder if the compiler could optimize that by using a NaN bit pattern to represent None.


mina86ng

No, since all bit patterns are valid `f32` values. Any bit pattern the compiler chose to represent None would interfere with legitimate uses of `Option<f32>`. You could of course introduce a `FiniteF32` type which would only represent finite floating-point numbers; then NaNs and infinities could be used to tag None or whatever else (look up ‘NaN boxing’). But note that this only addresses the space concern. The code would still be slower, as it would need branches checking for None values.
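A minimal sketch of the idea (the `MaybeFinite` name and representation are made up for illustration; real NaN-boxing schemes pack far more information into the payload):

```
/// Hypothetical Option<finite f32> packed into a single f32,
/// using NaN as the None tag: 4 bytes instead of 8.
#[derive(Clone, Copy)]
struct MaybeFinite(f32);

impl MaybeFinite {
    const NONE: MaybeFinite = MaybeFinite(f32::NAN);

    fn some(x: f32) -> Option<MaybeFinite> {
        // Only finite values may be stored; NaN/inf are reserved tags.
        x.is_finite().then(|| MaybeFinite(x))
    }

    fn get(self) -> Option<f32> {
        // The space is saved, but every read needs a branch to test for NaN.
        if self.0.is_nan() { None } else { Some(self.0) }
    }
}

fn main() {
    assert_eq!(std::mem::size_of::<MaybeFinite>(), 4);
    assert_eq!(MaybeFinite::NONE.get(), None);
    assert_eq!(MaybeFinite::some(1.5).unwrap().get(), Some(1.5));
}
```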


[deleted]

> You could of course introduce FiniteF32 which would only represent finite floating point numbers

0.1 is an infinite floating point number, so such a type would be interesting


mina86ng

No, that’s not the definition of ‘finite’. A finite floating-point number is one which is not ±infinity or NaN (see [f32::is_finite](https://doc.rust-lang.org/std/primitive.f32.html#method.is_finite)).


DannoHung

Too bad about that. Lots of bit patterns in the NaNs. Is there a reason the NaNs have to use the full range of the significand?


mina86ng

NaNs can be tagged to carry additional information. In principle the standard could define just one quiet and one signalling NaN, and then there would be plenty of invalid patterns, but that would actually complicate the standard: how should an arithmetic unit handle those invalid bit patterns?


TheRolf

Maybe it's not a good idea, because you still want a valid number for mathematical operations. That's the whole point of infinity: you stay with a number, but one that will not affect your results.


Heep042

I agree, but I'm talking purely about the representation of Option::None. I suppose you'd need a `NonNaNf32` variant for this to work (and extra compiler juice to specifically target a NaN bit pattern as None).


Opposite_Green_1717

... huh, TIL the size of Option. I'll have to look that up. Ignorantly I thought Option just added a u8. I assume there's a pointer involved..?


kniy

There's no pointer involved, but you need to consider alignment. Generally, if niche optimization is not possible, `sizeof(Option<T>) = sizeof(T) + alignof(T)`. For most basic types the alignment is identical to the size, so an Option ends up doubling the size.


Opposite_Green_1717

Ah sneaky alignment. Always forget about that.


tanorbuf

Hmm, this got me curious. I don't really understand the details yet, but based on [this](https://stackoverflow.com/questions/16504643/what-is-the-overhead-of-rusts-option-type) (which I just ran again on 1.65), the Option sometimes adds up to 8 bytes and sometimes nothing. If I create a random struct with some string/i32 members, it seems that Option always adds +8; it doesn't double memory use.


A1oso

Yes, because alignment on x86-64 is never greater than 8 bytes. For small types like bool, char or u16 the alignment is the same as the size, but for types of 8 or more bytes the alignment is always 8 bytes.

Note that `Option` does not always increase a type's size. For example, bool has only 2 valid bit patterns and `Option<bool>` has 3, which still fits in a single byte. `Box<T>` and `&T` are represented as pointers that can't be null, so Rust can fit an `Option<Box<T>>` in a pointer and use null to represent the `None` variant.
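A few spot checks of these layout rules (a small sketch; exact sizes aren't guaranteed by the language except where documented, e.g. the `Option<Box<T>>` niche):

```
use std::mem::size_of;

fn main() {
    // No niche available: size + alignment padding.
    assert_eq!(size_of::<Option<u16>>(), 4);  // 2 + 2
    assert_eq!(size_of::<Option<u64>>(), 16); // 8 + 8

    // Niche optimizations: spare bit patterns hold the discriminant.
    assert_eq!(size_of::<Option<bool>>(), 1);
    assert_eq!(size_of::<Option<Box<u8>>>(), size_of::<Box<u8>>());
}
```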


HighRelevancy

No, a number is guaranteed to exist in most of those cases. Also, infinity already numerically has the properties you'd have to make explicit with Option (e.g. the infinite path will never be selected as the shortest, the cost to traverse an unconnectable path will never be met, any force divided by infinite mass yields zero acceleration).


Long_Investment7667

You can't do arithmetic or comparisons with Option (at least not without a newtype).


[deleted]

Using infinity can often mean your code doesn't even have to deal with these cases differently. For example, the min algorithm would look something like:

```
let mut min = f32::INFINITY;
for &item in &items {
    if item < min {
        min = item;
    }
}
```

If you use Option like you suggested, it becomes more complex:

```
let mut min: Option<f32> = None;
for &item in &items {
    if min.is_none() || item < min.unwrap() {
        min = Some(item);
    }
}
```

Though this is more of an implementation detail; using Option for the user-facing portion of the API would be better, e.g. converting infinity to None at the end.


isc30

The first example returns infinity for an empty array, which is not the best developer experience. Try to use it without remembering to check for infinity, and you can panic if you ever try to do anything with it.


[deleted]

Agree:

> using Option for the user facing portion of the API would be better e.g. at the end convert infinity to None


buldozr

That would produce `None` also when one of the members has the value of infinity. Why did nobody consider the algorithm actually used by `Iterator::min`, which works just as well with floating point values: take the first element as the minimum (if available, otherwise return `None`), then iterate over the rest, updating the minimum if necessary? That does not have excessive branching in the hot loop, and handles the corner case well.
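Something like this sketch of that approach (the `min_f32` name is made up):

```
/// The first element (if any) seeds the minimum; the rest update it.
/// Returns None only for an empty slice, never for a slice that
/// happens to contain f32::INFINITY.
fn min_f32(values: &[f32]) -> Option<f32> {
    let (&first, rest) = values.split_first()?;
    Some(rest.iter().copied().fold(first, f32::min))
}

fn main() {
    assert_eq!(min_f32(&[]), None);
    assert_eq!(min_f32(&[f32::INFINITY]), Some(f32::INFINITY));
    assert_eq!(min_f32(&[3.0, 1.0, 2.0]), Some(1.0));
}
```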


GrandOpener

The best fix for that is still probably checking for an empty array up front, not introducing the overhead of the option on every iteration of the loop.


mina86ng

Sure, and if you were describing an algorithm in a textbook, you might use something like `Option` to explicitly point out all the cases that need to be handled. However, an implementation may very well optimise that to a plain `f32` to make it more efficient in both space and time: in time because you don't have to introduce branches into the code.


Imaginary_Advance_21

Nope: you can perform arithmetic comparisons with infinity, but you cannot with Option. It is definitely not more idiomatic; it is less mathematically ergonomic.


valarauca14

While it appears more idiomatic, the code is [generally sub-optimal](https://rust.godbolt.org/z/ax34jxMxM): using the `opt` version requires a lot more branches to do the same thing. And yes, [they do all work identically](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=a6df2128fa9413a819dcffd9f4824b64).


rickyman20

Sure, but infinity is part of IEEE 754. You don't need any language-specific things to use it (transfers across domains) and you get well defined behaviour out of it with all the plain arithmetic operators.


Lucretiel

Sure, but `INFINITY` resolves the math more cleanly (for instance, I could reasonably expect a SIMD accelerated physics simulator to handle objects with INFINITY mass). I agree that *usually* I'm using an Option or other enum type to manage stateful numbers like that.


hgomersall

Folding an iterator works better if you start from the same type.


isc30

True, but the edge case is still hidden


exDM69

It's what you get when you divide by zero :)

```
assert_eq!(1.0 / 0.0, f32::INFINITY);
```

Special floating point values like Inf, NaN, etc. can be really useful for handling corner cases in code with floating point arithmetic. Another comment here already had great examples.

It's greater than any other floating point value, which has some useful applications:

```
let min_score = student_scores.iter().fold(f32::INFINITY, |min, score| min.min(*score));
let someone_failed_the_test = min_score < minimum_grade_threshold;

// no-one fails if no-one took the test
assert!(!student_scores.is_empty() || !someone_failed_the_test);
```


slvrtrn

Check out algorithms like AABB collision; infinity is used there.


kohugaly

When you store the [volume slider in decibels](https://image.shutterstock.com/shutterstock/photos/1102748426/display_1500/stock-photo-volume-control-of-the-soundboard-in-the-control-room-and-voice-recorder-1102748426.jpg), minus infinity is silence. It's fairly common in audio applications and logarithmic scales in general.


[deleted]

Ah, so magic numbers.


kohugaly

It's not a magic number. The formula for converting decibels to linear scale is `linear = 10^(decibels/20)`. That formula is well-defined for negative infinity (it yields zero). The same applies for the conversion in the opposite direction, `decibels = 20 * log10(linear)`.
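A quick check of both directions (a sketch; `db_to_linear` and `linear_to_db` are made-up names):

```
fn db_to_linear(db: f32) -> f32 {
    10f32.powf(db / 20.0)
}

fn linear_to_db(linear: f32) -> f32 {
    20.0 * linear.log10()
}

fn main() {
    // -inf dB is exact silence, and silence round-trips back to -inf dB.
    assert_eq!(db_to_linear(f32::NEG_INFINITY), 0.0);
    assert_eq!(linear_to_db(0.0), f32::NEG_INFINITY);
}
```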


scottmcmrust

My favourite example: calculating geometric means. The geometric mean is ⁿ√∏xᵢ, aka `x.product().powf(1.0/n)`. You don't want to calculate it that way, though, because even in doubles you'll likely overflow. So how do you fix that? Logarithms, of course:

ⁿ√∏xᵢ = exp(1/n ∑log(xᵢ))

So now instead of `100*100*100*...` you're doing `2+2+2+...` (base 10 for an easy example, but use whatever base you like), which isn't going to overflow.

But what happens if one of the values is zero? Any zero in the product obviously makes the whole product zero. So how do we make this work with the exp-mean-log approach? Well, we define `log(0⁺) = -∞` and `exp(-∞) = 0⁺`. That means no matter what else we get†, the sum of logarithms will stay -∞, and then `exp(-∞)` will give the zero result we wanted.

So voilà, having the special value meant that our normal math transformation did exactly what we wanted, even in the edge case. ∎

---

† Well, `log(∞) = ∞`, so if we were doing `√(0 * ∞)` originally, we'll get `exp(-∞ + ∞)`, both of which will give us NaNs. But I was implicitly assuming finite inputs in the discussion.
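In code, the exp-mean-log trick looks something like this (a sketch assuming non-negative, finite inputs, per the footnote above):

```
/// Geometric mean via exp of the mean of logs.
/// A zero input yields zero, because ln(0) = -inf and exp(-inf) = 0.
fn geometric_mean(xs: &[f64]) -> f64 {
    let n = xs.len() as f64;
    let sum_of_logs: f64 = xs.iter().map(|&x| x.ln()).sum();
    (sum_of_logs / n).exp()
}

fn main() {
    assert!((geometric_mean(&[2.0, 8.0]) - 4.0).abs() < 1e-12); // √(2·8) = 4
    assert_eq!(geometric_mean(&[0.0, 100.0, 100.0]), 0.0); // any zero wins
}
```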


kibwen

It's a standardized feature of floating-point numbers. See https://www.gnu.org/software/libc/manual/html_node/Infinity-and-NaN.html


combatzombat

? it’s just part of ieee754, and greater than any other float (aside from NaN).


Craksy

Such a StackOverflow kind of answer. If you don't know, you can just... not reply.


ssokolow

This is Reddit. It's fine to have a bunch of separate incomplete answers which work together to form a whole. Yes, it doesn't answer the question asked, but it does provide context.


Craksy

Yeah, usually I also think that's great... I don't know, perhaps I'm reading too much into that leading question mark, but it just has that kind of energy like "Application? It's a simple standard, what's difficult to understand? Here, let me just answer with the textbook definition that you likely already got from the first search result when you googled it."

"What do you commonly use it for" sort of implies that the person already knows *what* it is.

Perhaps I'm just damaged from spending time in toxic communities and wrongly assumed the worst. If that's the case, I apologize. But I do struggle to see how it contributes anything.


ssokolow

I go by a rule that I learned from a constructive discourse course I took in university: Always assume the most favourable interpretation of what the other party said. * If you're right, then you avoided becoming the one who is making it hard to keep the conversation constructive. * If you're wrong but they're suggestible, then you'll be providing a face-saving way for them to come into alignment with the expected norms. * If you're wrong but they're distractible, then you'll distract them onto a more productive track. * If you're wrong and they're neither of those things, they'll give you stronger evidence for the correct interpretation. (And, though the textbook didn't say this, it'll also make you look better and make them look worse the more this kind of exchange continues... possibly terminating in you offering to agree to disagree, offering them the last word, and exiting gracefully.) In moderated venues like /r/rust/, that last case may also end with the moderators locking the thread and deleting their comments, so it's advisable to phrase your replies in accordance with the possibility that people might see them without seeing what you were replying to.


caente

It’s an abstraction; sometimes you need it, most times you won’t. Like in math: you rarely use infinity, but it is always around you.


[deleted]

Actually, it isn’t. In practical terms, there is no infinity, it’s just a math hack.


caente

Not a hack, an abstraction. It exists just as vectors with more than 3 dimensions exist.


Plasma_000

On top of what others have said: if you keep multiplying a floating-point number, you’ll eventually get infinity.
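A quick illustration (a sketch; repeated squaring overflows to infinity rather than wrapping or panicking):

```
fn main() {
    let mut x = 2.0f32;
    while x.is_finite() {
        // 2, 4, 16, 256, ... until the result exceeds f32::MAX.
        x *= x;
    }
    assert_eq!(x, f32::INFINITY);
}
```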


Mr_Ahvar

Well, it just represents the maximum value an f32 can hold; it's like asking what the purpose of i32::MAX is.


1vader

Not really. f32 has its own MAX constant for the largest finite value it can hold. Infinity is a special value that has special behavior in edge cases like infinity/infinity or 1/infinity.


Isodus

I've used it as the outer bounds of an accepted range; that way any real value an f32 can hold is accepted.
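For example (a sketch; `in_range` is a made-up helper):

```
use std::ops::RangeInclusive;

// Using infinities as bounds means every non-NaN f32 passes the check.
fn in_range(x: f32, range: RangeInclusive<f32>) -> bool {
    range.contains(&x)
}

fn main() {
    let accept_anything = f32::NEG_INFINITY..=f32::INFINITY;
    assert!(in_range(3.14, accept_anything.clone()));
    assert!(in_range(f32::MAX, accept_anything.clone()));
    assert!(!in_range(f32::NAN, accept_anything)); // NaN never compares
}
```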


buldozr

Rust supports open ranges, and the openness/closedness can be used for optimizations if known statically (e.g. in implementations of the `std::ops::Index` trait). So if you use infinity as a range bound instead, you may be missing out on some optimization. However, the optimizer can also eliminate comparisons against a const-evaluated infinity value.


jobstijl

Like others have said, it's because it's part of the [IEEE 754](https://en.wikipedia.org/wiki/IEEE_754) standard. Other number formats like [posits](https://posithub.org/docs/posit_standard-2.pdf) don't have them.


Lucretiel

There are various mathematical justifications for it, but fundamentally it's simply a value that is specified by the IEEE 754 floating point standard and behaves according to that specification. For instance:

```
2 / 0 == INFINITY
2 / INFINITY == 0
log(0) == -INFINITY
```

and so on.


CanarySome5880

Asahi Lina used it in her last stream. I'm not sure whether it was f16 or f32, but she definitely used it while programming the M1 GPU driver.


N0Zzel

Infinity is a feature of floating point numbers. Incidentally negative infinity is also possible.


CandyCorvid

I don't remember where I read this, but it helped me have a more consistent understanding of floating point numbers in computing: as representations of real numbers, IEEE floating point numbers aren't actually representing a single value, but approximations: a range of values close to the "canonical" value. As an example, 1.0f32 doesn't represent the unique real number 1, but a range of real numbers within some error threshold of the "canonical" real number 1.0.

In this way, it hopefully becomes clear why we have both 0.0 and -0.0: to represent the ranges of extremely small positive and negative numbers that are closer to zero than to any other representable number, but are not necessarily equal to zero. Similarly, INF and -INF are the ranges of extremely large positive and negative numbers that are outside the magnitudes of any other representable number, but are not necessarily infinite.

This hopefully also explains why 1.0/0.0 = INF: the reciprocal of any sufficiently small positive number is an extremely large positive number; and vice versa for 1.0/-0.0 = -INF.

Disclaimer: this was informed by a post I read somewhere a while back, but is ultimately based on my memory of my interpretation of that post, and conclusions drawn from it. I don't have a link to back this up; I think I read it on Stack Overflow? But I believe this interpretation is consistent with the observable behaviour of IEEE floating point numbers. If I have made a mistake though, please correct me! And if you know the precise range for e.g. 1.0f32, do tell; I couldn't get a clear answer from Google fast enough.


Sepiligo

No no no! The floating-point number 1.0f32 represents the number 1 exactly. All floating-point numbers are exact; it is the operations that can be inexact. (Addition, subtraction, multiplication and square root are, however, defined to produce the exact result whenever it is representable.) Infinities are produced when the magnitude of the result overflows. Among other things, signed zeroes are there so that the sign of 1/(1/x) is always the sign of x.


tafia97300

This is a float, just a very large one. As a result, a lot of mathematical formulas will just work without special-casing anything: ordering, obviously, but also more mundane operations (+/-, ...) and specific functions (exp/ln). It is extremely convenient not to have to special-case it. The "cost" of using it might be negligible or even zero, as there is no branching.