
As I interpret it, MSDN's definition of numeric_limits::is_exact is almost always false:

[all] calculations done on [this] type are free of rounding errors.

And IBM's definition is almost always true (or a circular definition, depending on how you read it):

a type that has exact representations for all its values

What I'm certain of is that I could store a 2 in both a double and a long and they would both be represented exactly.

I could then divide them both by 10 and neither would hold the mathematical result exactly.

Given any numeric data type T, what is the correct way to define std::numeric_limits<T>::is_exact?

Edit: I've posted what I think is an accurate answer to this question from details supplied in many answers. This answer is not a contender for the bounty.

Drew Dormann
  • `2 / 10` *is* stored exactly (for `long`) because the result of integer division is an integer. – GManNickG Jan 07 '13 at 20:42
  • @GManNickG Yah, and 2.0 / 10.0 is also stored exactly (for a double) because the result of double division is a double. `2/10` is _not_ 0, except by special rules. – James Kanze Jan 07 '13 at 21:03
  • I would consider _exact_ a numeric type for which `x + y != x && x + y != y` when `x != 0 && y != 0` – K-ballo Jan 07 '13 at 22:25
  • It should have been ::is_exact_as_long_as_it_does_not_overflow_otherwise_could_trigger_undefined_behavior but the name was a bit too long – aka.nice Jan 07 '13 at 22:27
  • @aka.nice: Good luck, _unsigned_ types never overflow! – K-ballo Jan 07 '13 at 22:28
  • @K-ballo I once considered my account was at UINT_MAX-1000+1 but the bank kept considering it was -1000 despite the (low) risk of overflow... – aka.nice Jan 07 '13 at 22:37
  • @aka.nice: No, _unsigned_ types don't **ever** overflow, they wrap around. Their modulo arithmetic is mandated by the standard. – K-ballo Jan 07 '13 at 22:38
  • @K-ballo that's why they left signed int in, so that we could contemplate how perfect the standard could have been without. – aka.nice Jan 07 '13 at 23:00

6 Answers


The definition in the standard (see NPE's answer) isn't very exact, is it? Instead, it's circular and vague.

Given that the IEC floating point standard has a concept of "inexact" numbers (and an inexact exception when a computation yields an inexact number), I suspect that this is the origin of the name is_exact. Note that of the standard types, is_exact is false only for float, double, and long double.

The intent is to indicate whether the type exactly represents all of the numbers of the underlying mathematical type. For integral types, the underlying mathematical type is some finite subset of the integers. Since each integral type exactly represents every member of the subset of the integers it targets, is_exact is true for all of the integral types. For floating point types, the underlying mathematical type is some finite range subset of the real numbers. (An example of a finite range subset is "all real numbers between 0 and 1".) There's no way to represent even a finite range subset of the reals exactly; almost all of them are uncomputable. The IEC/IEEE format makes matters even worse: with that format, computers can't even represent a finite range subset of the rational numbers exactly (let alone a finite range subset of the computable numbers).

Perhaps a better name would have been is_complete.

Addendum
The numeric types defined by the language aren't the be-all and end-all of representations of "numbers". A fixed point representation is essentially the integers, so they too would be exact (no holes in the representation). Representing the rationals as a pair of standard integral types (e.g., int/int) would not be exact, but a class that represented the rationals as a Bignum pair would, at least theoretically, be "exact".

What about the reals? There's no way to represent the reals exactly because almost all of the reals are not computable. The best we could possibly do with computers is the computable numbers. That would require representing a number as some algorithm. While this might be useful theoretically, from a practical standpoint, it's not that useful at all.

Second Addendum
The place to start is with the standard. Both C++03 and C++11 define is_exact as being

True if the type uses an exact representation.

That is both vague and circular. It's meaningless. Not quite so meaningless is that integer types (char, short, int, long, etc.) are "exact" by fiat:

All integer types are exact, ...

What about other arithmetic types? The first thing to note is that the only other arithmetic types are the floating point types float, double, and long double (3.9.1/8):

There are three floating point types: float, double, and long double. ... The value representation of floating-point types is implementation-defined. Integral and floating types are collectively called arithmetic types.

The meaning of the floating point types in C++ is markedly murky. Compare with Fortran:

A real datum is a processor approximation to the value of a real number.

Compare with ISO/IEC 10967-1, Language independent arithmetic (which the C++ standards reference in footnotes, but never as a normative reference):

A floating point type F shall be a finite subset of ℝ.

C++, on the other hand, is silent with regard to what the floating point types are supposed to represent. As far as I can tell, an implementation could get away with making float a synonym for int, double a synonym for long, and long double a synonym for long long.

Once more from the standards on is_exact:

... but not all exact types are integer. For example, rational and fixed-exponent representations are exact but not integer.

This obviously doesn't apply to user-developed extensions for the simple reason that users are not allowed to define std::whatever<MyType>. Do that and you're invoking undefined behavior. This final clause can only pertain to implementations that

  • Define float, double, and long double in some peculiar way, or
  • Provide some non-standard rational or fixed point type as an arithmetic type and decide to provide a std::numeric_limits<non_standard_type> for these non-standard extensions.
David Hammen
  • Interesting. And possibly dead-on. Could you comment on rational and fixed-exponent being exact? Are `int/int` and `int*2^k` reasonable forms for an underlying mathematical type, while `int*2^int` is not? There seems to be an element of human interpretation in the definition, as if `isnt_intimidating` could have been just as appropriate. – Drew Dormann Jan 07 '13 at 22:57
  • The problem is: if you consider the underlying type of floating point reals, then you don't understand machine floating point. And of course, rationals suffer from the same problems as floating point, even if you consider the underlying type rational, and not real. – James Kanze Jan 07 '13 at 23:46
  • I'm not sure how to merge the standard's "rational representations are exact" with your "representing the rationals as a pair of standard integral types would not be exact". Am I misunderstanding the standard? – Drew Dormann Jan 16 '13 at 20:41
  • @DrewDormann - It depends on how one represents the rationals. A representation that uses a pair of fixed length integers (e.g., a pair of `int` numbers) will suffer the same problems as do `float` and `double`. On the other hand, a representation that uses a pair of arbitrary length integers won't suffer those problems, at least not theoretically. (In practice it will. Memory is finite.) – David Hammen Jan 16 '13 at 21:19
  • Re “The intent is to indicate whether the type exactly represents all of the numbers of the underlying mathematical type.” If this were the meaning, `is_exact` would always be true. Per IEEE-754 and ISO/IEC 10967-1, a floating-point value is a single number. It does not represent an interval or any sort of fuzziness. What `is_exact` could indicate is whether **operations** always return exact results. On the face of it, this must always be false, since integer 7/3 returns result rounded down to the nearest integer. So `is_exact` can have meaning only with some more complicated definition. – Eric Postpischil Jan 12 '18 at 14:13
  • I suspect the committee did not deliberate adequately over `is_exact` and has not given it a well defined meaning. To declare the integers to be exact, you have to define the `/` operator to be trunc(*x* / *y*) rather than the mathematical division *x* / *y*. But, if you allow that for the integers, then floating-point operations can be defined similarly: floating-point `x/y` is defined as the mathematical *x* / *y* rounded to the nearest representable value. So floating-point `x/y` is exact the same way integer `x/y` is exact. – Eric Postpischil Jan 12 '18 at 14:17

I suggest that is_exact is true iff all literals of that type have their exact value. So is_exact is false for the floating types because the value of literal 0.1 is not exactly 0.1.

Per Christian Rau's comment, we can instead define is_exact to be true when the results of the four arithmetic operations between any two values of the type are either out of range or can be represented exactly, using the definitions of the operations for that type (i.e., truncating integer division, unsigned wraparound). With this definition you can cavil that floating-point operations are defined to produce the nearest representable value. Don't :-)

Hyman Rosen
  • But then again this is a mere result of the differences in bases between us humans and the computer. So a decimal floating point type would be exact because we don't specify floating point literals in binary format? – Christian Rau Jan 08 '13 at 16:22
  • I think I understand. So `is_exact` **doesn't** describe any aspect of the number format itself, it describes C++'s own decisions on how to type literals in that format. Because C++ describes doubles in base 10 instead of, e.g. `m13e-6`, doubles are not exact. Is that your meaning? – Drew Dormann Jan 08 '13 at 18:21
  • @Christian Rau - Well, maybe :-) Alternatively we can define is_exact as true for those types where the results of the four arithmetic operations between any pair of values of the type is either out of range or can be represented exactly (with the proviso that integer division is defined as giving integer result and unsigned arithmetic wraps). That's probably a better definition. – Hyman Rosen Jan 08 '13 at 20:53

The problem of exactness is not restricted to C++, so let's look further afield.

Discussion about the wording of standards aside, inexactness has to apply to mathematical operations that require rounding in order to represent the result in the same type. Scheme, for example, defines exactness/inexactness in exactly this way, by means of exact operations and exact literal constants; see R5RS §6, Standard Procedures, at http://www.schemers.org/Documents/Standards/R5RS/HTML

For the case of double x = 0.1, we either consider 0.1 a well-defined double literal, or, as in Scheme, consider the literal an inexact constant formed by an inexact compile-time operation (rounding to the nearest double the result of the operation 1/10, which is well defined in ℚ). Either way, we always end up back at operations.

Let's concentrate on +; the others can be defined mathematically by means of + and the group properties.

A possible definition of inexactness could then be:

If there exists any pair of values (a,b) of a type such that a+b-a-b != 0,
then this type is inexact (in the sense that + operation is inexact).

For every floating point representation we know of (trivial cases of NaN and Inf apart), such a pair obviously exists, so we can say that float (operations) are inexact.

For a well-defined unsigned arithmetic model, + is exact.

For signed int, we have the problem of UB in case of overflow, so there is no guarantee of exactness... unless we refine the rule to cope with this broken arithmetic model:

If there exists any pair (a,b) such that (a+b) is well defined
and a+b-a-b != 0,
then the + operation is inexact.

The well-definedness condition above could help us extend this to other operations as well, but it's not really necessary. We would then have to treat the case of / as false polymorphism rather than inexactness (/ being defined, for int, as the quotient of Euclidean division).

Of course, this is not an official rule; the validity of this answer is limited to the merits of the reasoning above.

aka.nice

The definition given in the C++ standard seems fairly unambiguous:

static constexpr bool is_exact;

True if the type uses an exact representation. All integer types are exact, but not all exact types are integer. For example, rational and fixed-exponent representations are exact but not integer.

Meaningful for all specializations.

NPE
  • But what does an "exact representation" mean. All floating point values are an exact representation of the value they represent. And rational and fixed-exponent representations (not to mention integers) do not have an exact representation of pi. – James Kanze Jan 07 '13 at 21:05
  • @JamesKanze: I think the intention is to say every value between `min()` and `max()` can be represented exactly. – GManNickG Jan 07 '13 at 21:14
  • @JamesKanze - *All floating point values are an exact representation of the value they represent.* No, they're not. Each floating point value represents an interval on the real number line, not a specific number. One point in that interval will have an exact representation. All of the others won't, and getting one of those inexact values will raise the IEEE inexact exception if that exception is enabled. – David Hammen Jan 07 '13 at 21:26
  • `double` would be considered !`is_exact`, as would rationals of same. – Yakk - Adam Nevraumont Jan 07 '13 at 22:08
  • The standard's definition is circular, but, that's what it is. +1 for a standard-based answer. – David Hammen Jan 07 '13 at 22:42
  • @GManNickG In which case, no representation fits the bill. Pi is between `min()` and `max()` in all of the formats, and none can represent it exactly. – James Kanze Jan 07 '13 at 23:40
  • @DavidHammen Apparently, you don't understand machine floating point. Each floating point value represents a value exactly. It may not be the value you want, but hey, that's the way it is. Floating point arithmetic is _not_ interval arithmetic. – James Kanze Jan 07 '13 at 23:42
  • @Yakk That's what the standard seems to suggest, but there isn't any justification. If you take the words literally, they can be interpreted in two ways. In one case, all representations are exact, and in the other, none are. – James Kanze Jan 07 '13 at 23:43
  • @JamesKanze: Not for `long`. The domain for `long` is the integers. David's answer takes over my line of thought. – GManNickG Jan 07 '13 at 23:51
  • @JamesKanze - I understand machine floating point quite well. Your concept is one way to look at it, but it isn't the only way. Looking at the IEEE/IEC standard as representing intervals is essentially how Microsoft Excel looks at the things, hence their rounding. See http://stackoverflow.com/questions/6930786/how-does-excel-successfully-rounds-floating-numbers-even-though-they-are-impreci/7211688#7211688 . – David Hammen Jan 08 '13 at 00:31
  • The `int` types are representations of Z. They are perfect up to `max` and down to `min`. The `float` types are representations of R. They are each inexact. A rational type would be a representation of Q. It could easily do perfect math until it failed, and error out at that point. While formally describing the difference is challenging, the conceptual difference is not. A formal definition could be nice to have. – Yakk - Adam Nevraumont Jan 08 '13 at 03:24
  • @Yakk The float types are _not_ representations of R. One can also argue about whether `int` is a representation of Z, but as long as there is no overflow and no division, they are. Division is a problem, because Z isn't closed with regards to division: `1/3` is either `0.3333...` or a domain error. But in no case 0; 0 can only be considered a rounding error. – James Kanze Jan 08 '13 at 08:40
  • @DavidHammen I would hardly take Excel as a reference (but I seriously doubt that it does interval arithmetic). An IEEE float value is an exact value; operations on it obey exact rules, and also result in an exact value (which isn't necessarily the same as it would be in arithmetic over R). Interval arithmetic is something completely different. – James Kanze Jan 08 '13 at 08:43
  • @jameskanze so you disagree with the claim that `float` types are an approximation of R; that is a position to take. Why is your position interesting? I was demonstrating that things could make sense, not that they must. `/` in Z is not field division: you can view `/` and `%` as a pair of operators that express division in Z (exactly). And yes there is `min` and `max`: these are limitations on the model. – Yakk - Adam Nevraumont Jan 08 '13 at 12:06
  • Another way to look at it is whether operations are consistent. For int they are, I believe for float they are not. Compilers will sometimes do float computations with different precisions for the same code inlined into different locations -- e.g. in one place it may choose to use the x86 instruction taking a 64-bit memory operand but in another use a 128-bit register operand, so calling the same function from two different places with the same inputs can actually give different results. I think there are usually compiler options to force it to be consistent. – Joseph Garvin Jan 08 '13 at 15:55
  • @Yakk They are an "approximation" of R in the same sense that Q is an approximation of R. The problem is that they don't really behave like R: addition isn't associative, etc. Practically every problem I've seen in numeric processing boils down to the programmer supposing that machine floating point is R (or is a reasonably good approximation of R). – James Kanze Jan 08 '13 at 15:58
  • @JosephGarvin That is still another problem. Yes, on some processors, the results of `a + b` will depend on what you do with it. Or `a * b / c` may give a reasonably good approximation, where as `double x = a * b; x /= c;` returns infinity. – James Kanze Jan 08 '13 at 16:00
  • @JamesKanze you misunderstand: I'm not saying your interpretation is invalid. Pointing out why your interpretation is valid is not interesting: I know it is valid! I'm pointing out that there is more than one interpretation that is valid, as shocking as that is. Yours is valid. The one where `double` is an approximation of R is valid. The one where you think `double` is an accurate approximation of R is, as you have mentioned, invalid. Hence, `is_exact` being `false` corresponding to `double` is an approximation of R that is not exact (or all that accurate). – Yakk - Adam Nevraumont Jan 08 '13 at 16:05
  • @JosephGarvin Now that is a mere implementation limitation and doesn't really have anything to do with the conceptual discussion at all. – Christian Rau Jan 08 '13 at 16:19
  • @ChristianRau: I disagree, but maybe I'm not being clear enough. I think compilers that are doing what I described are technically standards conformant (at least I assume so since it's so common) whereas if they inconsistently handled the precision of integer operations they would obviously be noncomformant. So that might be the intent behind is_exact. – Joseph Garvin Jan 08 '13 at 18:13
  • `All integer types are exact, but not all exact types are integer.` LOL. Go home standard; ur drunk – Lightness Races in Orbit Jan 11 '13 at 18:19
  • @JamesKanze Let me backup David. Saying that floating point numbers are actually intervals does not imply that the arithmetic is that of intervals. Floating point numbers are intervals with arithmetic defined as done on some representative of the interval. For example, the IEEE 32-bit number 1 represents all the reals in the interval [1-ε/4,1+ε/2] where ε is the machine epsilon, yet the operations are defined as if they are done on the representative (1) and rounded to the representative of the interval the result falls into. And your interpretation of exactness is tautological, so has no use. – Yakov Galka Jan 16 '13 at 19:05
  • @ybungalobill: No, the IEEE 32-bit number 1 does not represent all the reals in that interval or any other. Clause 3 of IEEE 754-2008 describes what numbers are represented, and every floating-point datum that is not an infinity or a NaN represents a single number. The specific numbers represented are explicitly spelled out (according to parameters of the format, such as exponent range and significand width). – Eric Postpischil Jan 12 '18 at 14:24
  • @EricPostpischil: It does say that all real numbers in that interval are rounded to the floating point number `1`, and for the purpose of the standard they are all equal. That effectively defines an [equivalence relation](https://en.wikipedia.org/wiki/Equivalence_relation) in a purely mathematical sense, that interval an [equivalence class](https://en.wikipedia.org/wiki/Equivalence_class), and `1` a representative of its class given by the 'rounding' [section](https://en.wikipedia.org/wiki/Section_(category_theory)). – Yakov Galka Jan 12 '18 at 15:14
  • @EricPostpischil: My interpretation is of what are the emerged properties of the system described by the standard ended to be, not how some dude that wrote that standard thought of those things. If it said that blue is red it doesn't mean that we have to follow. – Yakov Galka Jan 12 '18 at 15:14
  • @ybungalobill: Then it might be correct to say “Some people use floating-point to represent intervals…” It is not correct to say floating-point numbers represent intervals, any more than it is correct to say that the C `+` operator performs multiplication or that the C integer `3` represents the interval (3.5, 4.5). And the “dude” that wrote the standard is a committee that includes the person who essentially designed modern floating-point arithmetic and many other very experienced and skilled people. – Eric Postpischil Jan 12 '18 at 16:16
  • @Eric: No. That floating point 1 is a representative of that interval is consistent with (in fact follows from) the standard. That operator `+` does multiplication is not consistent with the standard in conjunction with any accepted definition of addition and multiplication. – Yakov Galka Jan 12 '18 at 16:33
  • @ybungalobill: So you are reasoning that a “1” in floating-point is not a mathematical 1 (but rather is an interval) because that is necessary so that a “+” in floating-point is mathematical addition. But there are two problems with that reasoning. One, it does not give any reason for preferring that interpretation over having a “1” in floating-point representing a mathematical 1 while “+” in floating-point is not mathematical addition (but rather is addition rounded to a representable value). Two, your interpretation is inconsistent, because, if floating-point values `a` and `b` represent… – Eric Postpischil Jan 12 '18 at 18:08
  • … intervals, then the result that floating-point `a+b` gives is **not** generally the interval that equals the sum of the intervals represented by `a` and `b`. Floating-point `+`, `*`, or `/` will return a floating-point value `c`, and the interval purportedly represented by `c` will be determined by the distances to its nearest representable neighbors. That interval from the midpoint to one neighbor to the midpoint to another neighbor is generally not the interval that is the sum (or product or quotient) of the two intervals of `a` and `b`. So the arithmetic in your interpretation is wrong. – Eric Postpischil Jan 12 '18 at 18:09
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/163070/discussion-between-ybungalobill-and-eric-postpischil). – Yakov Galka Jan 12 '18 at 18:55

In C++ the int type is used to represent a mathematical integer (i.e., a member of the set {..., -1, 0, 1, ...}). Due to practical implementation limitations, the language defines the minimum range of values that the type must hold, and all valid values in that range must be represented without ambiguity on all known architectures.

The standard also defines types that are used to hold floating point numbers, each with their own range of valid values. What you won't find is the list of valid floating point numbers. Again, due to practical limitations the standard allows for approximations of these types.

Many people try to say that only numbers that can be represented by the IEEE floating point standard are exact values for those types, but that's not part of the standard. Though it is true that the implementation of the language on binary computers has a standard for how double and float are represented, there is nothing in the language that says it has to be implemented on a binary computer. In other words, float isn't defined by the IEEE standard; the IEEE standard is just an acceptable implementation.

As such, if there were an implementation that could hold any value in the range of values that define double and float without rounding rules or estimation, you could say that is_exact is true for that platform.

Strictly speaking, T can't be your only argument to tell whether a type "is_exact", but we can infer some of the other arguments. Because you're probably using a binary computer with standard hardware and any publicly available C++ compiler, when you assign a double the value of .1 (which is in the acceptable range for the floating point types), that's not the number the computer will use in calculations with that variable. It uses the closest approximation as defined by the IEEE standard. Granted, if you compare a literal with itself your compiler should return true, because the IEEE standard is pretty explicit. We know that computers don't have infinite precision and therefore calculations that we expect to have a value of .1 won't necessarily end up with the same approximate representation that the literal value has. Enter the dreaded epsilon comparison.

To practically answer your question, I would say that for any type which requires an epsilon comparison to test for approximate equality, is_exact should return false. If strict comparison is sufficient for that type, it should return true.

pelletjl
  • Of course strict binary comparison *can* be **sufficient** for an *"inexact"* type; it all depends on the context. In the same way, an `int` can also **require** epsilon comparison to test for *approximate equality*, which comes from the definition of approximate equality. If I want to compare whether two `int`s are approximately equal up to a tolerance of `3`, then, well, I have to use an epsilon comparison with an epsilon of `3`. The fact that you don't regard `3` small enough to bear the name *epsilon* doesn't change its nature. – Christian Rau Jan 11 '13 at 09:08
  • Just because is_exact returns false doesn't mean that there are no exact values for that type. If your application can limit the values it holds to the values that are exact or doesn't perform any calculations with those values then feel free to use strict (bitwise) comparison, but I would consider that an optimization for your application. As for `3`, I think you're conflating your application's epsilon with the type's epsilon – pelletjl Jan 11 '13 at 15:34
  • This is interesting. So unlike IEEE's well-defined standard, C++ defines `double` to encompass **all** real values in the allowed range. Even `0.1` is a valid double, by definition. And it's just my current hardware that *happens to* store that value inexactly. – Drew Dormann Jan 11 '13 at 17:04
  • @Drew: Actually, *no* floating-point format of finite size (such as `double`) can represent *all* real numbers in any range, for the set of all real numbers is infinitely large: even the set of all real numbers from 0 through 1 is infinitely large. That's because real numbers can include rational numbers with trillions of digits in the numerators and denominators, and irrational numbers such as pi, and the square root of two, that can only be approximated by rational numbers. – Peter O. Jan 11 '13 at 22:01
  • @PeterO. I don't think anyone disagrees with that. :-) – Drew Dormann Jan 12 '13 at 18:47

std::numeric_limits<T>::is_exact should be false if and only if T's definition allows values that may be unstorable.

C++ considers any floating point literal to be a valid value for its type. And implementations are allowed to decide which values have exact stored representation.

So for every real number in the allowed range (such as 2.0 or 0.2), C++ always promises that the number is a valid double and never promises that the value can be stored exactly.

This means that two assumptions made in the question - while true for the ubiquitous IEEE floating point standard - are incorrect for the C++ definition:

I'm certain that I could store a 2 in a double exactly.

I could then divide [it] by 10 and [the double would not] hold the mathematical result exactly.

Drew Dormann
  • Or, IBM's definition is viable if "representation" means "stored, not written" and "value" means "written, not stored". – Drew Dormann Jan 12 '13 at 20:21
  • Sounds like a good answer, if you could show me where *"C++ considers any floating point literal to be a valid value for its type"* (I'm sure this holds, but then again, I was also sure to know what `is_exact` means ;)) – Christian Rau Jan 16 '13 at 22:15
  • @ChristianRau I'll hunt down the standard blurb...It was statement by omission. The standard defined the types `float`, `double`, and `long double` (as "real"). It also defined the written notation. It then said [very little](http://stackoverflow.com/questions/1816552) about the storage. Since the types are defined and the storage is not, the types are not defined in terms of their storage. You could consider valid either all "real numbers" or what can be written. I took the conservative choice. – Drew Dormann Jan 17 '13 at 15:39