Affects Version/s: V4.0_OS
Fix Version/s: V4.0_ERRATA03
Component/s: CSDL XML
Clarify that CSDL Precision for Edm.Decimal is the number of significant decimal digits.
Allow Precision=Scale and don't count the "0." into the Precision in this special case.
A quick recap of relevant text from the spec:
4.4 Primitive Types ... Edm.Decimal ... Numeric values with fixed precision and scale
6.2.3 Attribute Precision ... For a decimal property the value of this attribute specifies the maximum number of digits allowed in the property’s value; it MUST be a positive integer. If no value is specified, the decimal property has unspecified precision.
... Note: service designers SHOULD be aware that some clients are unable to support a precision greater than 29 for decimal properties and 7 for temporal properties. Client developers MUST be aware of the potential for data loss when round-tripping values of greater precision. Updating via PATCH and exclusively specifying modified properties will reduce the risk for unintended data loss.
PROBLEM: By section 4.4, Edm.Decimal values have "fixed" precision. But section 6.2.3 later allows the precision to be "unspecified", in which case it is not "fixed". We might interpret this as: the precision can be "fixed" in a CSDL Property facet, or it may be "fixed" at runtime, i.e. in a sent or received value (according to the number of significant digits in that value). Or it may simply indicate that the wording in section 4.4 is inappropriate.
The section 6.2.3 text suggests that precision greater than 29 could result in "data loss during round-tripping".
For temporal properties that is reasonable: it suggests that precision greater than 7 in the fractional seconds may result in truncation or rounding (losing some significant digits of the fraction, but not preventing the client from receiving and storing the value).
For decimal values, however, exceeding the supported precision can prevent the client from representing the value at all, let alone retaining all significant digits.
Try this out (C#):
decimal x = decimal.Parse("123456789012345678901234567890"); // 30 digits: throws System.OverflowException
PROBLEM: You don't get a loss of significant digits - the value simply cannot be represented, and the parse throws an OverflowException. Furthermore, even a precision of 29 is too much for the C# decimal type: consider "99999999999999999999999999999" (29 nine digits). That type has a binary 96-bit mantissa, so the maximum representable value lies between 28 and 29 decimal digits (about 7.9*10^28).
One might expect that when the CSDL spec talks about Precision, it is defining the type's Value Space (see http://www.w3.org/TR/xmlschema11-2/#value-space), not its Lexical Space (since the ATOM/JSON/ABNF documents cover lexical representation).
The significant figures of a number are those digits that carry meaning contributing to its precision. This includes all digits except:
• All leading zeros;
• Trailing zeros when they are merely placeholders to indicate the scale of the number (the exact rules for identifying significant figures distinguish these cases); and
• Spurious digits introduced, for example, by calculations carried out to greater precision than that of the original data, or measurements reported to a greater precision than the equipment supports.
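The leading-zeros rule can be illustrated with java.math.BigDecimal, whose precision() method counts exactly the digits of the unscaled value (a sketch for illustration, not part of the spec):

```java
import java.math.BigDecimal;

public class SignificantDigits {
    public static void main(String[] args) {
        // Leading zeros carry no significance: precision() counts
        // only the digits of the unscaled value (here 120).
        System.out.println(new BigDecimal("0.00120").precision()); // 3
        // Declared trailing zeros DO count, because BigDecimal
        // preserves them via the stated scale; stripping them
        // reduces the precision to the truly significant digits.
        System.out.println(new BigDecimal("0.00120").stripTrailingZeros().precision()); // 2
    }
}
```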
From Computational Mathematics (by T.R.F. Nonweiler): http://www.jstor.org/discover/10.2307/2008016?sid=21105465034221&uid=2&uid=4&uid=3738776
• The maximum number of digits available to the mantissa is called the precision, or number of significant digits.
So you will see that both references equate precision with significant digits. (Not to be confused with the notion of precision as it applies to positions after the decimal point, which for Edm.Decimal is the Scale).
Now CSDL 6.2.3 allows for "unspecified precision". We might then reasonably assume that Edm.Decimal can thus accommodate values with arbitrary precision, i.e. an arbitrary number of significant digits. For example, as with java.math.BigDecimal.
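As a quick sketch of that behavior (assuming nothing beyond the standard java.math.BigDecimal API): the 30-digit value that overflows the C# decimal type is handled without loss.

```java
import java.math.BigDecimal;

public class UnspecifiedPrecision {
    public static void main(String[] args) {
        // The 30-digit value from the C# example above parses
        // without loss: BigDecimal has arbitrary precision.
        BigDecimal x = new BigDecimal("123456789012345678901234567890");
        System.out.println(x.precision()); // 30
        System.out.println(x.scale());     // 0
    }
}
```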
PROBLEM: Folks seem happy to accept that IEEE decimal floating point (64-bit and 128-bit) is compatible with CSDL Precision greater than 16 (64-bit) or 34 (128-bit). That indicates that we are not in agreement that CSDL Precision (as it relates to Edm.Decimal) is to do with significant digits. (The alternative is to allow that DECFLOAT in Edm.Decimal can have negative scale, but that is prohibited by the CSDL spec in section 6.2.4).
Decimal data types in programming languages and databases
Common terminology is
Value = Mantissa * 10**(-Scale)
Precision = length of Mantissa
A positive Scale means fractional digits; a negative Scale avoids representing "trailing zeroes" of large integers, e.g. one million has mantissa = 1 and scale = -6.
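This terminology maps directly onto java.math.BigDecimal, where unscaledValue() is the mantissa and scale() is the (possibly negative) scale; a small sketch of the one-million example:

```java
import java.math.BigDecimal;

public class MantissaScale {
    public static void main(String[] args) {
        // one million: mantissa (unscaled value) 1, scale -6,
        // i.e. Value = 1 * 10**(-(-6))
        BigDecimal million = new BigDecimal("1E+6");
        System.out.println(million.unscaledValue()); // 1
        System.out.println(million.scale());         // -6
        System.out.println(million.precision());     // 1
    }
}
```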
Type                        | Precision (= length of mantissa)   | Scale (= -Exponent)
C# decimal                  | 96 bit ~ 28-29 decimal digits      | per instance, 0…28
Objective-C NSDecimalNumber | 38 decimal digits                  | per instance, -128…127
Java BigDecimal             | unlimited number of decimal digits | per instance, -2,147,483,648…2,147,483,647 (32 bit)
DECFLOAT34                  | 34 decimal digits                  | per instance, -6144…6143
DECFLOAT16                  | 16 decimal digits                  | per instance, -384…383
DB2 DECIMAL                 | 1…31 decimal digits                | per column, 0…Precision
Sybase ASE DECIMAL          | 1…38 decimal digits                | per column, 0…Precision
Sybase IQ DECIMAL           | 1…126 decimal digits               | per column, 0…Precision
Postgres DECIMAL            | 1…1000 decimal digits              | per column, 0…Precision
Oracle NUMBER               | 1…38 decimal digits                | per column, -84…127
Problems when mapping these to Edm.Decimal
- missing exponential notation: the "floating-point decimals" can be declared with Scale="variable", but
  - with Precision = internal precision, not all internal values can be represented in OData
  - with Precision = unspecified, all internal values can be represented (using lots of zeroes), but not all OData values can be stored
  - the OData representation doesn't allow a leading decimal point, so the numeric Scale has to be lower than the Precision
- DECIMAL/NUMBER columns with 0 <= scale < precision fit perfectly
- scale = precision runs into problems:
  - with Precision = internal precision, not all DECIMAL values can be represented in OData
  - with Precision = internal precision plus 1, not all OData values can be stored as DECIMAL values
- Scale cannot be negative or larger than Precision minus 1:
  - NUMBER columns with a negative scale, or with a scale larger than precision minus 1, run into the same problems as the "floating-point decimals"
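The "using lots of zeroes" point can be made concrete with java.math.BigDecimal (used here merely as a stand-in for a floating-point decimal type):

```java
import java.math.BigDecimal;

public class PlainNotation {
    public static void main(String[] args) {
        // A floating-point-decimal-style value with only 1 significant
        // digit but a large exponent: written without exponential
        // notation it needs 301 digit positions.
        BigDecimal big = new BigDecimal("1E+300");
        System.out.println(big.precision());              // 1
        System.out.println(big.toPlainString().length()); // 301
    }
}
```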
PROPOSAL:
- define Precision to be the number of significant digits
- allow exponential notation
  - floating-point decimals can be minimally represented
  - Precision can be used to exactly express the number of significant digits
  - an annotation (or, in future protocol versions, a new facet) can indicate the presence of exponents and their value range
- allow Precision = Scale
  - DECIMAL precision and scale can be exactly expressed in all cases
  - the number representation could be relaxed to allow .123, or we don't count the "0." towards the Precision in this special case
- allow negative Scale
  - NUMBER precision and scale can be exactly expressed in all cases
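For what it's worth, java.math.BigDecimal already behaves as proposed in the last two points: it does not count the leading "0." towards precision, and it permits negative scale. A small sketch:

```java
import java.math.BigDecimal;

public class EdgeScales {
    public static void main(String[] args) {
        // Precision = Scale, as in a DECIMAL(5,5) column: the leading
        // "0." is not counted, precision() reports 5 significant digits.
        BigDecimal fraction = new BigDecimal("0.12345");
        System.out.println(fraction.precision() + " " + fraction.scale()); // 5 5

        // Negative scale, as in an Oracle NUMBER(3,-2) column:
        // 12300 is stored as mantissa 123 with scale -2.
        BigDecimal hundreds = new BigDecimal("123E+2");
        System.out.println(hundreds.precision() + " " + hundreds.scale()); // 3 -2
    }
}
```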