
I am trying to understand data types more. Here are a few examples which are still puzzling me.

Decimal(p,s): p is the total number of digits, s is the number of digits after the decimal point.

So given below example

declare @st decimal(38,38)
set @st=1.22121065111115211641641651616511616

I am getting the below error:

Arithmetic overflow error converting numeric to data type numeric

The number of digits after the point is 35, and the total length including the . is 37, so shouldn't my decimal(38,38) work?

So my question is: why am I getting the above error?

Here is my research so far:

How do I interpret precision and scale of a number in a database?

Decreasing the scale, as below, works:

declare @st decimal(38,37)
set @st=1.22121065111115211641641651616511616

The answer by boumbh points out an out-of-range error, but I am not getting an error for the same example.

TheGameiswar

2 Answers


@TheGameisWar

In the above example you specified decimal(38,38) for the value 1.22121065111115211641641651616511616. SQL Server checks whether there are any digits to the left of the point: (p - s) gives the maximum number of digits allowed to the left. In your case that is (38 - 38) = 0.

Hence the value you gave does not satisfy the declared precision and scale. If the only thing to the left of the point is a 0, it works, e.g. 0.22121065111115211641641651616511616.
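A minimal sketch of this, reusing the value from the question with a zero integer part:

```sql
-- decimal(38,38): p - s = 38 - 38 = 0 digits allowed before the point,
-- so only values whose integer part is 0 will fit.
DECLARE @st decimal(38,38);
SET @st = 0.22121065111115211641641651616511616;  -- no overflow
SELECT @st;
```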

And for the decimal(38,37) case, we get (38 - 37) = 1, so one digit is accepted to the left of the point; hence the value is accepted and the statement executes.
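For comparison, a sketch of the (38,37) case from the question:

```sql
-- decimal(38,37): p - s = 1, so a single digit may appear before the point.
DECLARE @ok decimal(38,37);
SET @ok = 1.22121065111115211641641651616511616;  -- fits: one digit before the point
SELECT @ok;
```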

Precision and scale together determine the number of digits allowed to the left of the point. :)


The answer is in MSDN.

decimal[ (p[ , s] )]

p (precision) The maximum total number of decimal digits that will be stored, both to the left and to the right of the decimal point. The precision must be a value from 1 through the maximum precision of 38. The default precision is 18.

s (scale) The number of decimal digits that will be stored to the right of the decimal point. **This number is subtracted from p to determine the maximum number of digits to the left of the decimal point.** Scale must be a value from 0 through p. Scale can be specified only if precision is specified. The default scale is 0; therefore, 0 <= s <= p.

I highlighted the relevant phrase.

When you specify the same value (38) for p and s you don't leave room for any digits before the decimal point.
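The same rule is easier to see with a small precision; a minimal sketch with hypothetical values:

```sql
-- decimal(5,2): p - s = 3 digits before the point, 2 after.
DECLARE @d decimal(5,2);
SET @d = 999.99;     -- largest value that fits
SELECT @d;
-- SET @d = 1000.00; -- would raise the same arithmetic overflow error
```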

Vladimir Baranov
  • In my case p is 1 and the scale can store 37, but the length after the point is only 35 – TheGameiswar Mar 11 '16 at 06:20
  • In your case `decimal(38,38)` p is 38. The first phrase from the quoted doc: "p (precision) The maximum total number of decimal digits that will be stored, **both to the left and to the right of the decimal point**." – Vladimir Baranov Mar 11 '16 at 06:24