
I have a C# program that pulls some data and puts it into SQL Server:

public void UpdateElementInfo()
{
    var _db = new MyEntities();

    // all elements that have an id
    var mylist = _db.elements.Where(x => x.id != null);
    IList<ele> plist = mylist.ToList();

    foreach (ele p in plist)
    {
        // dtrr is a DataTable filled earlier; pick the newest row for this element
        var query = from myRow in dtrr.AsEnumerable()
                    where myRow.Field<string>("num") == p.id.ToString()
                    select myRow;
        var rrrr = query.OrderByDescending(x => x.Field<DateTime>("updated_at")).FirstOrDefault();

        if (rrrr != null)
        {
            p.my_interested_param = (double?)rrrr.Field<decimal?>("parameter") * 100;
        }
    }

    _db.SaveChanges();
}

So if I do Console.WriteLine() on this my_interested_param, it shows as expected, 4.12 for example. However, when I look in SQL Server, the number has actually been converted into 4.1200000000000003. Moreover, this conversion does not happen to all of the rows, which is very strange to me.
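To show what I mean, here is the same kind of conversion in isolation (the 0.0412m literal is just an assumed example value, not my real column data). The default formatting rounds the double to roughly 15 significant digits, while the "G17" round-trip format shows everything the double actually holds, which is what the FLOAT column ends up storing:

decimal parameter = 0.0412m;                  // assumed example value
double converted = (double)parameter * 100;   // same cast and scaling as in the loop above

Console.WriteLine(converted);                 // "4.12" - default formatting hides the trailing digits
Console.WriteLine(converted.ToString("G17")); // e.g. "4.1200000000000001" - the full stored value

So the console output can look clean while the underlying double (which maps to SQL Server's FLOAT) carries extra binary digits.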

I read that this might be due to the FLOAT data type chosen to store this number. However, what could we do if we have to use FLOAT to store it?
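If the only real problem is how the number looks when it is read back out of the FLOAT column, one thing I could do is round or format it at display time instead of trying to change what FLOAT stores. A sketch of what I mean (the format string and the number of digits are arbitrary choices):

double fromDb = 4.1200000000000003;            // value as it comes back from the FLOAT column
Console.WriteLine(fromDb.ToString("0.####"));  // "4.12" - format only for display
Console.WriteLine(Math.Round(fromDb, 2));      // 4.12  - or round after reading

That only changes the presentation, though; the FLOAT column itself would still hold the binary approximation.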

