
I am dividing extremely large integers (up to roughly 1 KB, i.e. hundreds of digits each) and I have already run into two problems. Either

OverflowError: integer division result too large for a float

or the float is rounded to a limited number of digits, so when I try to multiply back I get a slightly different number.

Is there any way in Python to prevent division from producing rounded floats, or at least to catch results that have more than 20 digits after the decimal point?

smallest_floats = []

n1 = int(input())
n2 = int(input())

while n2 != 1:
    smallest_floats.append(str(n1 / n2))
    n2 -= 1
print(min(smallest_floats, key=len))

I am thinking that a possible solution is to somehow assert before each division, or to check something like:

len(s.split(".")[-1]) > 20
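For illustration, here is a minimal sketch that reproduces the first error, with a hypothetical 400-digit value standing in for the input: Python 3's true division always produces a float, and the quotient of two huge integers can exceed the float range (about 1.8e308).

```python
# Minimal reproduction (hypothetical values): true division must yield
# a float, which tops out around 1.8e308.
n1 = 10 ** 400  # stands in for a huge input integer
n2 = 3

try:
    n1 / n2
except OverflowError as e:
    print(e)  # integer division result too large for a float
```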
Got To Figure
2 Answers


For rational-number arithmetic without precision loss you can use the fractions.Fraction class from the fractions module in the standard library. You can divide by another rational number and then multiply by it again to get back exactly the rational number you started with.

>>> from fractions import Fraction
>>> n1 = Fraction(large_numerator, denominator)
>>> n2 = n1 / some_rational_number
>>> assert n1 == n2 * some_rational_number
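Applied to the question's situation, a runnable sketch might look like this (the hypothetical huge value stands in for the int(input()) calls in the question):

```python
from fractions import Fraction

# Hypothetical stand-ins for the two int(input()) values in the question.
n1 = 10 ** 400 + 7
n2 = 12345

q = Fraction(n1, n2)  # exact rational quotient; no float is ever created
assert q * n2 == n1   # multiplying back recovers the original exactly
```

Because Fraction stores a numerator/denominator pair of Python ints, it never overflows and never rounds, unlike float division.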
a_guest

Import the decimal module (https://docs.python.org/2/library/decimal.html); it has arbitrary precision.

You can increase the number of decimal digits carried in results with:

>>> from decimal import *
>>> getcontext().prec = 100
>>> Decimal(2).sqrt()
Decimal('1.414213562373095048801688724209698078569671875376948073176679737990732478462107038850387534327641573')

(100 decimal digits)
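As a caveat, Decimal carries a finite (if configurable) number of significant digits per operation, so a divide-then-multiply round trip is not always exact. A minimal sketch of where it falls short:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50       # carry 50 significant digits per operation
q = Decimal(1) / Decimal(7)  # rounded to 50 digits, not an exact third... er, seventh
assert q * 7 != Decimal(1)   # the round trip does not recover 1 exactly
```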

how can i show an irrational number to 100 decimal places in python?

ralf htp
    Not the best tool if the original integer has to be reproduced exactly (although it probably works in most cases). Try the following: `d1 = Decimal(1.0); d2 = d1 / Decimal(3.0); assert d1 == d2 * Decimal(3.0)`. This will raise an `AssertionError` because `d2 * Decimal(3.0)` evaluates to `Decimal('0.9999999999999999999999999999')`. The point here is that `Decimal` provides (almost) arbitrary precision - but not _infinite_ precision. – a_guest Jan 15 '17 at 23:30
  • @a_guest So Decimal is Not infinitely precise while Fraction is correct? Meaning Using Fraction module I can get the original back? – Got To Figure Jan 15 '17 at 23:42
  • @Adminy Yes `Fraction` represents a rational number as numerator and denominator and can thus represent all rational numbers exactly (as long as their nums and denoms fit in memory of course). With the `decimal` module you can select an _arbitrary_ precision which (of course) cannot be _infinite_ (you cannot store an infinite number of decimal places for example). Consider the example `1/3`: `Fraction(1, 3)` is an exact representation while `Decimal(1.0) / Decimal(3.0)` will cut the decimal places after the precision which you've specified. – a_guest Jan 15 '17 at 23:47
  • I found this answer useful none the less ^^ It helps me too. – Got To Figure Jan 15 '17 at 23:54