
Possible Duplicate:
What data-structure should I use to create my own “BigInteger” class?

Out of pure interest, I am trying to design a type that can hold an arbitrarily large integer. I want to support the four basic operations [+, -, *, /] and optimise for the speed of those operations.

I was thinking about some sort of doubly-linked list and a sign flag to indicate a positive or negative value. But I am not really sure how to add, for example, two large numbers of different sizes. Should I walk to the last element of both numbers and then work back (using the second, reverse pointer to the previous element)?

 123456789     //one large number
+      123     //another large number with different size
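Roughly what I have in mind, sketched with plain Python lists instead of a linked list (digits stored most-significant first, as written above; `add_digits` is just an illustrative name):

```python
def add_digits(a, b):
    """Add two digit lists (most-significant digit first) of possibly
    different lengths, walking from the least-significant end back."""
    result, carry = [], 0
    for i in range(1, max(len(a), len(b)) + 1):
        s = carry
        if i <= len(a):
            s += a[-i]          # i-th digit from the right of a
        if i <= len(b):
            s += b[-i]          # i-th digit from the right of b
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return result[::-1]         # back to most-significant first
```

For the example above, `add_digits([1,2,3,4,5,6,7,8,9], [1,2,3])` yields the digits of 123456912.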

Provided I can have arbitrarily large memory, what is the best data structure for this task?

I would appreciate a small hint and any comments on worst-case complexity of the arithmetic operations. Thanks!

oleksii
  • You mean like a BigInteger or BigDecimal in Java? – duffymo Jul 07 '12 at 22:22
  • if you're working in c (or c++, or just curious) then the book c interfaces and implementations develops a library like this - http://www.amazon.com/Interfaces-Implementations-Techniques-Creating-Reusable/dp/0201498413 – andrew cooke Jul 07 '12 at 22:26
  • http://www.google.de/search?q=java+bigint . First link says: "Immutable arbitrary-precision integers". – krlmlr Jul 07 '12 at 22:27
  • 1
    What I can't understand is why someone would use a linked-list for this – Alexander Jul 07 '12 at 22:35
  • To clarify - do you mean **arbitrarily large** numbers (numbers with no upper bound), or **infinitely large** numbers (as in set theory)? – templatetypedef Jul 07 '12 at 23:00
  • @oleksii, no, it's not. If you think about it it is not a good reason – Alexander Jul 08 '12 at 08:34

1 Answer


Usually one would go for an array/vector in this case, perhaps little-endian (least-significant word first). If you implement in-place operations and grow the array by a constant factor, the amortized cost of reallocation remains O(1).
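A minimal sketch of that representation in Python, using base-10^9 "limbs" stored least-significant first (the names `BASE`, `to_limbs`, and `add` are mine, not standard):

```python
BASE = 10**9  # each array element ("limb") holds 9 decimal digits

def to_limbs(n):
    """Split a non-negative int into little-endian base-10**9 limbs."""
    limbs = []
    while n:
        limbs.append(n % BASE)
        n //= BASE
    return limbs or [0]

def add(a, b):
    """Add two limb arrays in O(n). Python's list grows its backing
    array geometrically, so each append is amortized O(1)."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry
        if i < len(a):
            s += a[i]
        if i < len(b):
            s += b[i]
        result.append(s % BASE)
        carry = s // BASE
    if carry:
        result.append(carry)
    return result
```

Little-endian storage makes different-length operands trivial: limb i of one number always lines up with limb i of the other, with no right-alignment step.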

All operations should be doable in O(n) run time, where n is the size of the input. EDIT: No, of course not: multiplication and division will need more; this answer says multiplication takes at least O(N log N).
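For comparison, the naive schoolbook algorithm on the same little-endian limb arrays takes O(n·m) limb operations; asymptotically faster methods (Karatsuba, FFT-based multiplication) are what approach that lower bound. A hedged sketch (again with illustrative names, not a tuned implementation):

```python
BASE = 10**9  # same little-endian base-10**9 limbs as above

def mul(a, b):
    """Schoolbook multiplication of two limb arrays: O(len(a)*len(b))."""
    result = [0] * (len(a) + len(b))
    for i, x in enumerate(a):
        carry = 0
        for j, y in enumerate(b):
            cur = result[i + j] + x * y + carry
            result[i + j] = cur % BASE
            carry = cur // BASE
        result[i + len(b)] += carry
    while len(result) > 1 and result[-1] == 0:
        result.pop()  # trim leading zero limbs, keep at least one
    return result
```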

Just out of curiosity: why are you reinventing the wheel? Find here an implementation in Java (BigInteger). C# has one with .NET 4.0, too. While it might be a good exercise to implement this yourself (I remember doing it once myself), if you just need the functionality, it's already there in many computing environments.

krlmlr