Specifically: I have two unsigned integers (a, b) and I want to calculate (a*b) % UINT_MAX (where UINT_MAX is the maximal value of an unsigned int). What is the best way to do this?
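Just to pin down exactly what result I mean (not necessarily how I want to compute it inside the kernel), here is a userspace-style sketch that widens to 64 bits; the function name is mine and it assumes unsigned long long is wider than unsigned int (true on the usual platforms where unsigned int is 32 bits):

#include <limits.h>

/* Illustration only: defines the result (a*b) % UINT_MAX by widening
 * to 64 bits so the product cannot overflow. Assumes unsigned long long
 * is wider than unsigned int. */
unsigned int mul_mod_uintmax(unsigned int a, unsigned int b)
{
    return (unsigned int)(((unsigned long long)a * b) % UINT_MAX);
}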
Background: I need to write a Linux module that emulates a geometric sequence; each read returns the next element (modulo UINT_MAX). The only solution I found is to perform the multiplication by repeated addition, using the following logic (which I already use for the arithmetic sequence):
for (int i = 0; i < b; ++i) {
    if (UINT_MAX - current_value > difference) {
        /* adding difference stays below UINT_MAX, so just add */
        current_value += difference;
    } else {
        /* wrap around: (current_value + difference) mod UINT_MAX */
        current_value = difference - (UINT_MAX - current_value);
    }
}
where current_value = a before the first iteration (and is updated in every iteration), and difference = a (always). Obviously this is not an intelligent solution. How would an intelligent person achieve this?
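For reference, this is roughly my current approach written out as a self-contained function (the function name and initialization are mine, just for illustration); it computes (a*b) % UINT_MAX by adding a to an accumulator b times, with the same wrap-around handling as above:

#include <limits.h>

/* Repeated-addition multiplication modulo UINT_MAX.
 * Invariant: current_value and difference are always < UINT_MAX,
 * so neither branch actually overflows unsigned int. */
unsigned int mul_mod_by_addition(unsigned int a, unsigned int b)
{
    unsigned int current_value = 0;
    unsigned int difference = a % UINT_MAX;  /* value added each step */

    for (unsigned int i = 0; i < b; ++i) {
        if (UINT_MAX - current_value > difference) {
            current_value += difference;                          /* no wrap */
        } else {
            current_value = difference - (UINT_MAX - current_value); /* wrap */
        }
    }
    return current_value;
}

This clearly takes O(b) additions per multiplication, which is exactly what I would like to avoid.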
Thanks!