There are two methods I've read about. The first is to divide by 10 repeatedly (or, slightly more efficiently, divide in a way that minimizes the number of division operations), i.e.
int getFirstDigit(unsigned int num) {
    // Knock the value down in large steps so at most three divisions run.
    if (num >= 100000000)
        num /= 100000000;   // 32-bit unsigned: result is now at most 42
    if (num >= 10000)
        num /= 10000;
    if (num >= 100)
        num /= 100;
    if (num >= 10)
        num /= 10;
    return num;
}
Credit: How to retrieve the first decimal digit of number efficiently
With a 32-bit unsigned int, this divides at most three times, minimizing division operations, which are, as far as I know, the most expensive common arithmetic instruction. My question is: how does that compare to the following string-based version?
#include <string>

int getFirstDigit(unsigned int num) {
    // First character of the decimal representation, converted back to an int.
    return std::to_string(num)[0] - '0';
}
It converts the number to a string, takes the first character, and subtracts '0' (ASCII 48) to get the digit that character represents. I do know strings come with a tremendous amount of overhead, and I did a bit of looking around, but I couldn't find a concrete answer comparing the two.
Afterthought: I saw a log-based suggestion on the page I credited earlier, and I suppose I could take the base-10 logarithm of the number, floor the result to find the largest power of 10 that bounds it from below, and then divide the number by 10 raised to that power. That reduces the work to one log operation, one power operation, and one division, if that even counts as a reduction. But again, I have no idea about the relative run times.
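Something like this is what I have in mind (just a sketch: I'm assuming num is nonzero, since log10(0) is undefined, and I haven't thought through floating-point rounding near exact powers of 10):

#include <cmath>

int getFirstDigitLog(unsigned int num) {
    // floor(log10(num)) is the number of digits minus one.
    int exponent = static_cast<int>(std::floor(std::log10(static_cast<double>(num))));
    // Divide by 10^exponent so only the leading digit remains.
    return static_cast<int>(num / static_cast<unsigned int>(std::pow(10.0, exponent)));
}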
Sorry in advance: I did see a lot of similar questions, but none that I felt answered my specific question about relative run times at the assembly level. My gut tells me the division approach is far better than the string one, but I can't feel certain, because a string is just contiguous ASCII values in memory.
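For what it's worth, this is roughly how I'd try to time the two versions myself (a crude std::chrono loop with an arbitrary iteration count, so take it as a sketch rather than a proper benchmark), though what I'm really after is an explanation at the assembly level:

#include <chrono>
#include <cstdio>
#include <string>

// Division version from above.
static int firstDigitDiv(unsigned int num) {
    if (num >= 100000000) num /= 100000000;
    if (num >= 10000)     num /= 10000;
    if (num >= 100)       num /= 100;
    if (num >= 10)        num /= 10;
    return num;
}

// String version from above.
static int firstDigitStr(unsigned int num) {
    return std::to_string(num)[0] - '0';
}

int main() {
    const unsigned int N = 10000000;   // arbitrary number of iterations
    volatile unsigned int sink = 0;    // keeps the compiler from optimizing the loops away

    auto t0 = std::chrono::steady_clock::now();
    for (unsigned int i = 1; i <= N; ++i) sink = sink + firstDigitDiv(i);
    auto t1 = std::chrono::steady_clock::now();
    for (unsigned int i = 1; i <= N; ++i) sink = sink + firstDigitStr(i);
    auto t2 = std::chrono::steady_clock::now();

    auto ms = [](auto a, auto b) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
    };
    std::printf("division: %lld ms\nstring:   %lld ms\n",
                static_cast<long long>(ms(t0, t1)),
                static_cast<long long>(ms(t1, t2)));
    return 0;
}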