Why can't arbitrary base conversion be as fast as converting from base $b$ to base $b^k$?
There seems to be a big time-complexity difference! I am also interested in reading material about it.
Old: the original, more detailed question follows.
Conversion between power-of-2 radices can be done faster than conversion between non-power-of-2 radices; it can even be done in parallel, since every digit (or group of digits) can be decoded independently of the rest.
For example, the binary number `00101001` can be converted to hexadecimal `0x29` nibble by nibble (`0010` → `2` and `1001` → `9`), and vice versa: every hex digit can be parsed into 4 bits independently of the rest. But doing that conversion from decimal (or any other non-power-of-2 radix) to binary is not so easy, because the digits affect each other.
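To illustrate what I mean, here is a rough sketch in Python (the function names are mine, not from any library): the binary-to-hex direction touches each nibble in isolation, while decimal-to-binary has to process the whole number at once.

```python
# Sketch: locality of power-of-2 conversion vs. the sequential
# dependence of decimal-to-binary conversion.

def bin_to_hex(bits: str) -> str:
    """Convert a binary string to hex, one nibble at a time."""
    assert len(bits) % 4 == 0
    # Each nibble maps to one hex digit independently of all the
    # others -- this loop could run fully in parallel.
    return ''.join('0123456789abcdef'[int(bits[i:i + 4], 2)]
                   for i in range(0, len(bits), 4))

def dec_to_bin(digits: str) -> str:
    """Convert a decimal string to binary via repeated division by 2."""
    n = 0
    for d in digits:       # Horner evaluation: every digit affects n
        n = n * 10 + int(d)
    out = []
    while n:               # each output bit depends on ALL input digits
        out.append(str(n % 2))
        n //= 2
    return ''.join(reversed(out)) or '0'

print(bin_to_hex('00101001'))  # -> '29'
print(dec_to_bin('41'))        # -> '101001'
```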
I've seen the time complexity of mathematical operations on Wikipedia, and there is also a related question on Stack Overflow saying that the time complexity of base conversion for an arbitrary $n$-digit number is $\mathcal{O}(M(n) \log n)$, where $M(n)$ is the cost of multiplying two $n$-digit numbers.
I'm not interested in general time-complexity bounds for arbitrary base conversion; rather, I would like to know more about this big gap in time complexity between power-of-2 conversions and conversions between other bases.
I think there is a more general fact here: conversion can be done faster whenever one base is a power of the other, not only for powers of 2; the same should apply to converting from base 10 to base 100 (see the sketch below).
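Here is a hedged sketch of what I believe the general trick is (again, the function name is my own invention): converting from base $b$ to base $b^k$ just groups $k$ base-$b$ digits into one base-$b^k$ digit, and each group is handled independently.

```python
# Sketch: base b -> base b**k by grouping k digits at a time.

def group_digits(digits: list[int], b: int, k: int) -> list[int]:
    """Convert a base-b digit list (most significant first) to base b**k."""
    pad = (-len(digits)) % k
    digits = [0] * pad + digits   # left-pad to a multiple of k
    out = []
    for i in range(0, len(digits), k):
        val = 0
        for d in digits[i:i + k]:  # combine k digits; only this group is read
            val = val * b + d
        out.append(val)
    return out

# Base 10 -> base 100: pairs of decimal digits become one base-100 digit.
print(group_digits([1, 2, 3, 4, 5], 10, 2))  # 12345 -> [1, 23, 45]
```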
Is there any known proof of this, or material about it?