I think this question may be a bit dumb..
Since we store bits to represent a number, and since we have a whole RAM available, we should be able to have an arbitrarily sized (actually, custom-sized) number that takes up to the whole RAM (or a specified part of it).
Correct?
So if we were to do for example:
mov ax, 0xFFFF
add ax, 1
jc custom_function ; if I remember correctly jc checks the carry flag
And custom_function is a label/function that has an algorithm to set the next bits to create an infinite-sized number.
Questions
Would that be possible? If not infinite-sized numbers, then custom-sized ones (longer or shorter than the defaults)?
I suspect that, if it IS possible, it will have bad performance (even if the number is shorter). What do you think?
Do you think it would be good to have something like this? (Maybe that would be more effective on older systems or embedded systems where RAM is limited?)
CodePudding user response:
- Would that be possible? If not infinite-sized numbers, then custom-sized ones (longer or shorter than the defaults)?
Yes, of course. You can define any representation and data type you want, only limited by the environment.
However, since the size of your memory is finite, the range of values is also finite, even if gigabytes feel "kind of" infinite. Just think about computing the constant PI to ever more digits: at some point your RAM is full.
The standard term is arbitrary-precision arithmetic, or bignum / bigint. One way to handle it is a dynamic array of words (like a C++ std::vector), with an upper limit imposed by the practical limits of the system, but only using as much storage as the number actually needs.
- I suspect that, if it IS possible, it will have bad performance (even if the number is shorter). What do you think?
It depends. If you implement a clever algorithm on an 8-bit machine, the performance with 6-bit values of such a special type is probably better than the performance with long.
But for sure, at the latest when the special type has more bits than the longest supported "standard" type, the performance will degrade. If you store your bignums as dynamic arrays, you always have an extra level of indirection which adds overhead even for small numbers.
Small-number optimizations are possible, like a flag bit to treat the bits as a value, not a pointer. (Python implementations do something like this; Python 3 integers are always arbitrary precision, so the normal case is small integers that fit in one "chunk" of bits.) This still takes more code to check the tag, compared to a single add instruction.
- Do you think it would be good to have something like this? (Maybe that would be more effective on older systems or embedded systems where RAM is limited?)
There are already libraries providing such types, for example GMP, the GNU Multiprecision Library, which has hand-written asm for most architectures (including 32 and 64-bit x86).
In "the olde days" such an algorithm was standard even on PCs. For example, I started my career around 1980 with the Intel 8080 and the Z80 processors, where you commonly used 8-bit arithmetic. They had 16-bit addition and subtraction instructions, but using them involved a lot of moving values into and out of registers.
The instruction sets include addition instructions with and without taking a carry into account, and the same for subtraction. With these instructions, even those ancient processors can handle arbitrarily wide values.
Oh, 8-bit machines are still in use today; look up Arduino with AVR, for example. But those serve a special hobby niche. There are probably more 16-bit (and smaller) processors in use in the world than PCs, commonly hidden as "embedded" controllers.