What do these two bit values mean in this register transfer notation instruction: ACC <-- [[CIR(15:0)]]?


The book states:

Section 6.01 introduced an extension to register transfer notation. We can use this to describe the execution of an instruction. For example, the LDD instruction is described by:

ACC ← [[CIR(15:0)]]

The instruction is in the CIR and only the 16-bit address needs to be examined to identify the location of the data in memory. The contents of that location are transferred into the accumulator.
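In code terms, I read the book's description as something like this Python sketch (the names memory, cir and acc are just mine, not from the book):

memory = [0] * 65536   # main memory, one word per address
cir = 0                # Current Instruction Register, holds the fetched instruction
acc = 0                # Accumulator

def execute_ldd():
    """ACC <-- [[CIR(15:0)]] : load ACC from the address held in the instruction."""
    global acc
    address = cir & 0xFFFF    # CIR(15:0): keep only the low 16 bits, the address field
    acc = memory[address]     # the contents of that location go into the accumulator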

For example, I thought that if it were the first 16 bits we would read, it would be (0:15), while if it were e.g. bits 16 to 31, it could be (16:31) or (16:16) (i.e. start at bit position 16 and read 16 bits).

I'm confused by the order 15:0, though. Does anyone know in particular what each of these numbers is referring to?

Thanks

CodePudding user response:

In the old days it was common to number bits from the top (MSB) down to the bottom (LSB), so bit 0 was the first bit in an instruction, or the sign bit in signed data.

From here: https://80character.wordpress.com/2018/12/10/pdp-8-instruction-set/

 0  1  2  3  4  5  6  7  8  9 10 11
+-------+---+---+------------------+
| Op    | I | Z |   Offset         |
+-------+---+---+------------------+

This is, for example, a description of some of the PDP-8 instruction fields.

HP's PA-RISC, as well as many other instruction sets, described bits in this direction, with the MSB at bit 0 and the LSB at bit (word size minus one).
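As a sketch of what that convention means in practice (my own Python, not from either manual): with MSB-0 numbering, a field at positions hi..lo of a 12-bit PDP-8 word can be extracted like this:

WORD_BITS = 12

def extract_msb0(word, hi, lo):
    # Field occupying MSB-0 positions hi..lo (hi is closer to the MSB, so hi <= lo).
    width = lo - hi + 1
    shift = WORD_BITS - 1 - lo          # distance of the field's rightmost bit from the LSB
    return (word >> shift) & ((1 << width) - 1)

word = 0o1234                            # an arbitrary 12-bit value, just for illustration
op     = extract_msb0(word, 0, 2)        # bits 0-2  : Op
i_bit  = extract_msb0(word, 3, 3)        # bit 3     : I
z_bit  = extract_msb0(word, 4, 4)        # bit 4     : Z
offset = extract_msb0(word, 5, 11)       # bits 5-11 : Offset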

However, it is not only possible but also very reasonable to number the bits the other way.

In most cases the numbering does not affect anything but human-readable text and diagrams like the one above. However, some instruction sets have bit-test or field-extract instructions that take bit numbers as operands, and there the choice of numbering does matter.

Bit numbering with the LSB at 0 makes sense, logically.  This way the LSB is always 0 no matter the size of the data, be it byte, word, or longer.  Mathematically, in data, the LSB represents 2^0, so naming the LSB as bit 0 also makes sense that way.
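A quick Python illustration of that point (the function name is just for the example):

def bit(value, i):
    # Bit i of value, counting the LSB as bit 0.
    return (value >> i) & 1

assert bit(0x80, 7) == 1        # works for a byte...
assert bit(0x8000, 15) == 1     # ...and for a 16-bit word, no width parameter needed
assert 0b1010 == sum(bit(0b1010, i) * 2**i for i in range(4))   # bit i has weight 2^i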

I'm confused by the order 15:0, though. Does anyone know in particular what each of these numbers is referring to?

This notation uses the more modern bit numbering, where the LSB is bit 0. CIR(15:0) therefore means bits 15 down to 0 of the CIR, written from the most significant bit of the field to the least significant, which is exactly the 16-bit address field of the instruction.
