#include <stdio.h>

int main()
{
    char a[10] = {0, 1, 0, 1, 0, 1, 0, 1};
    unsigned short *p;

    p = (unsigned short *)&a[0];
    *p = 1024;
    printf("%d", a[1]);
    return 0;
}
Why is the answer 4? Isn't 1024 stored in a[0], leaving a[1] untouched? Why does the write affect a[1] as well?
CodePudding user response:
I tested it myself: if you check the value of a[0] you will see it apparently never changes, while the pointer write changes the value of a[1], even though the address stored in p is the address of a[0]. Interesting behaviour. The reason is that a char is only one byte (a signed char can hold at most 127), while p is an unsigned short pointer, which is two bytes wide, so storing a value that does not fit in one byte also overwrites the next byte.
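A minimal sketch of that test (same array and pointer as in the question; the byte values in the comments assume a little-endian machine with a two-byte unsigned short):

#include <stdio.h>

int main(void)
{
    char a[10] = {0, 1, 0, 1, 0, 1, 0, 1};
    unsigned short *p = (unsigned short *)&a[0];

    *p = 1024;  /* 1024 == 0x0400: the store writes two bytes, not one */

    /* On a little-endian machine the low byte 0x00 lands in a[0]
       (so it looks unchanged) and the high byte 0x04 lands in a[1]. */
    printf("a[0] = %d\n", a[0]);  /* prints 0 */
    printf("a[1] = %d\n", a[1]);  /* prints 4 */
    return 0;
}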
CodePudding user response:
I'm reasoning on a big-endian system (most significant byte stored at the smallest address).
char a[10]={0,1,0,1,0,1,0,1};
In binary, your array initially looks like this in memory: 0000.0000 0000.0001 0000.0000 0000.0001 0000.0000 0000.0001 0000.0000 0000.0001 0000.0000 0000.0000
A char takes only one byte in memory. That means, reading from left to right (lowest address first), the first 0000.0000 is a[0], the next 0000.0001 is a[1], and so on; the last two bytes, a[8] and a[9], have no initializers and are therefore zero.
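A quick way to see that layout is to dump the array element by element; this little loop is just for illustration:

#include <stdio.h>

int main(void)
{
    char a[10] = {0, 1, 0, 1, 0, 1, 0, 1};  /* a[8] and a[9] default to 0 */

    /* print each element as a byte value, lowest address first */
    for (int i = 0; i < 10; i++)
        printf("a[%d] = %d\n", i, a[i]);
    return 0;
}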
unsigned short *p;
p=(unsigned short *)&a[0];
*p=1024;
You assigned to p the address of the start of the array. Then you dereferenced it and stored, at the address held by p, an unsigned short equal to 1024. In binary, 1024 looks like 0000.0100.0000.0000, and an unsigned short takes two bytes in memory.
So, this is what your array becomes after the modification: 0000.0100 0000.0000 0000.0000 0000.0001 0000.0000 0000.0001 0000.0000 0000.0001 0000.0000 0000.0000
==> What happened is that, since you treated p as a pointer to an unsigned short, the store changed the first 2 bytes instead of just the first one (as a store through a char pointer would). When you then access the data through the char array, it is read back as chars, byte by byte. So 0000.0100, which is 4 in decimal, is a[0], and 0000.0000, which is 0 in decimal, is a[1].
Note: this is what would happen on a big-endian system. Since the result you got is the opposite (a[1] is 4 and a[0] is 0), I believe your machine is little-endian, i.e. the least significant byte is stored at the smallest address.
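If you want to check which byte order your machine uses, here is a small sketch along these lines (the names v and b are just for illustration):

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned short v = 1024;    /* 0x0400 */
    unsigned char b[sizeof v];

    memcpy(b, &v, sizeof v);    /* inspect the object representation byte by byte */

    if (b[0] == 0x00 && b[1] == 0x04)
        printf("little-endian: low byte first, so a[1] gets 4\n");
    else if (b[0] == 0x04 && b[1] == 0x00)
        printf("big-endian: high byte first, so a[0] gets 4\n");
    else
        printf("unusual byte order\n");
    return 0;
}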