converting grayscale 8-bit to 32-bit


I have the following code

char c = 0xEE;
int color = 0xFF000000;
color |= (int)c;
printf("%x\n", color);    

I expected the result to be 0xFF0000EE, but instead the output was

-> feffffee

What am I missing? I thought simply calculating

(int)(0xFF << 24 | c << 16 | c << 8 | c);

would give 0xFFEEEEEE, but I get 0

EDIT:

the following code seems to work:

unsigned char c = 0xEE;
unsigned int color = 0xFF000000; /* full opacity */

color |= (unsigned int)c;
color |= (unsigned int)c << 8;
color |= (unsigned int)c << 16;    
printf("-> %x\n", color); 


CodePudding user response:

char can be a signed type or an unsigned type. For you, it's apparently a signed type. You end up assigning -18, which is ffffffee when extended to 32 bits on a 2's complement machine.

Fixed:

#include <stdio.h>

int main(void) {
   unsigned char c = 0xEE;
   unsigned int color = 0xFF000000;
   color |= c;
   printf("%x\n", color);   
   return 0;
}

Portable:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
   unsigned char c = 0xEE;
   uint32_t color = 0xFF000000;
   color |= c;
   printf("%" PRIx32 "\n", color);   
   return 0;
}

CodePudding user response:

The following modifications will also result in 0xFF0000EE:

unsigned int c = 0x000000EE;
unsigned int color = 0xFF000000;
color = c | color;