What do CS_BYTEALIGNCLIENT and CS_BYTEALIGNWINDOW mean?


I have trouble understanding these two class styles. The docs say that they align the window on a byte boundary, but I don't understand what that means.

I have tried to use them, and yes, the position of the window upon creation is different, but what they do and why I would use them is still unclear to me.

CodePudding user response:

From Why did Windows 95 keep window coordinates at multiples of 8? by Raymond Chen:

The screen itself is a giant bitmap, and this means that copying data to the screen goes much faster if the x-coordinate of the destination resides on a full byte boundary. And the most common x-coordinate is the left edge of a window’s contents (known as its client area).

Applications could request that Windows position their windows so that their client area began at these advantageous coordinates by setting the CS_BYTEALIGNCLIENT style in their window class. And pretty much all applications did this because of the performance benefit it produced.

So what happened after Windows 95 that made this optimization go away?

Oh, the optimization is still there. You can still set the CS_BYTEALIGNCLIENT style today, and the system will honor it.

The thing that changed wasn’t Windows. The thing that changed was your video card.

In the Windows 95 era, the predominant graphics cards were the VGA (Video Graphics Array) and EGA (Enhanced Graphics Adapter). Older graphics cards were also supported, such as the CGA (Color Graphics Adapter) and the monochrome HGC (Hercules Graphics Card).

All of these graphics cards had something in common: They used a pixel format where multiple pixels were represented within a single byte, and therefore provided an environment where the byte-alignment requirement made certain x-coordinates ineligible positions.

Once you upgraded your graphics card and set the color resolution to “256 colors” or higher, every pixel occupies at least a full byte, so the requirement that the x-coordinate be byte-aligned is vacuously satisfied. Every coordinate is eligible.

Nowadays, all graphics cards use 32-bit color formats, and the requirement that the coordinate be aligned to a byte offset is satisfied by all x-coordinates. The multiples of 8 are no longer special.
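
To see the style in action, here is a minimal sketch (my own illustration, not from Chen's article) that registers a window class with CS_BYTEALIGNCLIENT, requests a deliberately odd x-position, and prints where the client area actually landed. The class name is invented for the example; build it as an ordinary Win32 C program linked against user32. On a display mode where the alignment matters (and on systems that honor the style), the printed x-coordinate comes out as a multiple of 8.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Register a class that asks for a byte-aligned client area.
           "ByteAlignDemo" is just a made-up name for this sketch. */
        WNDCLASSW wc = {0};
        wc.style         = CS_BYTEALIGNCLIENT;
        wc.lpfnWndProc   = DefWindowProcW;
        wc.hInstance     = GetModuleHandleW(NULL);
        wc.lpszClassName = L"ByteAlignDemo";
        RegisterClassW(&wc);

        /* Request a deliberately odd x-position (101). */
        HWND hwnd = CreateWindowW(L"ByteAlignDemo", L"demo",
                                  WS_OVERLAPPEDWINDOW,
                                  101, 100, 300, 200,
                                  NULL, NULL, wc.hInstance, NULL);

        /* Where did the client area actually end up on screen? */
        POINT origin = {0, 0};
        ClientToScreen(hwnd, &origin);
        printf("client origin x = %ld\n", origin.x);

        DestroyWindow(hwnd);
        return 0;
    }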

CodePudding user response:

What do they do and why would I use them?

With modern display technology and GPUs, they (probably) do very little in terms of performance.

In older times, though, a (potentially slow) CPU would need to write blocks of RAM directly to display memory. Where a display and/or bitmap had a colour depth of less than one byte per pixel – monochrome (1 bit per pixel) or low colour (say, 4 bpp) – a window or its client area could be positioned such that its rows did not start on an actual byte boundary. Block-copy operations (like BitBlt) then became very slow, because each destination byte had to be assembled by taking some of the bits from one source byte and some from the next, and this shift-and-mask work was repeated along every row.

Forcing the client area (CS_BYTEALIGNCLIENT) or the entire window (CS_BYTEALIGNWINDOW) to have its x-origin aligned to a true byte boundary – these styles only affect the x-position – allows much faster copying, because there is then a direct correspondence between bytes in the source (RAM) and bytes in the target (display memory); each row can be block-copied with something akin to memcpy, without any manipulation of individual bits from different bytes.

As a vague analogy, consider the difference (in speed and simplicity) between: (a) copying one array of n bytes to another of the same size; and (b) replacing each byte in the second array with the combination of the lower 4 bits of one source element with the higher 4 bits of the following source element.
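
That analogy translates almost directly into code. A small self-contained sketch (mine, not part of the answer above; the array contents are arbitrary):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define N 8

    int main(void)
    {
        /* One spare source byte so the shifted copy can read src[i + 1]. */
        uint8_t src[N + 1] = {0x12, 0x34, 0x56, 0x78,
                              0x9A, 0xBC, 0xDE, 0xF0, 0x00};
        uint8_t fast[N], slow[N];

        /* (a) byte-aligned destination: one block copy, no bit twiddling */
        memcpy(fast, src, N);

        /* (b) destination misaligned by half a byte (one 4bpp pixel):
           every output byte must be assembled from the lower nibble of
           one source byte and the upper nibble of the next – a
           shift-and-mask per byte instead of a plain copy */
        for (size_t i = 0; i < N; i++)
            slow[i] = (uint8_t)(((src[i] & 0x0F) << 4) | (src[i + 1] >> 4));

        for (size_t i = 0; i < N; i++)
            printf("fast %02X   slow %02X\n", fast[i], slow[i]);
        return 0;
    }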
