I'm sorry if this is a stupid question, but I want to check whether I'm right or not. Suppose we have an 8x8-pixel screen and we want to represent a 2x2 square; a pixel can be black (1) or white (0). I would imagine this as an 8x8 matrix:
[[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,1,1,0,0,0],
[0,0,0,1,1,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0]]
Using this matrix, we paint the pixels and update them (for example) every second. We also have the coordinates of the pixels representing the square: (4,4), (4,5), (5,4), (5,5), and if we want to move the square, we add 1 to the x part of each coordinate.
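For concreteness, here is roughly what I mean in Python (the names are just mine):

    # Rough sketch of the idea above.
    screen = [[0] * 8 for _ in range(8)]   # 8x8 framebuffer: 0 = white, 1 = black

    square = [(4, 4), (4, 5), (5, 4), (5, 5)]  # 1-based (row, col) pixels of the square

    for r, c in square:
        screen[r - 1][c - 1] = 1  # lists are 0-based, so shift by one

    for row in screen:
        print(row)  # prints the matrix shown above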
Is this correct, or not?
Answer:
Assuming you want to move a group of pixels one position to the right, then yes: all you need to do is identify the group of pixels and add 1 to their X coordinate. Of course, you also need to fill in the vacated spots with zeroes; otherwise you would have copied the square rather than moved it.
Keep in mind that my answer is a bit naive, in the sense that when you reach the rightmost boundary, you have to wrap around.
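A minimal sketch of that erase-then-redraw step in Python, assuming 0-based (x, y) coordinates where x is the column (the names here are just illustrative):

    WIDTH = 8  # screen width in pixels

    def move_right(screen, pixels):
        """Erase the group, add 1 to every X (wrapping at the edge), redraw."""
        for x, y in pixels:
            screen[y][x] = 0                          # fill the vacated spot with zero
        moved = [((x + 1) % WIDTH, y) for x, y in pixels]
        for x, y in moved:
            screen[y][x] = 1
        return moved

    # Usage: a 2x2 square that crawls one pixel to the right per update.
    screen = [[0] * WIDTH for _ in range(8)]
    square = [(3, 3), (4, 3), (3, 4), (4, 4)]         # 0-based (x, y) pixels
    for x, y in square:
        screen[y][x] = 1
    square = move_right(screen, square)               # square is now one pixel to the right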
Answer:
Graphics rendering is a complex mesh of art, mathematics, and hardware, assuming you're asking about how the screen actually works rather than about a toy problem of simulating displays.
The buffer you described in the question is the interface that software uses to tell the hardware (the video card) what to draw on the screen; how the drawing actually happens is in the realm of hardware. Hence, the logic for manipulating graphics objects (the things you want drawn) is separate from the rendering process itself. Your program tells the buffer which pixels to update, and that's all; this can happen as often as you like, regardless of whether the hardware actually manages to flush its buffers to the screen.
The software is responsible for working out what exactly to draw on the screen, and this is usually handled on multiple logical levels. Higher levels construct a virtual world space for your objects and determine their interactions and attributes (position, velocity, collision, etc.), as well as a camera that determines the field of view (FOV) the screen should display (if your world is 3D). Lower levels then figure out the actual pixel values to write to the buffer, based on the camera's FOV (3D), or on plain pixel coordinates after applying the desired transformations (rotation, shear, resizing, etc.) to the associated image (2D).
It should be noted that virtual world-space coordinates do not necessarily match pixel coordinates, even in 2D worlds. I'm not an expert on this subject, frankly, but I suspect it's easier to first determine how far you want the object to move in virtual space, and then apply the necessary transformations to show the result in a viewing window with customizable dimensions.
In short, you probably don't want to 'add 1 to x' when you want to move something on screen; you move it in a higher abstraction layer and then draw the result. This will save you a lot of trouble, especially if you have a complex scene with all kinds of objects and a background.
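As a rough illustration of that separation (a minimal sketch, not a real engine; every name here is made up, and it assumes 1 world unit maps to 1 pixel):

    # "Move in world space, then render": the object lives in float coordinates,
    # and only the render step decides which pixels light up.

    class Square:
        def __init__(self, x, y, size):
            self.x, self.y = x, y   # world-space position (floats, not pixels)
            self.size = size        # side length in world units

    def update(square, vx, dt):
        """High level: move the object in the virtual world."""
        square.x += vx * dt

    def render(square, width=8, height=8):
        """Low level: quantize world coordinates to pixels and fill the buffer."""
        screen = [[0] * width for _ in range(height)]
        x0 = int(square.x + 0.5)    # round to the nearest pixel column
        y0 = int(square.y + 0.5)    # round to the nearest pixel row
        for y in range(y0, y0 + square.size):
            for x in range(x0, x0 + square.size):
                if 0 <= x < width and 0 <= y < height:   # clip at screen edges
                    screen[y][x] = 1
        return screen

    sq = Square(3.0, 3.0, 2)
    update(sq, vx=1.5, dt=1.0)  # the square moved 1.5 world units to the right...
    frame = render(sq)          # ...and rendering snaps that to whole pixels

Note how the update step is free to move the square by fractional amounts; the framebuffer only ever sees whole pixels.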