I am trying to replicate a paper in which silhouette images are transformed so that they have jagged edges:
Unfortunately, the paper doesn't provide any detail about how these were generated. Do you guys have any idea how I could go about it? It doesn't need to be exactly the same; the important bit is to disrupt the local features of each object. Ideally I would like to do it in Python with OpenCV/PIL/PyTorch, but anything is OK really. Thanks.
CodePudding user response:
Just scale the silhouettes down to, say, 10% of their original size, then scale them back up again. Optionally, threshold the result at 50% if you want a pure bi-level image... or use "Nearest Neighbour" interpolation when scaling to avoid introducing shades of grey in the first place. Scale to a smaller percentage for more jaggies.
Your top half scaled down to 10% then back up:
And to 20%: