I'd like to take advantage of symmetry in my SVG designs, but most viewers (including Firefox and Chromium-based browsers) render gaps or lines where the numbers clearly tell me there shouldn't be any. Zooming in so the design renders larger reduces the number and thickness of those wrongly rendered lines.
Am I misunderstanding the SVG format? Is it too resource-intensive to implement SVG rendering in a mathematically correct way? Or does nobody care about such edge cases?
An example to better illustrate what I'm talking about:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg xmlns:xlink="http://www.w3.org/1999/xlink" xmlns="http://www.w3.org/2000/svg" width="98" height="98" fill="red" fill-opacity="0.7">
<g id="2">
<g id="4">
<path id="p" d="m49,35c-3,0 -4.575713,0.605785 -6.9934,1.888339L31.330807,18.397364 37,15C38,7.76 38,6 39,0.7 39.7,0 44,0 49,0Z"/>
<use xlink:href="#p" transform="rotate(-60, 49, 49)"/>
<use xlink:href="#p" transform="translate(98) scale(-1, 1) rotate(60, 49, 49)"/>
</g>
<use xlink:href="#4" transform="translate(98) scale(-1, 1)"/>
</g>
<use xlink:href="#2" transform="rotate(180, 49, 49)"/>
</svg>
I found in If two partially opaque shapes overlap, can I show only one shape where they overlap? that it's possible to use filters so that overlapping shapes behave, opacity-wise, as if they didn't overlap. But it still bugs me that workarounds like that are required: they increase complexity and file size, and decrease compatibility with viewers that don't implement those filters.
Application of that workaround to above example:
<svg width="98" height="98" fill-opacity=".7" fill="red" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<filter id="f">
<feComponentTransfer>
<feFuncA type="table" tableValues="0 .7 .7" />
</feComponentTransfer>
</filter>
<g filter="url(#f)">
<g id="2">
<g id="4">
<path id="p" d="m51 35c-3-0-6 0-10 3l-15-15 11-7c2-8 1-8 2-14 0.7-0.7 7-0.7 12-0.7"/>
<use transform="rotate(-60 49 49)" xlink:href="#p"/>
<use transform="translate(98) scale(-1 1) rotate(60 49 49)" xlink:href="#p"/>
</g>
<use transform="translate(98) scale(-1 1)" xlink:href="#4"/>
</g>
<use transform="rotate(180 49 49)" xlink:href="#2"/>
</g>
</svg>
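For reference, the remapping that filter performs is just the piecewise-linear table lookup defined for feFuncA type="table" in the SVG spec. Here is a minimal sketch in Python (the function name is mine, for illustration only) showing why one shape and two overlapping shapes end up with the same opacity:

def table_transfer(c, values):
    # Map an input value c in [0, 1] through a type="table" transfer
    # function, using the linear interpolation from the SVG spec.
    n = len(values) - 1                      # number of interpolation segments
    if c >= 1.0:
        return values[-1]
    k = int(c * n)                           # segment containing c
    return values[k] + (c * n - k) * (values[k + 1] - values[k])

table = [0.0, 0.7, 0.7]                      # tableValues="0 .7 .7"
print(table_transfer(0.7, table))            # one shape at 70% alpha        -> 0.7
print(table_transfer(1 - 0.3 * 0.3, table))  # two overlapping shapes (0.91) -> 0.7

Any alpha of 0.5 or above is pinned to 0.7 by the flat second segment of the table, so singly- and doubly-covered areas come out identical.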
CodePudding user response:
Think about how any rendering engine has to render your SVG. It steps through the SVG, drawing one element at a time.
Unless a shape is a line or rectangle whose edges hit pixel boundaries exactly, the renderer needs to smooth the edges of that shape. It does that using a technique called anti-aliasing: it draws semi-transparent pixels to approximate an edge that only partially covers a pixel.
For example, if a shape covers exactly half a pixel, the renderer will draw the colour into that pixel with 50% alpha.
It then moves on and draws the next shape. Even if that shape has a mathematically congruent border, you probably won't end up with a perfect join.
Here's why.
Picture three adjacent pixels where the edge of a shape passes exactly halfway through the centre pixel.
                     ----------------------------
First shape drawn:   |  100%  |   50%  |   0%   |
                     ----------------------------
The percentages here represent the amount of the shape's colour that is drawn into each pixel. 100% in the left pixel. 50% colour (alpha) in the middle pixel. And no colour drawn into the right pixel.
Now imagine a second shape that shares a border with the first. You might expect the following to happen.
                     ----------------------------
First shape drawn:   |  100%  |   50%  |   0%   |
                     ----------------------------
Second shape drawn:  |   0%   |   50%  |  100%  |
                     ----------------------------
Resulting image:     |  100%  |  100%  |  100%  |
                     ----------------------------
But that isn't what happens. The first shape has already been rendered out as pixels. The renderer has no memory about the shape of previous things it has drawn. It only has the colour of the previously rendered pixels to go by.
When it goes to draw the middle pixel, in either pass, it blends the 50% new colour with whatever value the pixel already has. The formula will be roughly the following:
result = 0.5 * old_pixel_colour + 0.5 * new_pixel_colour
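In code, that per-channel blend (standard source-over compositing) looks roughly like this; this is only a sketch, assuming a 0-255 channel convention:

def blend(old, new, alpha):
    # Source-over blend of one colour channel; alpha is the new shape's
    # coverage of this pixel (0.0 to 1.0).
    return round((1 - alpha) * old + alpha * new)

print(blend(255, 0, 0.5))  # green channel: white under 50% red -> 128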
So, for example, let's take the pixel percentage examples from above and imagine we are drawing two red shapes onto a white background.
After the first shape is drawn, the pixels should look something like this.
rgb(255, 0, 0)          rgb(255, 128, 128)            rgb(255, 255, 255)
[0 * bg + 1.0 * red]    [0.5 * bg + 0.5 * 50%_red]    [1.0 * bg + zero_red]
Where bg represents the white background colour the pixels start with, and 50%_red means the 50% transparent red that antialiasing is using to represent a half-covered pixel.
After the second pass, the pixels will look something like this:
rgb(255, 0, 0)            rgb(255, 64, 64)                 rgb(255, 0, 0)
[1.0 * first + no_red]    [0.5 * first + 0.5 * 50%_red]    [0 * first + 1.0 * red]
Where first represents the colour of the pixel after the first shape is drawn. I hope this makes sense.
Or in terms of percentage of colour (red).
                     ----------------------------
First shape drawn:   |  100%  |   50%  |   0%   |
                     ----------------------------
Second shape drawn:  |   0%   |   50%  |  100%  |
                     ----------------------------
Resulting image:     |  100%  |   75%  |  100%  |
                     ----------------------------
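To make the whole walkthrough concrete, here is a small self-contained sketch (plain Python, names mine) that pushes the middle pixel through both passes:

def blend(old, new, alpha):
    # Source-over blend of an rgb pixel; alpha is the new shape's
    # coverage of this pixel (0.0 to 1.0).
    return tuple(round((1 - alpha) * o + alpha * n) for o, n in zip(old, new))

WHITE, RED = (255, 255, 255), (255, 0, 0)

after_first  = blend(WHITE, RED, 0.5)        # first shape covers half the pixel
after_second = blend(after_first, RED, 0.5)  # second shape covers the other half

print(after_first)   # (255, 128, 128) -- 50% red
print(after_second)  # (255, 64, 64)   -- 75% red, not the full rgb(255, 0, 0)

The middle pixel never gets back to 100% red, and that remaining 25% of background is exactly the faint line you are seeing.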
So I hope you can see why those border pixels can end up showing a faint white line: you are blending two layers of antialiased pixels.
Theoretically, a renderer could analyse the exact pixel coverage of the whole stack of shapes before compositing. But that is mathematically very complex, and it would slow down the rendering process enormously.