I am currently trying to get my growth algorithm to work on a texture.
When running in the editor everything works as expected; however, once I build the project, the whole RenderTexture becomes a solid color (red, green, or blue, depending on the color format, e.g. R8G8B8A8_UNORM) with the simulation drawn on top.
I have already tried an HDRP unlit texture shader instead of my custom transparency shader, which produced the same issue, leading me to believe the mistake lies somewhere in the compute shader that draws onto the texture. I also rebuilt the project using URP, which unfortunately produced the same result.
One other thing: I recently noticed that minimizing and maximizing the Game window at runtime in the editor more than once crashes Unity, although I can't imagine how this relates to the issue at hand.
EDIT: I just built the same project for Windows (DX11), and it works perfectly. This therefore seems to be an issue with the Metal API.
Interestingly, the minimizing/maximizing crash only occurs when vsync is enabled and the touchpad gesture is used.
Unity 2021.2.12f1 using HDRP on macOS Monterey 12.2.1.
GitHub if you would like to reproduce the error: https://github.com/whatphilipcodes/seed
Compute shader code below:
// mean filter code (DrawTrails) courtesy of https://github.com/DenizBicer/Physarum
// (Modified)
// Each #kernel tells which function to compile; you can have many kernels
#pragma kernel DrawPoints
#pragma kernel DrawTrails
// Create a RenderTexture with enableRandomWrite flag and set it
// with cs.SetTexture
// texture
shared RWTexture2D<float4> Result;
//RWTexture2D<float> TrailBuffer;
int _texres;
int _colres;
float4 _pcol;
// testing
float _decay;
// buffer
StructuredBuffer<float2> pointsBuffer;
StructuredBuffer<float4> colorsBuffer;
[numthreads(64,1,1)]
void DrawPoints (uint3 id : SV_DispatchThreadID)
{
    // Skip points parked at the origin (unused buffer slots)
    if ((pointsBuffer[id.x].x * pointsBuffer[id.x].y) > 0)
        Result[pointsBuffer[id.x].xy] = colorsBuffer[id.x % _colres];
}
[numthreads(16,16,1)]
void DrawTrails (uint3 id : SV_DispatchThreadID)
{
    float3 d = float3(1, -1, 0);

    // 3x3 mean filter: the centre pixel plus its eight neighbours,
    // accumulated (+=) and then divided by 9
    float3 value = Result[id.xy].rgb;
    value += Result[id.xy - d.xx].rgb; // -1, -1
    value += Result[id.xy - d.zx].rgb; //  0, -1
    value += Result[id.xy - d.yx].rgb; //  1, -1
    value += Result[id.xy - d.xz].rgb; // -1,  0
    value += Result[id.xy + d.xz].rgb; //  1,  0
    value += Result[id.xy + d.yx].rgb; // -1,  1
    value += Result[id.xy + d.zx].rgb; //  0,  1
    value += Result[id.xy + d.xx].rgb; //  1,  1

    // average, then fade the trail
    value = (value / 9.0) * (1.0 - _decay);

    Result[id.xy] = float4(value, 0.0);
}
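For reference, the DrawTrails pass is just a 3x3 box blur followed by a decay multiply. A NumPy sketch of the same math (the function name draw_trails is mine, and I'm assuming out-of-bounds texture reads return zero, which the zero-padding mimics):

```python
import numpy as np

def draw_trails(tex, decay):
    """3x3 mean filter plus decay, mirroring the DrawTrails kernel.

    tex: (H, W, 3) float array; out-of-range neighbours contribute 0,
    emulated here by zero-padding the texture by one pixel on each side.
    """
    p = np.pad(tex, ((1, 1), (1, 1), (0, 0)))
    acc = np.zeros_like(tex)
    h, w = tex.shape[:2]
    # Sum the centre pixel and its eight neighbours via shifted slices
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += p[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    # Average over the 9 samples, then apply the trail decay
    return (acc / 9.0) * (1.0 - decay)
```

With a uniform texture and zero decay, interior pixels are unchanged, while border pixels darken because some of their nine samples fall outside the texture.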
Any ideas are greatly appreciated!
CodePudding user response:
The way I ended up solving the issue was to add a kernel that clears every pixel to black, dispatched once from the Start() method. This presumably works because the RenderTexture's initial contents are undefined on Metal, whereas DX11 happened to hand back usable memory.
[numthreads(16,16,1)]
void SetTexture (uint3 id : SV_DispatchThreadID)
{
    Result[id.xy] = float4(0.0, 0.0, 0.0, 0.0);
}
I'm pretty sure there are far more elegant solutions, but for now this will have to do.
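For anyone else hitting this, the C# side of the clear might look roughly like the following sketch. The field names and resolution are assumptions, not code from the project; since SetTexture uses [numthreads(16,16,1)], each thread group covers a 16x16 tile, so we dispatch texres/16 groups per axis (this assumes texres is a multiple of 16):

```csharp
public ComputeShader cs;       // assumed: assigned in the Inspector
private RenderTexture trailMap;
private int texres = 1024;     // assumed texture resolution

void Start()
{
    trailMap = new RenderTexture(texres, texres, 0, RenderTextureFormat.ARGBFloat);
    trailMap.enableRandomWrite = true;  // required for RWTexture2D access
    trailMap.Create();

    // Clear the texture once so its contents are defined on Metal
    int clearKernel = cs.FindKernel("SetTexture");
    cs.SetTexture(clearKernel, "Result", trailMap);
    cs.Dispatch(clearKernel, texres / 16, texres / 16, 1);
}
```

A more elegant alternative might be a single Graphics.Blit of Texture2D.blackTexture into the RenderTexture, but the extra kernel keeps everything in the compute shader.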