I've been making a music game in which I use myAudioSource.GetSpectrumData() to get the spectrum. But I noticed that the spectrum values are directly proportional to the volume of the sound: the bigger the volume, the bigger the spectrum values. That causes problems: at zero volume the game stops working, and at high volume it becomes too sensitive and behaves incorrectly. I've tried to normalize the spectrum, but it still correlates with the volume, just mapped into [0, 1]. Apply some transformation based on the volume? It will still correlate.
Is there a way to get the spectrum data, or transform it somehow, so that it is independent of the volume?
For example, if a song is very loud, most of the normalized spectrum values end up in [0.9, 1]; if the same song is made quieter, they end up in the [0.1, 0.3] interval.
Code:
private AudioSource src;
private float[] _samplesArray;

private void Start()
{
    // Assumes the AudioSource sits on the same GameObject
    src = GetComponent<AudioSource>();
    _samplesArray = new float[64];
}

private void Update()
{
    // Every frame fill _samplesArray (size 64) with the current spectrum
    src.GetSpectrumData(_samplesArray, 0, FFTWindow.Rectangular);
}
CodePudding user response:
That makes total sense:
- If the volume is low, you get lower values / ranges / peaks across the entire spectrum
- If the volume is high, you can expect the entire spectrum to shift into a higher range
You can / have to simply normalize the spectrum by the current volume, e.g.
using System.Linq;
...
// Divide every sample by the current AudioSource volume (allocates a new array each frame)
_samplesArray = _samplesArray.Select(s => s / src.volume).ToArray();
or without Linq, which may be easier to understand:
for (var i = 0; i < _samplesArray.Length; i++)
{
    // Scale each bin by the inverse of the current volume
    _samplesArray[i] /= src.volume;
}
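Note that dividing by src.volume breaks down when the volume is exactly zero, which the question mentions as one of the problem cases. Below is a minimal sketch of how the normalization could be combined with the Update() loop from the question; the zero-volume guard and the Mathf.Epsilon threshold are my own additions, not part of the original answer:

private void Update()
{
    src.GetSpectrumData(_samplesArray, 0, FFTWindow.Rectangular);

    // Guard against division by zero when the AudioSource is (almost) muted.
    // Assumption: at zero volume there is no signal, so the samples are left at zero.
    if (src.volume > Mathf.Epsilon)
    {
        for (var i = 0; i < _samplesArray.Length; i++)
        {
            _samplesArray[i] /= src.volume;
        }
    }
}

With the guard in place, a muted AudioSource simply yields an all-zero spectrum instead of NaN or Infinity values, while any non-zero volume produces the same normalized spectrum shape.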