Raw Files: Range of Sample Amplitudes #525
SullivanChrisJ started this conversation in General
Replies: 0 comments
One of my project goals is to measure the amplitude of incoming FM signals. I run both RTL-SDR and RSP devices with AGC off. I haven't found much helpful information on how to adjust the RSP's gain for sensitivity and dynamic range, and I'm curious how the range of amplitudes in the raw files is determined.
Most of the signal-processing work I've done with floating-point values uses samples in the range of -1 to +1 (not inclusive), which I'll usually report as a negative value in dBFS. Complex samples in the raw files can go a bit higher, since the magnitude is the root sum of squares of I and Q, so we scale down by the square root of two to bring it back into range. For power measurement (akin to the S-meter on a receiver) we take 10 times the base-10 logarithm of the mean squared magnitude. In Python, it looks like this.
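A minimal sketch of that computation, assuming the samples arrive as a NumPy array of complex values with I and Q each already in [-1, 1):

```python
import numpy as np

def power_dbfs(iq: np.ndarray) -> float:
    """Mean power of complex samples in dBFS.

    Dividing by sqrt(2) compensates for the complex magnitude
    exceeding 1 when both I and Q are near full scale.
    """
    x = iq / np.sqrt(2.0)
    return 10.0 * np.log10(np.mean(np.abs(x) ** 2))

# A full-scale complex tone (magnitude sqrt(2)) should measure ~0 dBFS.
n = np.arange(4096)
tone = np.exp(2j * np.pi * 0.1 * n)        # unit-magnitude tone
print(power_dbfs(tone * np.sqrt(2.0)))     # -> ~0.0 dBFS
print(power_dbfs(tone))                    # -> ~-3.0 dBFS (half power)
```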
In the many captures I have, I've seen a maximum value of about 40 dB on the RTL-SDR and 42 dB on the RSP. The RTL-SDR runs with a 512-point FFT and 2.048 MHz bandwidth, while the RSP runs with a 1024-point FFT and 5.120 MHz bandwidth.
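One possible contributor to those maxima, assuming the dB readings are taken straight off FFT bins: an unnormalized FFT (NumPy's default) scales a tone's peak bin with the transform length, so a 1024-point FFT reads about 6 dB above a 512-point FFT of the same full-scale tone. A minimal sketch:

```python
import numpy as np

def peak_bin_db(x: np.ndarray) -> float:
    """Peak FFT-bin level in dB, with no normalization applied."""
    return 20.0 * np.log10(np.max(np.abs(np.fft.fft(x))))

def on_bin_tone(n_pts: int) -> np.ndarray:
    """Unit-amplitude complex tone landing exactly on bin 16."""
    n = np.arange(n_pts)
    return np.exp(2j * np.pi * 16 * n / n_pts)

print(peak_bin_db(on_bin_tone(512)))   # 20*log10(512)  ~ 54.2 dB
print(peak_bin_db(on_bin_tone(1024)))  # 20*log10(1024) ~ 60.2 dB
# Dividing the FFT output by its length restores a 0 dB full-scale tone.
```

If the spectrum values are normalized by the FFT length (or by the window's coherent gain when a window is applied), the two devices' readings become directly comparable.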
What I'd like to understand is what drives these maximum sample values. I'd like to scale them down to the (-1, 1) range I'm used to and/or calibrate against a signal generator, but I'm stuck both on which parameters (if any) affect the signal magnitude and, on the RSP, on how to set up the device to maximize dynamic range and control sensitivity.
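For the scaling-down part, a hedged sketch of the conventional mappings into (-1, 1) (the function names are my own; RTL-SDR raw files are interleaved unsigned 8-bit I/Q, and I'm assuming the RSP samples are delivered in signed 16-bit containers):

```python
import numpy as np

def rtlsdr_bytes_to_complex(raw: bytes) -> np.ndarray:
    """RTL-SDR raw files: interleaved unsigned 8-bit I/Q.

    (b - 127.5) / 127.5 maps each byte into [-1, 1].
    """
    u8 = np.frombuffer(raw, dtype=np.uint8).astype(np.float32)
    iq = (u8 - 127.5) / 127.5
    return iq[0::2] + 1j * iq[1::2]

def rsp_int16_to_complex(raw: bytes) -> np.ndarray:
    """Assumed RSP format: interleaved signed 16-bit I/Q.

    Dividing by 2**15 maps the full int16 range into [-1, 1).
    """
    i16 = np.frombuffer(raw, dtype=np.int16).astype(np.float32) / 2**15
    return i16[0::2] + 1j * i16[1::2]

samples = rtlsdr_bytes_to_complex(bytes([0, 255, 128, 127]))
print(samples)  # approximately [-1+1j, 0.004-0.004j]
```

With samples normalized this way, `power_dbfs`-style measurements come out as negative dBFS values, and a signal-generator calibration then only has to account for the front-end gain settings.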