There are actually two APIs for calibration:
- LMS_Calibrate uses the channel bandwidth and runs calibration routines 1 and 2, i.e. DC offset, LO leakage (Tx), and phase/gain imbalance correction.
- LMS_SetLPFBW uses the LPF bandwidth and runs calibration routines 3 and 4, i.e. analog filter bandwidth tuning.

The code calls LMS_Calibrate after LMS_SetLPFBW and passes it the LPF bandwidth, which seems quite wrong. Moreover, since LMS_SetLPFBW itself invokes calibration routines, all streams should be suspended while it runs.
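For illustration, here is a minimal sketch of the ordering being argued for, using the LimeSuite C API. The function name, device/stream handles, channel index and bandwidth values are placeholders, not code from the project:

```c
#include <lime/LimeSuite.h>
#include <stddef.h>

/* Sketch only: stop streaming, tune the analog LPF (routines 3 and 4),
 * then run LMS_Calibrate with the channel bandwidth (routines 1 and 2),
 * and finally resume streaming. All handles and values are placeholders. */
int recalibrate_rx(lms_device_t *dev, lms_stream_t *rx_stream, size_t chan,
                   double lpf_bw_hz, double channel_bw_hz)
{
    if (LMS_StopStream(rx_stream) != 0)                       /* suspend the stream */
        return -1;

    if (LMS_SetLPFBW(dev, LMS_CH_RX, chan, lpf_bw_hz) != 0)   /* routines 3-4 */
        return -1;

    if (LMS_Calibrate(dev, LMS_CH_RX, chan, channel_bw_hz, 0) != 0) /* routines 1-2 */
        return -1;

    return LMS_StartStream(rx_stream);                        /* resume the stream */
}
```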
Edouard, I have tried version 3.8.2. Changing the LP filter to anything above 30 MHz no longer causes buffer underrun problems, BUT the value of the LP filter does not seem to be applied: no matter what I set it to, nothing changes in terms of apparent gain in the spectrum display.
You have to use a very large host sample rate to be able to see any change in the LP filter above 30 MHz, i.e. a rate significantly above that figure, such as 40 MS/s, and I am not sure that is workable under all conditions.
If you try reasonable values, e.g. tuning the filter to a few MHz with a 10 MS/s host sample rate and no software decimation, you will be able to see the difference (it does for me).
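As a rough sketch of such a test (the function name and values are illustrative, and the device is assumed to be already opened and initialised):

```c
#include <lime/LimeSuite.h>
#include <stddef.h>

/* Illustrative setup: 10 MS/s host sample rate with a 3 MHz RX LPF cutoff,
 * so the filter roll-off falls well inside the observed bandwidth. */
int configure_lpf_test(lms_device_t *dev, size_t chan)
{
    if (LMS_SetSampleRate(dev, 10e6, 4) != 0)        /* 10 MS/s host rate, 4x oversampling */
        return -1;

    return LMS_SetLPFBW(dev, LMS_CH_RX, chan, 3e6);  /* LPF tuned to a few MHz */
}
```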
The calibration process is a little bit more complex than originally thought. See: https://wiki.myriadrf.org/LMS7002Mr3_Calibration_Using_MCU