Processor load #52
Does the load appear after a long time, or as soon as QLog is connected to the DXC?
Have you set the "Spot Aging" time in the Bandmap, or have you set "Clear older than" to "never" in the Bandmap widget?
Unfortunately, I did not note down the spot rate during the measurement. You could make up for it at the next contest, hi.
The high processor load only occurs after some time, when the spot rate is high. The value then remains relatively constant at around 25%. Right now the spot rate is around 16,322 spots per hour and the processor load is around 1% to 4%. I use the telnet DX cluster "telnet.reversebeacon.net:7300" because it gives the best results for CW operation, so the total number of automatic spots is of course high here. In CQRLog I also use the "Callsign alert" function, because it lets me see relatively quickly where known radio partners are active.
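For context, here is a minimal sketch of what "connected to the DXC" means at the socket level, assuming Qt's QTcpSocket. This is not QLog's actual code; the function name `connectToCluster` and the handling inside the lambda are illustrative only:

```cpp
#include <QTcpSocket>
#include <QObject>
#include <QByteArray>
#include <QString>

void connectToCluster(QObject *parent)
{
    auto *sock = new QTcpSocket(parent);
    QObject::connect(sock, &QTcpSocket::readyRead, sock, [sock]() {
        while (sock->canReadLine()) {
            const QByteArray line = sock->readLine().trimmed();
            // Each line is one spot announcement, roughly of the form
            // "DX de DL1ABC-#:  7012.0  OK1XYZ  CW 12 dB 24 WPM CQ 1234Z".
            // At ~16,000 spots/h this handler already fires 4-5 times/s.
            Q_UNUSED(line);
        }
    });
    sock->connectToHost(QStringLiteral("telnet.reversebeacon.net"), 7300);
    // Note: RBN asks for a callsign on connect; a real client would also
    // write the login line to the socket after the prompt.
}
```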
The spot aging period is set to "Clear older than 30 Minutes". This could certainly be set shorter for high spot rates. In CQRLog, at high spot rates, the lines in the telnet DX cluster window are no longer displayed completely after some time, and then the DX cluster window freezes entirely. This is not the case in QLog. Saku has already improved the behavior a bit, but could not solve the problem completely. Apparently the processing of the DX cluster spots generally requires a high processor load.
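To illustrate what a setting like "Clear older than 30 Minutes" amounts to, here is a minimal, hypothetical sketch in Qt/C++ (QLog is a Qt application, but this is not its actual code; `DxSpot`, `pruneOldSpots`, and `installAgingTimer` are made-up names):

```cpp
#include <QTimer>
#include <QDateTime>
#include <QList>
#include <QString>
#include <algorithm>

struct DxSpot {
    QString callsign;
    double freqKHz;
    QDateTime receivedAt;
};

// Remove spots older than maxAgeSecs; called periodically from a QTimer.
void pruneOldSpots(QList<DxSpot> &spots, int maxAgeSecs)
{
    const QDateTime cutoff = QDateTime::currentDateTimeUtc().addSecs(-maxAgeSecs);
    spots.erase(std::remove_if(spots.begin(), spots.end(),
                               [&cutoff](const DxSpot &s) { return s.receivedAt < cutoff; }),
                spots.end());
}

// Wiring: with "Clear older than 30 Minutes" the list is trimmed once a
// minute; at high spot rates a shorter window keeps the list (and the
// redraw cost) smaller.
void installAgingTimer(QObject *parent, QList<DxSpot> *spots)
{
    auto *timer = new QTimer(parent);
    QObject::connect(timer, &QTimer::timeout, parent, [spots]() {
        pruneOldSpots(*spots, 30 * 60);  // 30 minutes, per the setting above
    });
    timer->start(60 * 1000);
}
```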
I have identified the following bottlenecks in DXC processing:
At this moment I have no quick solution for it; the fix will not be part of v0.9.
I forgot to mention: Callsign alerting will be implemented in the next release (hopefully).
I can see that you have tested v0.9.0. Do you see any improvement in CPU load?
Hi, I have only had a quick look at testing_0.9 so far. The processor load is currently at 1% to 4%, but the spot rate is currently only about 10,200 spots per hour. This would have to be checked again at the weekend, when the spot rates are much higher.
At the moment, RBN has a spot rate of about 61,600 spots/h. The CPU load fluctuates between approx. 5% and 12%; it is not obvious why the load fluctuates so much. A better assessment is not possible at the moment, and it would be necessary to wait for higher spot rates. I use a filter here: CW, all HF bands, all continents, spotter continent Europe only. This reduces the lines displayed in the DX Cluster widget. If I now restrict the spots to one band, e.g. 80m, the lines in the DX Cluster widget are reduced even more. However, this does not reduce the CPU load. So my question: are the database queries done for each spot received from RBN before the filter, or only for the filtered spots? If the database queries were placed behind the filter, shouldn't it be possible to reduce their number, at least for those who use filters? Or is there a flaw in my thinking?
Many thanks for your report. I have also tested the CPU load. I used FT8 RBN (it also produces huge traffic) and it looked good. My problem was that my CPU is apparently more powerful than yours; to simulate your load, I had to change the CPU profile to minimum, and even then I didn't reach the reported load. That's why I was curious about your measurements.

To your question: the filter has no impact on received spots. All spots are processed and stored in memory by QLog, because when you change the filter, QLog has to show the old spots; therefore spots cannot be filtered out immediately after the receive function.

To improve performance, I added memory caches and other improvements. What I found out is that the database is not such a bottleneck. There was an issue with redrawing the bandmap: the original QLog redrew the whole bandmap after EVERY received spot, whereas QLog now redraws only the necessary elements. Unfortunately, if there is a large number of spots on the map, redrawing still takes some CPU time. But I admit it is necessary to look at it again later, because memory management in particular is not completely under control yet. 61,000 spots/h is OK when QLog runs for a few hours; however, if it runs for, say, a day, I think it could be a problem.
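A minimal sketch of the design described above, i.e. enrich-then-cache with filtering applied only at display time. This is not QLog's actual source; `SpotRecord`, `DxcFilter`, and `SpotCache` are hypothetical names used for illustration:

```cpp
#include <QString>
#include <QVector>

// Hypothetical enriched spot record; QLog computes parameters like these
// for every spot (DXCC, logbook status, spotter country, ...).
struct SpotRecord {
    QString dxCall;
    QString spotterCall;
    double freqKHz = 0.0;
    int dxccId = 0;
    int spotterDxccId = 0;
    bool isNewEntity = false;   // logbook status: new vs. worked
};

struct DxcFilter {
    // Display-time decision; a real filter would check band, mode,
    // continents, etc.
    bool matches(const SpotRecord &) const { return true; }
};

class SpotCache {
public:
    // Called for EVERY received spot, before any filtering. This is the
    // per-spot cost (parsing plus DXCC/logbook lookups) that a display
    // filter cannot remove.
    void onSpotReceived(const QString &rawLine)
    {
        all.append(enrich(rawLine));
    }

    // Filtering happens only when the view is built, so the stored spots
    // can be re-filtered whenever the user changes the filter.
    QVector<SpotRecord> visible(const DxcFilter &f) const
    {
        QVector<SpotRecord> out;
        for (const SpotRecord &s : all)
            if (f.matches(s))
                out.append(s);
        return out;
    }

private:
    static SpotRecord enrich(const QString &rawLine)
    {
        SpotRecord rec;
        rec.dxCall = rawLine.section(' ', 4, 4);  // placeholder parse
        // ...memory-cached DXCC and logbook lookups would fill the rest...
        return rec;
    }

    QVector<SpotRecord> all;  // in-memory cache of all received spots
};
```

This also makes clear why tightening the filter reduces the displayed lines but not the CPU load: `onSpotReceived` runs for every spot regardless of the filter.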
Unfortunately, I can confirm your observation, with one difference: I'm able to make it work, but my CPU (i7-11800H @ 2.30GHz) is 80-90% busy. Even though you have set a DXC filter, QLog receives and handles all spots. It is important to know that only the filtered spots are shown, but all spots are processed by QLog (QLog has to determine the DXCC country, the logbook status worked/new, the spotter's DXCC and country, and many other parameters for EVERY received spot; based on these computed parameters, QLog can provide an extended DXC filter). QLog has never been designed to handle RBN traffic during the CQ WW WPX Contest CW, where the spot rate is over 100k spots/h (btw, that means around 30 spots/s on average, with high peaks). If we wanted to prepare QLog for this traffic, it would mean redesigning the logic of QLog's DXC processing. It would probably be necessary to use separate processes/threads for parsing spots, processing spots, repainting the bandmap, sending notifications via the network, etc.; currently, all these functions are done in one thread. I'll leave it as-is for now, but if there is a Pull Request that fixes/improves it, I would welcome it.
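To make the suggested redesign concrete, here is a minimal sketch of the standard Qt worker-thread pattern (`QObject::moveToThread`) that such a split could use. This is not QLog code; `SpotWorker`, `spotReady`, and the commented `Bandmap::addSpot` slot are assumptions for illustration:

```cpp
#include <QObject>
#include <QThread>
#include <QString>

// Worker that would own the heavy per-spot work (parsing, DXCC lookup,
// logbook status) off the GUI thread.
class SpotWorker : public QObject {
    Q_OBJECT
public slots:
    void processRawSpot(const QString &rawLine)
    {
        // ...heavy per-spot computation here...
        emit spotReady(rawLine);  // would emit an enriched record instead
    }
signals:
    void spotReady(const QString &enriched);
};

// GUI-side wiring: queued signal/slot connections decouple bandmap
// repainting from a ~30 spots/s parsing load.
void setupSpotWorker()
{
    auto *thread = new QThread;
    auto *worker = new SpotWorker;
    worker->moveToThread(thread);
    // e.g. QObject::connect(worker, &SpotWorker::spotReady,
    //                       bandmap, &Bandmap::addSpot);  // hypothetical slot
    QObject::connect(thread, &QThread::finished, worker, &QObject::deleteLater);
    thread->start();
}
```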
Hi,
Yes, a good idea ... The high spot rate does not affect the system in general, and you can replace RBN with a normal DX cluster in these cases. Under contest conditions RBN makes no sense anyway. From my point of view this is not a problem for the use of QLog.
I am closing the issue; the high CPU usage is currently not observed.
QLog generates a high processor load when the DX Cluster is switched on. With 'telnet.reversebeacon.net:7300' I measured up to 25%.
With the same settings, CQRLog uses up to approx. 6%.