gnrc_netif: use irq_handler instead of IPC #12462
Conversation
#11256 seems to pass with this mechanism (add an
If #12459 gets merged first, I will naturally rebase.
This feels to me like the current network stack is being changed to solve an issue within a single radio. Why not use the irq_handler at the at86rf2xx radio level instead of modifying the behaviour for all radios?
The fact is, this issue is present in all network devices. E.g. for the MRF24J40, if an RX IRQ is triggered while the CPU is processing this line, a NETDEV_MSG_TYPE_EVENT will be enqueued. That means this message will only be processed right after sending. At that point, the FB was already overwritten by the TX procedure. Radio interrupts should be processed as soon as possible, which is what the irq_handler mechanism is designed for.
Yes, he is right: some devices have different TX and RX buffers, so those radios wouldn't be affected by this problem. I know this problem is present at least in the
To continue with the discussion before
Note this is not a change in the network stack but in the way we process the radio events. Processing the radio IRQ shouldn't be a task of the network stack; otherwise we end up with a lot of duplication and stacks that only support a handful of radios (OpenThread only supports the at86rf2xx, I think only a couple of drivers work with lwIP, etc.).
Adding this to the `at86rf2xx` driver solves the issue.
This has become stalled due to the discussion of #12459. However, the intention is still the same: process the IRQ events from network devices via OS mechanisms instead of from the network stack. That way, network event processing is independent of the chosen network stack.
Since #13669 is imminent, I propose to close this one.
Contribution description
This PR uses the `irq_handler` module to process network device events (`NETDEV_EVENT_TYPE_ISR`), instead of waiting for `NETDEV_MSG_TYPE_EVENT` IPC messages in the `gnrc_netif` thread. The implications of this are:

- This is the root cause of #11256. Using the `irq_handler` module, the packet gets processed first.
- `isr` field of the `netdev_driver_t` structure (that's the goal, but I'm trying to get there in small steps)

I'm aware this increases RAM consumption because of the extra irq_handler thread, but the effect is mitigated if other modules (drivers, etc.) also use the irq_handler mechanism. Also, if the gnrc_netif events are processed from irq_handlers (or directly in the thread that calls gnrc's send, get and set functions), then it's possible to remove the gnrc_netif thread.
Testing procedure
I would recommend running some of the Release Specs tests for this one (e.g. the ICMP stress test).
Issues/PRs references
#11483