IEEE802.1Qbv scheduling #38
Replies: 18 comments
-
Hello Ananya,
Best,
-
Hello Timo, thank you for the clarification. I have another question: there is a 'payload' parameter in TrafficSourceAppBase, so I can set the payload of each periodic flow's frames through it. However, can you tell me the header size of a frame as used in the model? I work with the total frame size in my calculations and formulations, so I need to subtract the header size to derive the payload size I pass to CoRE4INET. Thanks and regards,
-
Hello Ananya, yes, this is correct: you can set the size of the payload transmitted periodically via the payload parameter of each source app.
Our applications transmit their data directly via layer 2 Ethernet frames. If you would like to simulate higher-layer traffic, you can use the applications of the INET framework and configure their traffic class with our adapted network layer modules. Best,
-
Hello, I wish you all a happy new year. For my scenarios, I was getting delays greater than the expected values. Therefore, I created a simple scenario: only 2 nodes connected directly to each other, transmitting packets of a certain size periodically. The measured delay came out 0.032 us more than the sum of transmission delay and propagation delay, assuming a propagation speed of 2*10^8 m/s. Thanks and regards,
-
Hello, a happy new year to you as well. Apart from that, our nodes allow setting additional processing delays, which all default to 0, e.g. `double hardware_delay @unit(s) = default(0us);`. In a small test setup recreated from your description, I could verify that our modules do not introduce an additional delay; the delay depends solely on the channel. If you have opposing findings, please send me the network so I can check where the delay is created. There might be an error in your calculation, though, as 0.032us corresponds to 4 additional bytes on a gigabit link. Maybe you did not account for the 4 bytes of added header information in Q-frames. Best,
-
Hello Timo, I am sending packets with a 100 byte payload at a 1 ms interval over a 20 m Eth1G link. Considering the standard Ethernet header size of 18 bytes and the additional 4 bytes for Q-frames, the total frame size should be 100 + 18 + 4 = 122 bytes, right? So the sum of transmission delay and propagation delay in microseconds is 122*8/1000 + 20/200 = 0.976 + 0.1 = 1.076 us. However, the rxlatency statistic shows 1.108 us (0.032 us higher). Thanks and regards,
-
Hello Ananya, I've built a network using your specifications and I think I found the mistake in your delay calculation. It is important to note that the transmission is handled on the Ethernet physical layer (layer 1), which adds a 7 byte preamble and a 1 byte SFD. These need to be taken into account when calculating the number of transmitted bytes. Regards,
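The arithmetic can be written out as a short sketch. The byte counts are the ones quoted in this thread (18 B Ethernet header incl. FCS, 4 B 802.1Q tag, 7 B preamble + 1 B SFD); whether the rxlatency statistic timestamps exactly these bytes is a model detail this sketch does not settle:

```python
# Sketch of the delay arithmetic discussed above; byte counts are the
# values quoted in this thread, not read from the simulation model.

ETH_HEADER = 18   # dst + src + EtherType + FCS, in bytes
Q_TAG = 4         # additional 802.1Q tag bytes
PREAMBLE_SFD = 8  # 7 B preamble + 1 B start-of-frame delimiter

def wire_bytes(payload):
    """Bytes actually transmitted on the line for one Q-tagged frame."""
    return payload + ETH_HEADER + Q_TAG + PREAMBLE_SFD

def tx_delay_us(num_bytes):
    """Transmission delay in microseconds on a 1 Gbit/s link."""
    return num_bytes * 8 / 1000

def prop_delay_us(length_m, speed_m_per_s=2e8):
    """Propagation delay in microseconds for a copper link."""
    return length_m / speed_m_per_s * 1e6

total = wire_bytes(100)  # 130 bytes on the wire for a 100 B payload
latency = tx_delay_us(total) + prop_delay_us(20)  # 1.04 us tx + 0.1 us prop
print(total, latency)
# Per Timo's comment, 4 missing bytes alone account for 0.032 us at 1 Gbit/s:
print(tx_delay_us(4))
```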
-
Hello Timo, consider the scenario of transmitting a single flow using IEEE8021QTrafficSourceApp from node0 to node1 via switch0. node0 and node1 are connected to switch0 via 20 m Eth1G links. The flow has priority 7 and a 100 byte payload (so effectively 130 bytes), and the gate for queue7 of switch0 is opened for exactly the transmission delay of one frame (130*8/1000 = 1.04 us) at the beginning of each cycle (cycle duration 97.72 us). With the default startTime = 0s, if I send only one frame (I did so by setting nodes[0].app[0].sendInterval in nodes.ini and sim-time-limit in omnetpp.ini both to 1s), the rxlatency at node1 for that frame is 98.86 us = 97.72 (cycle length) + 1.04 (transmission delay) + 0.1 (propagation delay). This can be understood as the packet missing the gate opening of the first cycle and hence being transmitted at the beginning of the next one.

1st question: since both node0 and node1 are connected to the switch by 20 m links, why is the propagation delay not counted twice in the rxlatency calculation? Also, shouldn't the transmission delay be counted twice (once from node0 and once from the switch)?

2nd question: in no case should the rxlatency exceed 98.86 us (assuming the delay calculation is correct). However, if I run a longer simulation with more frames of the flow (for example, sendInterval = 20ms), it goes as high as 110 us in the statistics (histogram for rxlatency queue7). What could be the reason? I have changed the scheduler tick length to 1 ns and the precision to 0.1 ns in the omnetpp.ini file. Keeping the scheduler tick at 80 ns (as in the TSN example), the delay came out even higher than 98.86 us in the single-frame scenario. Also, could the oscillator parameters max_drift and drift_change be causing this?

3rd question: shouldn't the gate open duration per cycle equal the transmission delay of the flow? If I try a gate open duration much less than 1.04 us, the frame still goes through.
I am pasting parts of the configuration here for your reference:

Configuration in nodes.ini:
**.nodes[].phy[].taggedVIDs = "1"

Configuration in switches.ini:
**.switches[0].phy[].taggedVIDs = "1"

Connections in network.ned:
connections:

Configuration in omnetpp.ini:
**.scheduler.tick = 1ns

Thanks and regards,
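The latency bound in the 2nd question can be written down as a quick sanity check. This is only a sketch using the figures quoted in this message (97.72 us cycle, 130 B frame on a 1 Gbit/s link, 20 m links), not values read from the model:

```python
# Sketch of the latency bounds discussed in this thread.

CYCLE_US = 97.72               # Qbv cycle duration
TX_US = 130 * 8 / 1000         # 1.04 us transmission delay per frame
PROP_US = 20 / 2e8 * 1e6       # 0.1 us propagation delay per 20 m link

# A frame that just misses the gate opening waits one full cycle:
bound_single = CYCLE_US + TX_US + PROP_US             # ~98.86 us
# Counting tx and propagation once per hop (node -> switch -> node):
bound_two_hops = CYCLE_US + 2 * TX_US + 2 * PROP_US   # ~100.0 us
print(bound_single, bound_two_hops)
```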
-
Hello Timo,
Another question: how long should a gate be open to transmit a frame? Just the transmission delay, or less, or more? If I increase the gate open duration beyond 1.04 us, the number of outliers decreases, but that should not be my approach, since I want to keep the gate open times at a minimum. Also, since I keep the cycle duration much smaller than the flow period of 20 ms, any frame should be transmitted by at most the next cycle, i.e. within 97.72 + 1.04 + 0.1 = 98.86 us, or, counting transmission and propagation delay twice, 97.72 + 2*1.04 + 2*0.1 = 100 us. Thanks and regards,
-
Hello Ananya,
The problem with your calculations is probably the processing delay of the switches. If you disable it, the results should match your expectations. Regards,
-
Hello Ananya, to add to my last comment: I forgot to comment on the clock jitter description. The clock jitter should have no impact on the maximum transmission delay. On the other hand, it will influence how precisely the application can hit the time slot in the gate controls. Best regards,
-
Hello Timo, I have another question. As I understand your implementation, the priority of a flow determines which queue it goes through at every switch. However, for the creation of my offline schedule, taking multiple switches into account, the queue assignment of a flow is not fixed by its priority: a flow can be assigned to queue7 of switch 1, then queue6 of switch 2, and so on. Is this possible with your current implementation? Otherwise, could you point me to the portion of the code I would need to edit?

This is in regard to your clarification on point 3. Suppose queue7 is opened from time 0 to time 1.04 and queue6 from 1.04 to 1.76. If the frame assigned to queue7 is received at, say, time 1 and its transmission delay is 1.04, then, as you say, it will still get transmitted even though the gate closes at 1.04, occupying the line until 1 + 1.04 = 2.04. In that case, the flow assigned to queue6 will not get transmitted at all, since it was scheduled from 1.04 to 1.76, right? Thanks and regards,
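The overlap concern in this message can be checked with a small timeline sketch (times in microseconds). It assumes, as stated in this exchange, that a transmission already in progress is not cut off when its gate closes:

```python
# Sketch of the gate-overlap scenario described above. Assumes no frame
# preemption: a transmission that has started runs to completion even
# after its gate closes, as discussed in this thread.

TX = 1.04                  # transmission delay of one frame, in us

gate7 = (0.0, 1.04)        # queue7 window within the cycle
gate6 = (1.04, 1.76)       # queue6 window within the cycle

start7 = 1.0               # queue7 frame starts just before its gate closes
end7 = start7 + TX         # 2.04: the line stays busy past gate6's window

# queue6 can only start once the line is free AND its gate is still open;
# here the line frees at 2.04, after gate6 already closed at 1.76:
queue6_blocked = end7 >= gate6[1]
print(queue6_blocked)      # True: queue6's slot is lost for this cycle
```

This is exactly the situation a guard band (mentioned in the next reply) is meant to prevent.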
-
Hello Ananya, I do not quite understand what you actually want to do with this concept. Do you want to change the PCP encoded in the frame at a boundary port of your network? This would require a modification of the in-control modules of 802.1Q/Qci (src/core4inet/linklayer/inControl/IEEE8021Q). Or do you want to configure a custom queueing method, where on certain devices the PCPs are interpreted in a different manner? This would require a custom queueing module (src/core4inet/linklayer/shaper/IEEE8021Qbv/queueing).

Regarding the schedule calculation: that is exactly the point I was trying to make. It is common practice to introduce guard bands/red phases (all gates closed) the size of a full Ethernet frame to ensure that the band is free when the high-priority scheduled message arrives. Another possibility is frame preemption which, as I mentioned in an earlier comment, is currently not implemented in our simulation model. Usually, you would use time-synchronized hosts to ensure that the sending nodes hit their time slots precisely in such a tight schedule. Best,
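The guard-band suggestion can be quantified with a quick sketch. The frame-size constants below are standard Ethernet values, not figures from the simulation model, and whether to additionally budget the interpacket gap is a design choice:

```python
# Sketch of guard-band sizing as described above: keep all gates closed
# for the duration of one full-size Ethernet frame before the scheduled
# slot, so the line is guaranteed idle when the slot starts.
# Constants are standard Ethernet values (assumption, not model output).

MAX_Q_FRAME = 1522    # max Ethernet frame incl. 4 B 802.1Q tag, in bytes
PREAMBLE_SFD = 8      # 7 B preamble + 1 B SFD added on the line
LINK_MBIT = 1000      # Eth1G

guard_band_us = (MAX_Q_FRAME + PREAMBLE_SFD) * 8 / LINK_MBIT
print(guard_band_us)  # all-gates-closed time needed before the slot
```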
-
Hello Timo, thank you for the clarification. Regarding the point of changing the queue assignment at another switch: I am trying to create an offline schedule based on some formulations. I was trying to make the queue assignment/priority per switch an output of the problem, but now I see that once a flow is assigned to a queue, it needs to stay the same throughout.

Configuration in omnetpp.ini:

Using the 6th queue in switch.ini:

Thanks and regards,
-
Hello Timo, Thanks and regards,
-
Hello Ananya, for the clock drift, you can check out our example simulations; most of the time we use a precision of 500 ns, which is enough if your schedules are in the area of multiple microseconds. Yes, there is a gap between Ethernet frames called the interpacket gap: https://en.wikipedia.org/wiki/Interpacket_gap

I suggest you find the reason why the packets are stuck in the queue using the simulation GUI and step-by-step simulation. Start your network, move into the switch and into the phy[] module where you expect the packets to queue, then move to the shaper module. Check which states the gates are in when the packets arrive and how the frames are selected. All the calls should be animated, and it should be very clear why a packet cannot be transmitted. As these are very specialized research questions, I hope this helps you find out what's wrong with your calculation. Best regards,
-
Dear Timo, thank you for explaining where to look. I think I understood the issue. I believe you have a lower limit of 42 bytes on the payload size of a frame. A 42 byte payload gives a latency of 0.676 us (transmission delay 0.576 us (72*8/1000), propagation delay 0.1 us). If I use a payload smaller than 42 bytes, the delay is still 0.676 us. Is there a way to use a smaller payload size? The default value of the payload in TrafficSourceAppBase.ned was 46 bytes; I have changed it to 1 byte, but the issue persists. Kindly suggest. Thanks and regards,
-
Dear Ananya, this lower limit on the frame size is enforced by the Ethernet protocol, see https://en.wikipedia.org/wiki/Ethernet_frame. If the payload is smaller than 46 bytes (BE) / 42 bytes (Q), it is padded so that the whole frame is no smaller than 64 bytes on the line. Regards,
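The padding rule can be sketched as follows. The minimum payload sizes and header bytes are the standard Ethernet values cited in this thread; the function names are illustrative, not CoRE4INET APIs:

```python
# Sketch of the Ethernet minimum-frame rule described above: payloads
# below the minimum are padded so the frame is at least 64 B (72 B
# counting preamble/SFD) on the line. Constants are standard Ethernet
# values quoted in this thread, not read from the simulation model.

def padded_payload(payload, q_tagged=True):
    """Effective payload after Ethernet minimum-size padding."""
    minimum = 42 if q_tagged else 46
    return max(payload, minimum)

def wire_bytes(payload, q_tagged=True):
    """Bytes on the line: padded payload + header/FCS (+ tag) + preamble."""
    header = 18 + (4 if q_tagged else 0)
    return padded_payload(payload, q_tagged) + header + 8

def latency_us(payload, link_m=20):
    """Tx + propagation delay on a 1 Gbit/s link of link_m metres."""
    return wire_bytes(payload) * 8 / 1000 + link_m / 2e8 * 1e6

# Any Q-tagged payload of 42 B or less yields the same 72 B on the wire,
# hence the constant 0.676 us latency Ananya observed:
print(latency_us(1), latency_us(42))
```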
-
Hello Core Group,
I have created an offline schedule for IEEE 802.1Qbv TSN and I want to use your implementation to test my required topologies and evaluate the delay characteristics. I have a couple of questions:
In the example for TSN small network, why do the hosts also have a gate control list? I believe only the switches are supposed to have a GCL.
The parameter 'sendInterval' is the periodicity of the application/flow on each node. However, can I randomize the start time of the first frame of a flow? I want to introduce a random offset, within a limit, for the first frame of each flow; after that, the subsequent packets follow the flow's periodicity.
Thanks and Regards,
Ananya