forked from OISF/suricata
Function definition reformatting to coding standard #15
Closed
Conversation
Put the open brace { of a function definition on a new line to match the coding standard. Changed:

int foo(int x) {
}

to:

int foo(int x)
{
}
Pushed to OISF#504
regit added a commit that referenced this pull request on Jun 16, 2015:
This patch fixes a partial long-duration lock up in Suricata. The problem arises when max_pending_packet is reached in worker mode. In that condition some capture threads get blocked in the FlowGetFlowFromHash call. The following backtrace shows an example of the lock up. The first thread is waiting on the flow bucket mutex and the second one remains stuck in PacketPoolWait because there is almost no signalling in the worker mode used:

(gdb) bt
#0 0x00007f5442a4ed5c in __lll_lock_wait () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00007f5442a4a3a9 in _L_lock_926 () from /lib/x86_64-linux-gnu/libpthread.so.0
#2 0x00007f5442a4a1cb in pthread_mutex_lock () from /lib/x86_64-linux-gnu/libpthread.so.0
#3 0x00000000005346c0 in FlowGetFlowFromHash (tv=0x2c9f68c80, dtv=0x2cc63cc30, p=0x2cc62a700) at flow-hash.c:653
#4 0x000000000053122c in FlowHandlePacket (tv=0x2c9f68c80, dtv=0x2cc63cc30, p=0x2cc62a700) at flow.c:340
#5 0x0000000000461cb0 in DecodeTCP (tv=0x2c9f68c80, dtv=0x2cc63cc30, p=0x2cc62a700, pkt=0x7f542ec00bd4 "\312?\037L\277h\257p", len=32, pq=0x2c9f68f10) at decode-tcp.c:206
#6 0x000000000045db38 in DecodeIPV4 (tv=0x2c9f68c80, dtv=0x2cc63cc30, p=0x2cc62a700, pkt=0x7f542ec00bc0 "E", len=61, pq=0x2c9f68f10) at decode-ipv4.c:561
#7 0x0000000000459887 in DecodeEthernet (tv=0x2c9f68c80, dtv=0x2cc63cc30, p=0x2cc62a700, pkt=0x7f542ec00bb2 "", len=75, pq=0x2c9f68f10) at decode-ethernet.c:60
#8 0x00000000005a928d in DecodeAFP (tv=0x2c9f68c80, p=0x2cc62a700, data=0x2cc63cc30, pq=0x2c9f68f10, postpq=0x0) at source-af-packet.c:1872
#9 0x00000000005db191 in TmThreadsSlotVarRun (tv=0x2c9f68c80, p=0x2cc62a700, slot=0x2c9f68ed0) at tm-threads.c:132
#10 0x00000000005a08a5 in TmThreadsSlotProcessPkt (tv=0x2c9f68c80, s=0x2c9f68ed0, p=0x2cc62a700) at tm-threads.h:147
#11 0x00000000005a2982 in AFPReadFromRing (ptv=0x2cc62b710) at source-af-packet.c:874
#12 0x00000000005a40c2 in ReceiveAFPLoop (tv=0x2c9f68c80, data=0x2cc62b710, slot=0x2c9f68d90) at source-af-packet.c:1214
#13 0x00000000005dbae6 in TmThreadsSlotPktAcqLoop (td=0x2c9f68c80) at tm-threads.c:336
#14 0x00007f5442a47b50 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#15 0x00007f544130f95d in clone () from /lib/x86_64-linux-gnu/libc.so.6
#16 0x0000000000000000 in ?? ()

(gdb) thread 5
[Switching to thread 5 (Thread 0x7f54325a6700 (LWP 9282))]
#0 0x00007f5442a4c344 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
(gdb) bt
#0 0x00007f5442a4c344 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x00000000005d806e in PacketPoolWait () at tmqh-packetpool.c:152
#2 0x0000000000539443 in FlowForceReassemblyPseudoPacketGet (direction=1, f=0x83a7880, ssn=0x2dcea3680, dummy=0) at flow-timeout.c:257
#3 0x000000000053972f in FlowForceReassemblyForFlow (f=0x83a7880, server=2, client=1) at flow-timeout.c:377
#4 0x0000000000535197 in FlowManagerFlowTimedOut (f=0x83a7880, ts=0x7f54325a52a0) at flow-manager.c:246
#5 0x0000000000535231 in FlowManagerHashRowTimeout (f=0x83a7880, ts=0x7f54325a52a0, emergency=0, counters=0x7f54325a5280) at flow-manager.c:294
#6 0x00000000005354f8 in FlowTimeoutHash (ts=0x7f54325a52a0, try_cnt=0, hash_min=0, hash_max=1048576, counters=0x7f54325a5280) at flow-manager.c:389
#7 0x0000000000535e48 in FlowManager (th_v=0x2dd38e330, thread_data=0x2dd38de80) at flow-manager.c:612
#8 0x00000000005dc7a0 in TmThreadsManagement (td=0x2dd38e330) at tm-threads.c:600
#9 0x00007f5442a47b50 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#10 0x00007f544130f95d in clone () from /lib/x86_64-linux-gnu/libc.so.6
#11 0x0000000000000000 in ?? ()

This problem is due to the fact that the return_stack condition is not signalled when a packet is returned to the thread's own PacketPool. So if the FlowManager tries to get a packet and has to wait for one to become available, it can get stuck on that condition for a long time.
regit added a commit that referenced this pull request on Mar 2, 2016:
This patch fixes the following leak:

Direct leak of 9982880 byte(s) in 2902 object(s) allocated from:
#0 0x4c253b in malloc ??:?
#1 0x10c39ac in MimeDecInitParser /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/util-decode-mime.c:2379
#2 0x6a0f91 in SMTPProcessRequest /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/app-layer-smtp.c:1085
#3 0x697658 in SMTPParse /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/app-layer-smtp.c:1185
#4 0x68fa7a in SMTPParseClientRecord /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/app-layer-smtp.c:1208
#5 0x6561c5 in AppLayerParserParse /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/app-layer-parser.c:908
#6 0x53dc2e in AppLayerHandleTCPData /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/app-layer.c:444
#7 0xf8e0af in DoReassemble /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/stream-tcp-reassemble.c:2635
#8 0xf8c3f8 in StreamTcpReassembleAppLayer /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/stream-tcp-reassemble.c:3028
#9 0xf94267 in StreamTcpReassembleHandleSegmentUpdateACK /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/stream-tcp-reassemble.c:3404
#10 0xf9643d in StreamTcpReassembleHandleSegment /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/stream-tcp-reassemble.c:3432
#11 0xf578b4 in HandleEstablishedPacketToClient /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/stream-tcp.c:2245
#12 0xeea3c7 in StreamTcpPacketStateEstablished /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/stream-tcp.c:2489
#13 0xec1d38 in StreamTcpPacket /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/stream-tcp.c:4568
#14 0xeb0e16 in StreamTcp /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/stream-tcp.c:5064
#15 0xff52a4 in TmThreadsSlotVarRun /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/tm-threads.c:130
#16 0xffdad1 in TmThreadsSlotVar /home/victor/qa/buildbot/donkey/z600fuzz/Private/src/tm-threads.c:474
#17 0x7f7cd678d181 in start_thread /build/buildd/eglibc-2.19/nptl/pthread_create.c:312 (discriminator 2)

We come to this case when an SMTP session contains at least two mails and the ending of the first one is not correctly detected. In that case, switching to a new tx seems a good solution. This way we still have partial logging.
regit added a commit that referenced this pull request on Mar 3, 2016, with the same commit message as the Mar 2, 2016 commit above.