kvm backend: Got into OnDebugTrap with LastBreakpoint error #223
Hello!
Are you able to share more information about what you are doing / what you
are seeing?
I quickly skimmed through the piece of code you linked, but the code
change you are suggesting is a no-op, as TF should be cleared automatically
after it triggers - unless I am missing something 😅 If the code change
fixes your issue then I'll need to look into this more tonight.
Cheers
…On Wed, Jan 22, 2025 at 7:53 AM jasocrow wrote:

> I am consistently seeing the error message "Got into OnDebugTrap with
> LastBreakpoint error" when using the KVM backend.
> I suspect the trap flag needs to be explicitly disabled here
> <https://github.com/0vercl0k/wtf/blob/ebdccdd03efc7df9c6145fee3fcee973624eac85/src/wtf/kvm_backend.cc#L1583C1-L1587C6>:
>
> Instead of:
>
>     if (TraceType_ == TraceType_t::Rip) {
>       TrapFlag(true);
>     } else {
>       KvmDebugPrint("Turning off RFLAGS.TF\n");
>     }
>
> do:
>
>     if (TraceType_ == TraceType_t::Rip) {
>       TrapFlag(true);
>     } else {
>       KvmDebugPrint("Turning off RFLAGS.TF\n");
>       TrapFlag(false);
>     }
Thanks for the quick response! I am seeing this every time I use the KVM backend when I fuzz using the latest wtf code. I see this error almost immediately on every fuzz iteration as soon as one of my breakpoints is executed:

kvm: Handling bp @ 0x55c26332c200
kvm: Disarming bp and turning on RFLAGS.TF (rip=0x55c26332c200)
kvm: Turning on RFLAGS.TF
kvm: exit_reason = KVM_EXIT_DEBUG @ 0x55c26332c201
kvm: Received debug trap @ 0x55c26332c201
kvm: Resetting breakpoint @ 0x3711b200
kvm: Turning off RFLAGS.TF
kvm: exit_reason = KVM_EXIT_DEBUG @ 0x55c26332c204
kvm: Received debug trap @ 0x55c26332c204
Got into OnDebugTrap with LastBreakpointGpa_ = none

If I look at the disassembly of the instructions, I see that 0x55c26332c204 is the address of the instruction immediately following 0x55c26332c201, indicating that TF is remaining set for some reason.

.text:0000000003047200 55        push rbp     ; breakpoint here (0x55c26332c200)
.text:0000000003047201 48 89 E5  mov rbp, rsp ; first KVM_EXIT_DEBUG
.text:0000000003047204 41 57     push r15     ; second KVM_EXIT_DEBUG

Adding the fix I suggested above in my private copy of wtf fixes the issue. I swear I've run into this issue before with a different custom hypervisor I was using. IIRC, the trap flag was not getting cleared automatically when a VM-exit related to it occurred, which was not the behavior I had initially expected.
Yikes this looks really bad then - I'll take a look after work tonight;
sorry for this 😔
Cheers
I am guessing you don't see the same offending behavior on v0.5.4 btw? I
just tried the HEVD sample against the latest trunk & it seems to run fine.
Also, do you see the same behavior if you run the HEVD sample in your
set-up? Asking to see if I should witness a repro or not.
Cheers
If I check the code before / after implementing RIP traces for KVM, I do
see the change in behavior you are referring to, and it doesn't use TF
directly but instead uses the interface that KVM exposes
(KVM_GUESTDBG_ENABLE / KVM_GUESTDBG_SINGLESTEP), which seems to need
resetting. This is all consistent with what you are seeing and what you fixed.
I am still puzzled as to why I don't see this on the HEVD sample; maybe I
don't even hit that code path 🤔 anyways, will keep the thread updated once
I figure it out.
Cheers
Okay actually, my guess is that all the breakpoints I hit with HEVD either
directly kill the testcase or move `@RIP` from the handler. In the first
case, the testcase ends so I don't even get to see this, and in the second
one I don't need to use TF because RIP has moved, so there is no need to
step over the breakpoint. I'll verify this in a few hours!
Cheers
… a breakpoint & re-enabled it (fix #223)
Okay yeah, I confirmed the above; below outputs are w/o a breakpoint that needs single-stepping over it (so 'no bug'):
..and w/ a bp that needs single-stepping (triggers the bug):
I will be fixing this in #224 - I basically implemented your suggested fix. I need to run for now but will run more tests when I'm back, see if there's another similar issue elsewhere in the code and if it might affect WHV as well; if we're good I'll release. Thanks again for the report as usual 🙏🏽🙏🏽 Cheers
Okay, https://github.com/0vercl0k/wtf/releases/tag/v0.5.6 is out - hope this works fine this time 😅 Apologies for the trouble and thank you again for finding this & filing a report (and a suggested fix!) 🙏🏽🙏🏽 Cheers
Damn, thanks for getting a fix out so quickly!
Heh, I had to, to be honest - major regression 😥 Cheers