
kvm backend Got into OnDebugTrap with LastBreakpoint error #223

Closed · jasocrow opened this issue Jan 22, 2025 · 11 comments · Fixed by #224
@jasocrow (Contributor)
I am consistently seeing the error message "Got into OnDebugTrap with LastBreakpoint error" when using the KVM backend.

I suspect the trap flag needs to be explicitly disabled here:

Instead of:

    if (TraceType_ == TraceType_t::Rip) {
      TrapFlag(true);
    } else {
      KvmDebugPrint("Turning off RFLAGS.TF\n");
    }

do:

    if (TraceType_ == TraceType_t::Rip) {
      TrapFlag(true);
    } else {
      KvmDebugPrint("Turning off RFLAGS.TF\n");
      TrapFlag(false);
    }
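
For context: in a KVM backend, a TrapFlag helper like the one above typically just toggles bit 8 of RFLAGS through the KVM_GET_REGS/KVM_SET_REGS ioctls. A minimal sketch follows; the free-function shape and the Vcpu fd parameter are assumptions for illustration, not wtf's actual signature:

    // Minimal sketch of toggling RFLAGS.TF via the KVM regs ioctls; the
    // function shape and the Vcpu fd parameter are assumptions, not wtf's code.
    #include <cstdint>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    constexpr uint64_t RflagsTf = 1ULL << 8; // Trap Flag bit in RFLAGS

    bool TrapFlag(const int Vcpu, const bool Enable) {
      kvm_regs Regs = {};
      // Read the current GPRs, including RFLAGS.
      if (ioctl(Vcpu, KVM_GET_REGS, &Regs) < 0) {
        return false;
      }

      if (Enable) {
        Regs.rflags |= RflagsTf;
      } else {
        Regs.rflags &= ~RflagsTf;
      }

      // Write RFLAGS back. Nothing clears TF on KVM_EXIT_DEBUG, which is why
      // the else branch above needs an explicit TrapFlag(false).
      return ioctl(Vcpu, KVM_SET_REGS, &Regs) == 0;
    }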
@0vercl0k (Owner) commented Jan 22, 2025 via email

@jasocrow (Contributor, Author)

Thanks for the quick response!

I'm seeing this every time I fuzz with the KVM backend on the latest wtf code. The error appears almost immediately, on every fuzz iteration, as soon as one of my breakpoints is executed.

kvm: Handling bp @ 0x55c26332c200
kvm: Disarming bp and turning on RFLAGS.TF (rip=0x55c26332c200)
kvm: Turning on RFLAGS.TF
kvm: exit_reason = KVM_EXIT_DEBUG @ 0x55c26332c201
kvm: Received debug trap @ 0x55c26332c201
kvm: Resetting breakpoint @ 0x3711b200
kvm: Turning off RFLAGS.TF
kvm: exit_reason = KVM_EXIT_DEBUG @ 0x55c26332c204
kvm: Received debug trap @ 0x55c26332c204
Got into OnDebugTrap with LastBreakpointGpa_ = none

Looking at the disassembly, 0x55c26332c204 is the address of the instruction immediately following 0x55c26332c201, indicating that TF remains set for some reason.

.text:0000000003047200 55        push    rbp       ; breakpoint here (0x55c26332c200)
.text:0000000003047201 48 89 E5  mov     rbp, rsp  ; first KVM_EXIT_DEBUG
.text:0000000003047204 41 57     push    r15       ; second KVM_EXIT_DEBUG

Adding the fix I suggested above in my private copy of wtf fixes the issue.

I swear I've run into this issue before with a different custom hypervisor. IIRC, the trap flag was not cleared automatically when the corresponding VM-exit occurred, which was not the behavior I had initially expected.
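
For reference, the step-over sequence the logs above trace looks roughly like this. A hedged sketch: OnBreakpoint/OnDebugTrap mirror the names in the error message, but the struct, DisarmBreakpoint, and RearmBreakpoint are placeholders, not wtf's actual implementation:

    // Hedged sketch of the step-over state machine; member and helper names
    // are illustrative, not wtf's real code.
    #include <cstdint>
    #include <optional>

    struct KvmBackendSketch {
      std::optional<uint64_t> LastBreakpointGpa_;

      void DisarmBreakpoint(uint64_t Gpa); // restore the original byte (placeholder)
      void RearmBreakpoint(uint64_t Gpa);  // re-plant the 0xcc (placeholder)
      void TrapFlag(bool Enable);          // toggle RFLAGS.TF (placeholder)

      // First KVM_EXIT_DEBUG: the planted int3 fired.
      void OnBreakpoint(const uint64_t Gpa) {
        DisarmBreakpoint(Gpa);
        TrapFlag(true); // single-step over the original instruction
        LastBreakpointGpa_ = Gpa;
      }

      // Second KVM_EXIT_DEBUG: the single-step trap, one instruction later.
      bool OnDebugTrap() {
        if (!LastBreakpointGpa_) {
          // Without the fix, TF survives this exit, the guest traps again on
          // the *next* instruction, and we land here with the state already
          // reset - the "LastBreakpointGpa_ = none" error above.
          return false;
        }
        RearmBreakpoint(*LastBreakpointGpa_);
        TrapFlag(false); // KVM doesn't clear TF for us - the missing call
        LastBreakpointGpa_.reset();
        return true;
      }
    };

With the TrapFlag(false) call missing, the guest keeps single-stepping after the breakpoint has been re-armed and the state reset, which is exactly the "LastBreakpointGpa_ = none" path in the logs.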

@0vercl0k (Owner) commented Jan 22, 2025 via email

0vercl0k added a commit that referenced this issue Jan 23, 2025
@0vercl0k (Owner)
Okay yeah, I confirmed the above; the output below is w/o a breakpoint that needs single-stepping over (so 'no bug'):

bubuntu:~/wtf/targets/hevd$ sudo ../../src/build/wtf run --name hevd --input ./inputs/A --backend kvm
Found a 'state' folder in the cwd, so using it.
The debugger instance is loaded with 7 items
PMU Version 2 is available (3 fixed counters of 48 bits)
Resolved breakpoint 0xfffff8046f1bf830 at GPA 0x27bf830 aka HVA 0x56005f657c10
Setting debug register status to zero.
Setting debug register status to zero.
Setting mxcsr_mask to 0xffbf.
Resolved breakpoint 0x7ff6f5bb1124 at GPA 0xd374124 aka HVA 0x56005f6585d4
Resolved breakpoint 0xfffff8046f122bb0 at GPA 0x2722bb0 aka HVA 0x56005f65a090
Resolved breakpoint 0xfffff8046f0287c4 at GPA 0x26287c4 aka HVA 0x56005f65acd4
Resolved breakpoint 0xfffff8046f2a53e0 at GPA 0x28a53e0 aka HVA 0x56005f65b920
Resolved breakpoint 0xfffff8046f1c3880 at GPA 0x27c3880 aka HVA 0x56005f65cdf0
Running ./inputs/A
kvm: exit_reason = KVM_EXIT_DEBUG @ 0xfffff8046f122bb0
kvm: Handling bp @ 0xfffff8046f122bb0
Hevd: DbgPrintEx: [-] Invalid IOCTL Code: 0x%X
kvm: The bp handler ended up moving @rip from 0xfffff8046f122bb0 to 0xfffff8046ca955ec so no need to do the step-over dance
kvm: exit_reason = KVM_EXIT_DEBUG @ 0x7ff6f5bb1124
kvm: Handling bp @ 0x7ff6f5bb1124
Hevd: Back from kernel!
kvm: The bp handler asked us to stop so no need to do the step-over dance
--------------------------------------------------
Run stats:
          Dirty pages: 663552 bytes, 162 pages, 0 MB
            UffdPages: 684032 bytes, 167 pages, 0 MB
              VMExits: 2
Instructions executed: 6400
#1 cov: 0 exec/s: 0.0 lastcov: 0.0s crash: 0 timeout: 0 cr3: 0 uptime: 0.0s

...and w/ a bp that needs single-stepping (triggers the bug):

bubuntu:~/wtf/targets/hevd$ sudo ../../src/build/wtf run --name hevd --input ./inputs/A --backend kvm
Found a 'state' folder in the cwd, so using it.
The debugger instance is loaded with 7 items
PMU Version 2 is available (3 fixed counters of 48 bits)
Resolved breakpoint 0xfffff8046f1bf830 at GPA 0x27bf830 aka HVA 0x5632583aac10
Setting debug register status to zero.
Setting debug register status to zero.
Setting mxcsr_mask to 0xffbf.
Resolved breakpoint 0x7ff6f5bb111e at GPA 0xd37411e aka HVA 0x5632583ab5ce
Resolved breakpoint 0x7ff6f5bb1124 at GPA 0xd374124 aka HVA 0x5632583ab5d4
Resolved breakpoint 0xfffff8046f122bb0 at GPA 0x2722bb0 aka HVA 0x5632583ad0b0
Resolved breakpoint 0xfffff8046f0287c4 at GPA 0x26287c4 aka HVA 0x5632583adcd4
Resolved breakpoint 0xfffff8046f2a53e0 at GPA 0x28a53e0 aka HVA 0x5632583ae920
Resolved breakpoint 0xfffff8046f1c3880 at GPA 0x27c3880 aka HVA 0x5632583afdf0
Running ./inputs/A
kvm: exit_reason = KVM_EXIT_DEBUG @ 0x7ff6f5bb111e
kvm: Handling bp @ 0x7ff6f5bb111e
Hevd: Hello!
kvm: Disarming bp and turning on RFLAGS.TF (rip=0x7ff6f5bb111e)
kvm: Turning on RFLAGS.TF
kvm: exit_reason = KVM_EXIT_DEBUG @ 0x7ff83e2e6360
kvm: Received debug trap @ 0x7ff83e2e6360
kvm: Resetting breakpoint @ 0xd37411e
kvm: Turning off RFLAGS.TF
kvm: exit_reason = KVM_EXIT_DEBUG @ 0x7ff83e2e6365
kvm: Received debug trap @ 0x7ff83e2e6365
Got into OnDebugTrap with LastBreakpointGpa_ = none
--------------------------------------------------
Run stats:
          Dirty pages: 53248 bytes, 13 pages, 0 MB
            UffdPages: 90112 bytes, 22 pages, 0 MB
              VMExits: 3
Instructions executed: 2
#1 cov: 0 exec/s: 0.0 lastcov: 0.0s crash: 0 timeout: 0 cr3: 0 uptime: 0.0s
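
Side note: the same explicit-disable rule applies if a backend drives single-stepping through KVM_SET_GUEST_DEBUG instead of poking RFLAGS.TF directly, since KVM never drops KVM_GUESTDBG_SINGLESTEP on its own either. A hedged sketch, not wtf's code:

    // Hedged sketch: single-stepping via KVM_SET_GUEST_DEBUG needs the same
    // matching disable call; this is illustrative, not wtf's implementation.
    #include <cstring>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int SetSingleStep(const int Vcpu, const bool Enable) {
      kvm_guest_debug Dbg;
      std::memset(&Dbg, 0, sizeof(Dbg));
      // ENABLE + USE_SW_BP keep KVM_EXIT_DEBUG flowing for software bps;
      // SINGLESTEP is only added while stepping over a disarmed one.
      Dbg.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP;
      if (Enable) {
        Dbg.control |= KVM_GUESTDBG_SINGLESTEP;
      }
      // Each enable needs a matching disable: KVM keeps single-stepping
      // until the flag is cleared, mirroring the RFLAGS.TF behavior above.
      return ioctl(Vcpu, KVM_SET_GUEST_DEBUG, &Dbg);
    }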

@0vercl0k (Owner)
I will be fixing this in #224 - I basically implemented your suggested fix.

I need to run for now, but when I'm back I'll run more tests, check whether there's another similar issue elsewhere in the code, and see if it might affect WHV as well... and if we're good, I'll release v0.5.6 to address this.

Thanks again for the report as usual 🙏🏽🙏🏽

Cheers

@0vercl0k (Owner)
Okay https://github.com/0vercl0k/wtf/releases/tag/v0.5.6 is out - hope this works fine this time 😅

Apologies for the trouble and thank you again for finding this & filing a report (and a suggested fix!) 🙏🏽🙏🏽

Cheers

@jasocrow (Contributor, Author)
Damn, thanks for getting a fix out so quickly!

@0vercl0k (Owner)
Heh, I had to, to be honest - major regression 😥

Cheers
