
I am emulating some hardware in QEMU, which corresponds to drivers in the Linux guest kernel.

Right now, I can use memory_region_init_io to set up MMIO regions so that whenever the kernel driver reads from or writes to an MMIO address, I get a callback.

How can I get the stack trace of the guest kernel that triggered the MMIO access, from within the callback? I want to know which line in the kernel driver triggers which MMIO access.

I know that mmiotrace may be an option, but that tracing happens in the guest kernel. Is there any way I could achieve this with qemu-kvm?

#include "qemu/osdep.h"
#include "exec/memory.h"

static uint64_t mmio_read(void *opaque, hwaddr addr, unsigned size)
{
    uint64_t ret = 0;

    /* Here, I want to get the stack trace inside the VM
     * that caused this MMIO read */
    printf("mmio_read: %lx[%u] returns %lx\n", addr, size, ret);
    return ret;
}

static void mmio_write(void *opaque, hwaddr addr,
                       uint64_t val, unsigned size)
{
    /* Here, I want to get the stack trace inside the VM
     * that caused this MMIO write */
    printf("mmio_write: %lx[%u]=%lx\n", addr, size, val);
}

static const MemoryRegionOps mmio_ops = {
    .read = mmio_read,
    .write = mmio_write,
};

void init_region(uintptr_t addr, size_t size)
{
    MemoryRegion *subregion = g_new0(MemoryRegion, 1);

    memory_region_init_io(subregion, NULL, &mmio_ops, NULL,
                          "mmio-region", size);
    memory_region_add_subregion_overlap(get_system_memory(),
                                        addr, subregion, 100);
}

Unfortunately QEMU doesn't really provide anything that would do this for you as an API you can call from within QEMU C code. There are a couple of problems:

  • QEMU doesn't continuously update all CPU state for each instruction, and in particular it does not update the PC value until it absolutely has to, because writing "add 4 to the PC field in the CPU state struct" all the time is expensive. So the current PC is not really conveniently accessible from a device MMIO read/write function (though an accelerator can be asked to synchronize state on demand; see the sketch after this list).

  • QEMU doesn't have any code that knows how to do a guest stack backtrace. This is a comparatively complicated thing to do correctly (you'll find code for it in debuggers, of course).
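
For what it's worth, here is a minimal sketch of how one might recover at least the guest program counter (not a full backtrace) inside an MMIO callback. It assumes an x86 guest under KVM, and the header paths vary between QEMU versions; cpu_synchronize_state() pulls the register state out of the accelerator on demand, which is exactly the update that is normally skipped for performance:

#include "qemu/osdep.h"
#include "hw/core/cpu.h"        /* CPUState, current_cpu */
#include "sysemu/hw_accel.h"    /* cpu_synchronize_state() */
#include "cpu.h"                /* X86CPU, when built per-target */

/* Best-effort guest PC for the vCPU that triggered the current access. */
static uint64_t guest_pc_of_current_access(void)
{
    CPUState *cs = current_cpu;    /* vCPU whose access we are handling */

    if (!cs) {
        return 0;                  /* access did not come from a vCPU */
    }

    /* Copy the registers from KVM into QEMU's CPU state struct;
     * without this call they are stale. */
    cpu_synchronize_state(cs);

    return X86_CPU(cs)->env.eip;   /* guest virtual RIP */
}

That gives you the virtual address of the faulting instruction; mapping it to a source line still needs the guest kernel's symbols (e.g. addr2line against a vmlinux built with debug info), and a full backtrace would additionally require reading guest stack memory and unwinding it yourself.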

I think if I were designing something for this purpose I'd probably try to provide a way for a device to trigger the guest to stop so that a target-architecture gdb attached to the QEMU gdbstub could examine registers and do backtraces. Then you could script the debugger if you wanted "print backtrace and continue guest execution".
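
As a rough illustration of that idea (a sketch only, untested): a device could stop the whole VM from its MMIO callback with vm_stop(RUN_STATE_DEBUG), which is the run state QEMU uses to report debug events, so that a gdb attached to the gdbstub regains control at exactly this point:

#include "qemu/osdep.h"
#include "exec/memory.h"        /* hwaddr */
#include "sysemu/runstate.h"    /* vm_stop(), RUN_STATE_DEBUG */

static uint64_t mmio_read_and_break(void *opaque, hwaddr addr,
                                    unsigned size)
{
    /* Halt the VM as if a debug event had occurred; a gdb attached to
     * the gdbstub (e.g. QEMU started with -s -S) can then examine the
     * guest registers and stack before resuming with "continue". */
    vm_stop(RUN_STATE_DEBUG);
    return 0;
}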

That said, here are a couple of suggestions you could try:

  • If you're lucky, then setting watchpoints from a target-architecture gdb on the QEMU gdbstub for the addresses of the device's registers will let you get control in gdb when the guest does a device access so you can do a backtrace. I give this about a 50% chance of working, because I'm not sure how robust large-area watchpoint support is going to be; also you'll need to set the watchpoints on the virtual address the kernel mapped the device to, which might be tricky to determine. (A sketch of scripting this approach follows the list below.)

  • My experience with writing device models has been that it's usually pretty obvious just by looking at the source code for the device driver what it was doing when it made an MMIO access to the device. You know which register was written to and with what value, which is often sufficient to narrow down which bit of the driver made the access. This does depend on the complexity of the hw and driver, of course.

  • Using QEMU's -d and -D options to log a combination of device-specific trace events and general guest CPU execution/control flow information is another trick I've found helpful in trying to work out what a guest was doing to a device.
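
Concretely, the watchpoint-plus-scripting workflow from a target-architecture gdb attached to the gdbstub might look like the session below. The watched address is hypothetical: it has to be the guest virtual address the kernel mapped the device to (whatever ioremap() returned), and under KVM the number of such watchpoints is limited by the hardware debug registers:

(gdb) target remote :1234
(gdb) watch *(volatile unsigned int *) 0xffffc90000000000
(gdb) commands
>  bt
>  continue
>  end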

  • Unfortunately, I am currently dealing with large drivers like wifi and NIC. As for some comments on your suggestions: 1) debugger memory watchpoints are limited by hardware; I can usually get around 5. 2) That works well for small projects. I will explore your option 3 to see what it is capable of. – Bruce Shen Jun 20 '20 at 2:48
  • Right now, I need to modify the kernel module, e.g. adding printk's, and look at the driver's debug log to identify the line of each MMIO and DMA access. It is very hard to identify DMA accesses too. – Bruce Shen Jun 20 '20 at 2:51
  • Ah, yes, I didn't notice you were using KVM. (For TCG, watchpoints are emulated in QEMU, so h/w restrictions don't matter.) – Peter Maydell Jun 21 '20 at 13:54
  • A research work that I have read implemented stack tracing of the guest under the TCG accelerator. I haven't delved into their code, but I guess KVM would be really different in that respect. – Bruce Shen Jun 22 '20 at 2:30
