
Re: [PATCH] softmmu: fix watchpoint processing in icount mode


From: Richard Henderson
Subject: Re: [PATCH] softmmu: fix watchpoint processing in icount mode
Date: Fri, 10 Sep 2021 16:41:43 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.13.0

On 9/10/21 3:46 PM, David Hildenbrand wrote:
> On 10.09.21 15:34, Richard Henderson wrote:
>> On 9/10/21 1:15 PM, David Hildenbrand wrote:
>>> On 07.09.21 13:30, Pavel Dovgalyuk wrote:
>>>> Watchpoint processing code restores vCPU state twice:
>>>> in tb_check_watchpoint and in cpu_loop_exit_restore/cpu_restore_state.
>>>> Normally this does not affect anything, but in icount mode the instruction
>>>> counter is incremented twice and becomes incorrect.
>>>> This patch eliminates the unneeded CPU state restore.
>>>>
>>>> Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
>>>> ---
>>>>  softmmu/physmem.c |    5 +----
>>>>  1 file changed, 1 insertion(+), 4 deletions(-)
>>>>
>>>> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
>>>> index 23e77cb771..4025dfab11 100644
>>>> --- a/softmmu/physmem.c
>>>> +++ b/softmmu/physmem.c
>>>> @@ -941,14 +941,11 @@ void cpu_check_watchpoint(CPUState *cpu, vaddr addr, vaddr len,
>>>>              if (wp->flags & BP_STOP_BEFORE_ACCESS) {
>>>>                  cpu->exception_index = EXCP_DEBUG;
>>>>                  mmap_unlock();
>>>> -                cpu_loop_exit_restore(cpu, ra);
>>>> +                cpu_loop_exit(cpu);
>>>>              } else {
>>>>                  /* Force execution of one insn next time.  */
>>>>                  cpu->cflags_next_tb = 1 | curr_cflags(cpu);
>>>>                  mmap_unlock();
>>>> -                if (ra) {
>>>> -                    cpu_restore_state(cpu, ra, true);
>>>> -                }
>>>>                  cpu_loop_exit_noexc(cpu);
>>>>              }
>>>>          }
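
To make the double decrement concrete, here is a self-contained toy model of
the bug described in the commit message. This is not QEMU code; every name in
it is invented for illustration, and the real icount bookkeeping differs in
detail:

/* Toy model, NOT QEMU code.  In icount mode the insn count for a TB is
 * charged up front; restoring state mid-TB credits back the insns that
 * never executed.  If that credit is applied twice, the counter drifts
 * by the length of the unexecuted tail. */
#include <stdio.h>

struct toy_cpu {
    long insns_executed;    /* counter the guest clock is derived from */
};

/* stand-in for the rewind done when restoring mid-TB state */
static void toy_rewind(struct toy_cpu *cpu, int unexecuted_tail)
{
    cpu->insns_executed -= unexecuted_tail;
}

int main(void)
{
    struct toy_cpu cpu = { .insns_executed = 100 }; /* whole TB charged at entry */
    int tail = 3;   /* insns after the watchpointed access that never ran */

    toy_rewind(&cpu, tail);  /* rewind #1: inside tb_check_watchpoint */
    toy_rewind(&cpu, tail);  /* rewind #2: cpu_loop_exit_restore/cpu_restore_state */

    /* 97 insns actually executed, but the counter now says 94 */
    printf("counter = %ld, expected 97\n", cpu.insns_executed);
    return 0;
}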



>>> I'm not an expert on that code, but it looks good to me.
>>>
>>> Maybe we could have added a comment above the tb_check_watchpoint() call
>>> to highlight that the restore will happen in there.
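
Concretely, such a comment could read something like the following, sketched
against the tb_check_watchpoint() call earlier in cpu_check_watchpoint() (the
comment wording here is invented, not from the tree):

/* tb_check_watchpoint() already restores the CPU state to the
 * watchpointed insn (and, in icount mode, rewinds the instruction
 * counter), so neither exit path below may restore it again. */
tb_check_watchpoint(cpu, ra);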

>> Hmm.  Curious.
>>
>> Looking at tb_check_watchpoint, I have trouble seeing how it could be correct.
>> Watchpoints can happen at any memory reference within the TB.  We should be
>> rolling back to the cpu state at the memory reference (cpu_restore_state) and
>> not the cpu state at the start of the TB (cpu_restore_state_from_tb).

> cpu_restore_state() ends up calling cpu_restore_state_from_tb() with
> essentially the same parameters, or what am I missing?
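
For reference, a simplified sketch of the two call chains, from memory (worth
double-checking against accel/tcg/ of this period); both legs end in
cpu_restore_state_from_tb(), which is where the icount rewind happens:

/*
 * cpu_check_watchpoint(cpu, addr, len, attrs, flags, ra)
 *     tb_check_watchpoint(cpu, ra)
 *         tcg_tb_lookup(ra)
 *         cpu_restore_state_from_tb(cpu, tb, ra, true)      <-- rewind #1
 *     cpu_loop_exit_restore(cpu, ra)
 *         cpu_restore_state(cpu, ra, true)
 *             tcg_tb_lookup(ra)
 *             cpu_restore_state_from_tb(cpu, tb, ra, true)  <-- rewind #2
 */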

Whoops, yes.  I must have been thinking of a different function.

>> I'm also not sure why we're invalidating tb's.  Why does a watchpoint hit
>> imply that we should want to ditch the TB?  If we want different behaviour
>> from the next execution, we should be adjusting cflags.

> It goes back to
>
> commit 06d55cc19ac84e799d2df8c750049e51798b00a4
> Author: aliguori <aliguori@c046a42c-6fe2-441c-8c8c-71466251a162>
> Date:   Tue Nov 18 20:24:06 2008 +0000
>
>     Restore pc on watchpoint hits (Jan Kiszka)
>
>     In order to provide accurate information about the triggering
>     instruction, this patch adds the required bits to restore the pc if the
>     access happened inside a TB. With the BP_STOP_BEFORE_ACCESS flag, the
>     watchpoint user can control if the debug trap should be issued on or
>     after the accessing instruction.
>
>     Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
>     Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>


> *trying to remember what we do on watchpoints* I think we want to
> make sure that we end up with a single-instruction TB, right? So we
> want to make sure to remove the old one.
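
As background: BP_STOP_BEFORE_ACCESS is chosen by whoever inserts the
watchpoint. A hedged sketch of such an insertion -- cpu_watchpoint_insert()
is the real API, but this call site and flag combination are made up for
illustration:

CPUWatchpoint *wp;
/* illustrative only: a GDB-style watchpoint that traps before the access */
int ret = cpu_watchpoint_insert(cpu, addr, len,
                                BP_GDB | BP_STOP_BEFORE_ACCESS, &wp);
if (ret) {
    /* insertion can fail, e.g. for an unsupported length */
}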

When the watchpoint needs to trigger after the insn, we do indeed want to execute a single insn, which we do with the cflags there in the patch context. But when we want to stop before the insn, we're already done -- so what was the invalidate supposed to achieve?
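
Concretely, the two exit paths after the patch look like this (an annotated
excerpt of the code in the patch context above; the CF_COUNT_MASK detail is
from memory):

if (wp->flags & BP_STOP_BEFORE_ACCESS) {
    /* Stop before the access: tb_check_watchpoint() has already rolled
     * the CPU state back to the faulting insn, so a plain cpu_loop_exit()
     * suffices -- no second restore is needed. */
    cpu->exception_index = EXCP_DEBUG;
    mmap_unlock();
    cpu_loop_exit(cpu);
} else {
    /* Stop after the access: re-execute exactly one insn.  The low bits
     * of cflags (CF_COUNT_MASK) carry the TB's insn budget, so
     * 1 | curr_cflags(cpu) requests a single-insn TB on the next pass. */
    cpu->cflags_next_tb = 1 | curr_cflags(cpu);
    mmap_unlock();
    cpu_loop_exit_noexc(cpu);
}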

(Then of course there's the problem that Philippe filed (#245), in which we set cflags as per above, then take an interrupt before using it, then wind up with garbage. Ho hum.)


r~
