Re: [PATCH v5] Prevent vhost-user-blk-test hang
From: Raphael Norwitz
Subject: Re: [PATCH v5] Prevent vhost-user-blk-test hang
Date: Thu, 30 Sep 2021 05:29:06 +0000
User-agent: Mutt/1.10.1 (2018-07-13)
On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > In the vhost-user-blk-test, as of now there is nothing stopping
> > vhost-user-blk in QEMU from writing to the socket right after forking off the
> > storage daemon before it has a chance to come up properly, leaving the
> > test hanging forever. This intermittently hanging test has caused QEMU
> > automation failures reported multiple times on the mailing list [1].
> >
> > This change makes the storage-daemon notify the vhost-user-blk-test
> > that it is fully initialized and ready to handle client connections by
> > creating a pidfile on initialization. This ensures that the storage-daemon
> > backend won't miss vhost-user messages and thereby resolves the hang.
> >
> > [1]
> > https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
>
> Hi Raphael,
Hey Stefan,
> I would like to understand the issue that is being worked around in the
> patch.
>
> QEMU should be okay with listen fd passing. The qemu-storage-daemon
> documentation even contains example code for this
> (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> listen fd passing is fundamentally broken.
>
The issue is that the "client" (in this case vhost-user-blk in QEMU) can
proceed to use the socket before the storage-daemon has a chance to
properly start up and monitor it. This is nothing unique to the
storage-daemon - I've seen races like this happen with different
vhost-user backends before.
Yes - I do think the docs can be improved to explicitly state that the
storage-daemon must be allowed to properly initialize before any data is
sent over the socket. Maybe we should even prescribe the use of the pidfile
option?
> Can you share more details about the problem?
>
Did you see my analysis [1]?
[1]
https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/
Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
socket and the storage daemon never receives it. Looking at the
QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
the storage-daemon never receives any message to begin with. AFAICT
there is nothing stopping QEMU from running first and sending a message
before the storage-daemon backend comes up, and from testing we can see
that waiting for the storage-daemon to come up resolves the problem
completely.
> Does "writing to the socket" mean writing vhost-user protocol messages
> or does it mean connect(2)?
>
Yes - it means writing vhost-user messages. We see a message sent from
QEMU to the backend.
Note that in qtest_socket_server() (called from create_listen_socket())
we have already called listen() on the socket, so I would expect QEMU
calling connect(2) to succeed and proceed to successfully send messages
whether or not there is another listener. I even tried commenting out the
execlp for the storage-daemon and I saw the same behavior from QEMU - it
sends the message and hangs indefinitely.
> Could the problem be that vhost-user-blk-test.c creates the listen fds
> and does not close them? This means the host network stack doesn't
> consider the socket closed after QEMU terminates and therefore the test
> process hangs after QEMU is gone? In that case vhost-user-blk-test needs
> to close the fds after spawning qemu-storage-daemon.
>
When the test hangs both QEMU and storage-daemon are still up and
connected to the socket and waiting for messages from each other. I don't
see how we would close the FD in this state or how it would help.
We may want to think about implementing some kind of timeout for initial
vhost-user messages so that we fail instead of hanging in cases like these,
as I proposed in [1]. What do you think?
> Stefan
>
> >
> > Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> > Reviewed-by: Eric Blake <eblake@redhat.com>
> > ---
> > tests/qtest/vhost-user-blk-test.c | 26 +++++++++++++++++++++++++-
> > 1 file changed, 25 insertions(+), 1 deletion(-)
> >
> > diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
> > index 6f108a1b62..5fed262da1 100644
> > --- a/tests/qtest/vhost-user-blk-test.c
> > +++ b/tests/qtest/vhost-user-blk-test.c
> > @@ -24,6 +24,7 @@
> > #define TEST_IMAGE_SIZE (64 * 1024 * 1024)
> > #define QVIRTIO_BLK_TIMEOUT_US (30 * 1000 * 1000)
> > #define PCI_SLOT_HP 0x06
> > +#define PIDFILE_RETRIES 5
> >
> > typedef struct {
> > pid_t pid;
> > @@ -885,7 +886,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> > int num_queues)
> > {
> > const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
> > - int i;
> > + int i, retries;
> > + char *daemon_pidfile_path;
> > gchar *img_path;
> > GString *storage_daemon_command = g_string_new(NULL);
> > QemuStorageDaemonState *qsd;
> > @@ -898,6 +900,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> > " -object memory-backend-memfd,id=mem,size=256M,share=on "
> > " -M memory-backend=mem -m 256M ");
> >
> > + daemon_pidfile_path = g_strdup_printf("/tmp/daemon-%d", getpid());
> > +
> > for (i = 0; i < vus_instances; i++) {
> > int fd;
> > char *sock_path = create_listen_socket(&fd);
> > @@ -914,6 +918,9 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> > i + 1, sock_path);
> > }
> >
> > + g_string_append_printf(storage_daemon_command, "--pidfile %s ",
> > + daemon_pidfile_path);
> > +
> > g_test_message("starting vhost-user backend: %s",
> > storage_daemon_command->str);
> > pid_t pid = fork();
> > @@ -930,7 +937,24 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> > execlp("/bin/sh", "sh", "-c", storage_daemon_command->str, NULL);
> > exit(1);
> > }
> > +
> > + /*
> > + * Ensure the storage-daemon has come up properly before allowing the
> > + * test to proceed.
> > + */
> > + retries = 0;
> > + while (access(daemon_pidfile_path, F_OK) != 0) {
> > + g_assert_cmpint(retries, <, PIDFILE_RETRIES);
> > +
> > + retries++;
> > + g_usleep(1000);
> > + }
> > +
> > g_string_free(storage_daemon_command, true);
> > + if (access(daemon_pidfile_path, F_OK) == 0) {
> > + unlink(daemon_pidfile_path);
> > + }
> > + g_free(daemon_pidfile_path);
> >
> > qsd = g_new(QemuStorageDaemonState, 1);
> > qsd->pid = pid;
> > --
> > 2.20.1
> >