
From: gnunet
Subject: [libmicrohttpd] 02/05: Enforced no use of 'per_ip_connection_mutex' in slave daemons
Date: Thu, 12 May 2022 15:42:20 +0200

This is an automated email from the git hooks/post-receive script.

karlson2k pushed a commit to branch master
in repository libmicrohttpd.

commit c1a1826e8ebb8814fb26e2d255235f6527297f00
Author: Evgeny Grin (Karlson2k) <k2k@narod.ru>
AuthorDate: Thu May 12 10:55:09 2022 +0300

    Enforced no use of 'per_ip_connection_mutex' in slave daemons
---
 src/microhttpd/daemon.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/src/microhttpd/daemon.c b/src/microhttpd/daemon.c
index 6a2677fc..dd19a095 100644
--- a/src/microhttpd/daemon.c
+++ b/src/microhttpd/daemon.c
@@ -260,6 +260,7 @@ struct MHD_IPCount
 static void
 MHD_ip_count_lock (struct MHD_Daemon *daemon)
 {
+  mhd_assert (NULL == daemon->master);
 #if defined(MHD_USE_POSIX_THREADS) || defined(MHD_USE_W32_THREADS)
   MHD_mutex_lock_chk_ (&daemon->per_ip_connection_mutex);
 #else
@@ -276,6 +277,7 @@ MHD_ip_count_lock (struct MHD_Daemon *daemon)
 static void
 MHD_ip_count_unlock (struct MHD_Daemon *daemon)
 {
+  mhd_assert (NULL == daemon->master);
 #if defined(MHD_USE_POSIX_THREADS) || defined(MHD_USE_W32_THREADS)
   MHD_mutex_unlock_chk_ (&daemon->per_ip_connection_mutex);
 #else
@@ -7482,6 +7484,10 @@ MHD_start_daemon_va (unsigned int flags,
           goto thread_failed;
         }
         /* Some members must be used only in master daemon */
+#if defined(MHD_USE_THREADS)
+        memset (&d->per_ip_connection_mutex, 1,
+                sizeof(d->per_ip_connection_mutex));
+#endif /* MHD_USE_THREADS */
 #ifdef DAUTH_SUPPORT
         d->nnc = NULL;
         d->nonce_nc_size = 0;

-- 
To stop receiving notification emails like this one, please contact
gnunet@gnunet.org.


