From patchwork Mon May 8 09:44:43 2023
X-Patchwork-Submitter: Lipeng Zhu
X-Patchwork-Id: 91038
To: rep.dot.nop@gmail.com, fortran@gcc.gnu.org, gcc-patches@gcc.gnu.org
Cc: hongjiu.lu@intel.com, tianyou.li@intel.com, pan.deng@intel.com, wangyang.guo@intel.com, Lipeng Zhu
Subject: [PATCH v4] libgfortran: Replace mutex with rwlock
Date: Mon, 8 May 2023 17:44:43 +0800
Message-Id: <20230508094442.1413139-1-lipeng.zhu@intel.com>
In-Reply-To: <20230424214534.77117b73 () nbbrfq>
References: <20230424214534.77117b73 () nbbrfq>
From: Lipeng Zhu
This patch introduces a rwlock and splits the read and write accesses to the
unit_root tree and unit_cache, using the rwlock instead of the mutex to
increase CPU efficiency. In the get_gfc_unit function, only around 30% of
calls step into the insert_unit function; in most instances the unit can be
found while merely reading the unit_cache or unit_root tree. Splitting the
read and write phases with a rwlock therefore makes this path more parallel.
With this change, the IPC metric improves by around 9x on our test server
with 220 cores. The benchmark we used is https://github.com/rwesson/NEAT

libgcc/ChangeLog:

* gthr-posix.h (__GTHREAD_RWLOCK_INIT): New macro.
(__gthrw): New function.
(__gthread_rwlock_rdlock): New function.
(__gthread_rwlock_tryrdlock): New function.
(__gthread_rwlock_wrlock): New function.
(__gthread_rwlock_trywrlock): New function.
(__gthread_rwlock_unlock): New function.

libgfortran/ChangeLog:

* io/async.c (DEBUG_LINE): New.
* io/async.h (RWLOCK_DEBUG_ADD): New macro.
(CHECK_RDLOCK): New macro.
(CHECK_WRLOCK): New macro.
(TAIL_RWLOCK_DEBUG_QUEUE): New macro.
(IN_RWLOCK_DEBUG_QUEUE): New macro.
(RDLOCK): New macro.
(WRLOCK): New macro.
(RWUNLOCK): New macro.
(RD_TO_WRLOCK): New macro.
(INTERN_RDLOCK): New macro.
(INTERN_WRLOCK): New macro.
(INTERN_RWUNLOCK): New macro.
* io/io.h (internal_proto): Define unit_rwlock.
* io/transfer.c (st_read_done_worker): Replace unit_lock with unit_rwlock.
(st_write_done_worker): Replace unit_lock with unit_rwlock.
* io/unit.c (get_gfc_unit): Replace unit_lock with unit_rwlock.
(if): Replace unit_lock with unit_rwlock.
(close_unit_1): Replace unit_lock with unit_rwlock.
(close_units): Replace unit_lock with unit_rwlock.
(newunit_alloc): Replace unit_lock with unit_rwlock.
* io/unix.c (flush_all_units): Replace unit_lock with unit_rwlock.
---
v1 -> v2: In libgcc, limit the pthread_rwlock usage to the case where __cplusplus isn't defined.
v2 -> v3: Rebase the patch onto the trunk branch.
v3 -> v4: Update the comments.
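To make the locking scheme concrete: the hot path of get_gfc_unit only needs
to read the unit_cache and the unit_root tree, so it can run under a shared
read lock; only a miss needs the exclusive write lock to insert a new unit.
POSIX rwlocks cannot be upgraded atomically, so the patch's RD_TO_WRLOCK is
simply an unlock followed by a write lock, and whatever was observed under
the read lock must be treated as possibly stale once the write lock is held.
The following is a minimal, self-contained sketch of that pattern in plain
pthreads; the toy array cache and the names cache_lookup, cache_insert and
get_cached are illustrative stand-ins, not code from libgfortran or from this
patch.

#include <pthread.h>
#include <stdio.h>

#define CACHE_SIZE 16

static int cache[CACHE_SIZE];   /* 0 means "empty slot".  */
static pthread_rwlock_t cache_rwlock = PTHREAD_RWLOCK_INITIALIZER;

static int
cache_lookup (int key)
{
  for (int i = 0; i < CACHE_SIZE; i++)
    if (cache[i] == key)
      return 1;
  return 0;
}

static void
cache_insert (int key)
{
  for (int i = 0; i < CACHE_SIZE; i++)
    if (cache[i] == 0)
      {
        cache[i] = key;
        return;
      }
}

/* Look up KEY, inserting it on a miss.  Readers run concurrently under the
   shared lock; the exclusive write lock is only taken for an insert.  */
static void
get_cached (int key)
{
  pthread_rwlock_rdlock (&cache_rwlock);
  int found = cache_lookup (key);
  pthread_rwlock_unlock (&cache_rwlock);

  if (!found)
    {
      /* The "upgrade" is unlock-then-wrlock, i.e. not atomic, so the
         lookup must be repeated under the write lock before inserting.  */
      pthread_rwlock_wrlock (&cache_rwlock);
      if (!cache_lookup (key))
        cache_insert (key);
      pthread_rwlock_unlock (&cache_rwlock);
    }
}

int
main (void)
{
  get_cached (5);
  get_cached (5);   /* Second call is a pure read-lock hit.  */
  printf ("5 cached: %d\n", cache_lookup (5));
  return 0;
}

With this split, concurrent lookups of already-open units proceed entirely
under the shared lock instead of serializing on a single mutex, which is what
the IPC improvement reported above reflects.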
Reviewed-by: Hongjiu Lu
Signed-off-by: Lipeng Zhu
---
 libgcc/gthr-posix.h       |  60 +++++++++++++++
 libgfortran/io/async.c    |   4 +
 libgfortran/io/async.h    | 151 ++++++++++++++++++++++++++++++++++++++
 libgfortran/io/io.h       |  15 ++--
 libgfortran/io/transfer.c |   8 +-
 libgfortran/io/unit.c     |  75 +++++++++++--------
 libgfortran/io/unix.c     |  16 ++--
 7 files changed, 283 insertions(+), 46 deletions(-)

diff --git a/libgcc/gthr-posix.h b/libgcc/gthr-posix.h
index aebcfdd9f4c..73283082997 100644
--- a/libgcc/gthr-posix.h
+++ b/libgcc/gthr-posix.h
@@ -48,6 +48,9 @@ typedef pthread_t __gthread_t;
 typedef pthread_key_t __gthread_key_t;
 typedef pthread_once_t __gthread_once_t;
 typedef pthread_mutex_t __gthread_mutex_t;
+#ifndef __cplusplus
+typedef pthread_rwlock_t __gthread_rwlock_t;
+#endif
 typedef pthread_mutex_t __gthread_recursive_mutex_t;
 typedef pthread_cond_t __gthread_cond_t;
 typedef struct timespec __gthread_time_t;
@@ -58,6 +61,9 @@ typedef struct timespec __gthread_time_t;
 
 #define __GTHREAD_MUTEX_INIT PTHREAD_MUTEX_INITIALIZER
 #define __GTHREAD_MUTEX_INIT_FUNCTION __gthread_mutex_init_function
+#ifndef __cplusplus
+#define __GTHREAD_RWLOCK_INIT PTHREAD_RWLOCK_INITIALIZER
+#endif
 #define __GTHREAD_ONCE_INIT PTHREAD_ONCE_INIT
 #if defined(PTHREAD_RECURSIVE_MUTEX_INITIALIZER)
 #define __GTHREAD_RECURSIVE_MUTEX_INIT PTHREAD_RECURSIVE_MUTEX_INITIALIZER
@@ -135,6 +141,13 @@ __gthrw(pthread_mutexattr_init)
 __gthrw(pthread_mutexattr_settype)
 __gthrw(pthread_mutexattr_destroy)
 
+#ifndef __cplusplus
+__gthrw(pthread_rwlock_rdlock)
+__gthrw(pthread_rwlock_tryrdlock)
+__gthrw(pthread_rwlock_wrlock)
+__gthrw(pthread_rwlock_trywrlock)
+__gthrw(pthread_rwlock_unlock)
+#endif
 #if defined(_LIBOBJC) || defined(_LIBOBJC_WEAK)
 /* Objective-C.  */
 
@@ -885,6 +898,53 @@ __gthread_cond_destroy (__gthread_cond_t* __cond)
   return __gthrw_(pthread_cond_destroy) (__cond);
 }
 
+#ifndef __cplusplus
+static inline int
+__gthread_rwlock_rdlock (__gthread_rwlock_t *__rwlock)
+{
+  if (__gthread_active_p ())
+    return __gthrw_(pthread_rwlock_rdlock) (__rwlock);
+  else
+    return 0;
+}
+
+static inline int
+__gthread_rwlock_tryrdlock (__gthread_rwlock_t *__rwlock)
+{
+  if (__gthread_active_p ())
+    return __gthrw_(pthread_rwlock_tryrdlock) (__rwlock);
+  else
+    return 0;
+}
+
+static inline int
+__gthread_rwlock_wrlock (__gthread_rwlock_t *__rwlock)
+{
+  if (__gthread_active_p ())
+    return __gthrw_(pthread_rwlock_wrlock) (__rwlock);
+  else
+    return 0;
+}
+
+static inline int
+__gthread_rwlock_trywrlock (__gthread_rwlock_t *__rwlock)
+{
+  if (__gthread_active_p ())
+    return __gthrw_(pthread_rwlock_trywrlock) (__rwlock);
+  else
+    return 0;
+}
+
+static inline int
+__gthread_rwlock_unlock (__gthread_rwlock_t *__rwlock)
+{
+  if (__gthread_active_p ())
+    return __gthrw_(pthread_rwlock_unlock) (__rwlock);
+  else
+    return 0;
+}
+#endif
+
 #endif /* _LIBOBJC */
 
 #endif /* ! GCC_GTHR_POSIX_H */
diff --git a/libgfortran/io/async.c b/libgfortran/io/async.c
index 81d1d8175aa..a97272ce5e4 100644
--- a/libgfortran/io/async.c
+++ b/libgfortran/io/async.c
@@ -42,6 +42,10 @@ DEBUG_LINE (__thread const char *aio_prefix = MPREFIX);
 DEBUG_LINE (__gthread_mutex_t debug_queue_lock = __GTHREAD_MUTEX_INIT;)
 DEBUG_LINE (aio_lock_debug *aio_debug_head = NULL;)
 
+#ifdef __GTHREAD_RWLOCK_INIT
+DEBUG_LINE (aio_rwlock_debug *aio_rwlock_debug_head = NULL;)
+DEBUG_LINE (__gthread_rwlock_t debug_queue_rwlock = __GTHREAD_RWLOCK_INIT;)
+#endif
 
 /* Current unit for asynchronous I/O.  Needed for error reporting.  */
diff --git a/libgfortran/io/async.h b/libgfortran/io/async.h
index ad226c8e856..0033cc74252 100644
--- a/libgfortran/io/async.h
+++ b/libgfortran/io/async.h
@@ -210,6 +210,128 @@
     DEBUG_PRINTF ("%s" DEBUG_RED "ACQ:" DEBUG_NORM " %-30s %78p\n", aio_prefix, #mutex, mutex); \
   } while (0)
 
+#ifdef __GTHREAD_RWLOCK_INIT
+#define RWLOCK_DEBUG_ADD(rwlock) do { \
+    aio_rwlock_debug *n; \
+    n = xmalloc (sizeof(aio_rwlock_debug)); \
+    n->prev = TAIL_RWLOCK_DEBUG_QUEUE; \
+    if (n->prev) \
+      n->prev->next = n; \
+    n->next = NULL; \
+    n->line = __LINE__; \
+    n->func = __FUNCTION__; \
+    n->rw = rwlock; \
+    if (!aio_rwlock_debug_head) { \
+      aio_rwlock_debug_head = n; \
+    } \
+  } while (0)
+
+#define CHECK_RDLOCK(rwlock, status) do { \
+    aio_rwlock_debug *curr; \
+    INTERN_WRLOCK (&debug_queue_rwlock); \
+    if (__gthread_rwlock_tryrdlock (rwlock)) { \
+      if ((curr = IN_RWLOCK_DEBUG_QUEUE (rwlock))) { \
+        sprintf (status, DEBUG_RED "%s():%d" DEBUG_NORM, curr->func, curr->line); \
+      } else \
+        sprintf (status, DEBUG_RED "unknown" DEBUG_NORM); \
+    } \
+    else { \
+      __gthread_rwlock_unlock (rwlock); \
+      sprintf (status, DEBUG_GREEN "rwunlocked" DEBUG_NORM); \
+    } \
+    INTERN_RWUNLOCK (&debug_queue_rwlock); \
+  } while (0)
+
+#define CHECK_WRLOCK(rwlock, status) do { \
+    aio_rwlock_debug *curr; \
+    INTERN_WRLOCK (&debug_queue_rwlock); \
+    if (__gthread_rwlock_trywrlock (rwlock)) { \
+      if ((curr = IN_RWLOCK_DEBUG_QUEUE (rwlock))) { \
+        sprintf (status, DEBUG_RED "%s():%d" DEBUG_NORM, curr->func, curr->line); \
+      } else \
+        sprintf (status, DEBUG_RED "unknown" DEBUG_NORM); \
+    } \
+    else { \
+      __gthread_rwlock_unlock (rwlock); \
+      sprintf (status, DEBUG_GREEN "rwunlocked" DEBUG_NORM); \
+    } \
+    INTERN_RWUNLOCK (&debug_queue_rwlock); \
+  } while (0)
+
+#define TAIL_RWLOCK_DEBUG_QUEUE ({ \
+    aio_rwlock_debug *curr = aio_rwlock_debug_head; \
+    while (curr && curr->next) { \
+      curr = curr->next; \
+    } \
+    curr; \
+  })
+
+#define IN_RWLOCK_DEBUG_QUEUE(rwlock) ({ \
+    __label__ end; \
+    aio_rwlock_debug *curr = aio_rwlock_debug_head; \
+    while (curr) { \
+      if (curr->rw == rwlock) { \
+        goto end; \
+      } \
+      curr = curr->next; \
+    } \
+  end:; \
+    curr; \
+  })
+
+#define RDLOCK(rwlock) do { \
+    char status[200]; \
+    CHECK_RDLOCK (rwlock, status); \
+    DEBUG_PRINTF ("%s%-42s prev: %-35s %20s():%-5d %18p\n", aio_prefix, \
+                  DEBUG_RED "RDLOCK: " DEBUG_NORM #rwlock, status, __FUNCTION__, __LINE__, (void *) rwlock); \
+    INTERN_RDLOCK (rwlock); \
+    INTERN_WRLOCK (&debug_queue_rwlock); \
+    RWLOCK_DEBUG_ADD (rwlock); \
+    INTERN_RWUNLOCK (&debug_queue_rwlock); \
+    DEBUG_PRINTF ("%s" DEBUG_RED "ACQ:" DEBUG_NORM " %-30s %78p\n", aio_prefix, #rwlock, rwlock); \
+  } while (0)
+
+#define WRLOCK(rwlock) do { \
+    char status[200]; \
+    CHECK_WRLOCK (rwlock, status); \
+    DEBUG_PRINTF ("%s%-42s prev: %-35s %20s():%-5d %18p\n", aio_prefix, \
+                  DEBUG_RED "WRLOCK: " DEBUG_NORM #rwlock, status, __FUNCTION__, __LINE__, (void *) rwlock); \
+    INTERN_WRLOCK (rwlock); \
+    INTERN_WRLOCK (&debug_queue_rwlock); \
+    RWLOCK_DEBUG_ADD (rwlock); \
+    INTERN_RWUNLOCK (&debug_queue_rwlock); \
+    DEBUG_PRINTF ("%s" DEBUG_RED "ACQ:" DEBUG_NORM " %-30s %78p\n", aio_prefix, #rwlock, rwlock); \
+  } while (0)
+
+#define RWUNLOCK(rwlock) do { \
+    aio_rwlock_debug *curr; \
+    DEBUG_PRINTF ("%s%-75s %20s():%-5d %18p\n", aio_prefix, DEBUG_GREEN "RWUNLOCK: " DEBUG_NORM #rwlock, \
+                  __FUNCTION__, __LINE__, (void *) rwlock); \
+    INTERN_WRLOCK (&debug_queue_rwlock); \
+    curr = IN_RWLOCK_DEBUG_QUEUE (rwlock); \
+    if (curr) \
+      { \
+        if (curr->prev) \
+          curr->prev->next = curr->next; \
+        if (curr->next) { \
+          curr->next->prev = curr->prev; \
+          if (curr == aio_rwlock_debug_head) \
+            aio_rwlock_debug_head = curr->next; \
+        } else { \
+          if (curr == aio_rwlock_debug_head) \
+            aio_rwlock_debug_head = NULL; \
+        } \
+        free (curr); \
+      } \
+    INTERN_RWUNLOCK (&debug_queue_rwlock); \
+    INTERN_RWUNLOCK (rwlock); \
+  } while (0)
+
+#define RD_TO_WRLOCK(rwlock) \
+  RWUNLOCK (rwlock);\
+  WRLOCK (rwlock);
+#endif
+
 #define DEBUG_LINE(...) __VA_ARGS__
 
 #else
@@ -221,12 +343,31 @@
 #define LOCK(mutex) INTERN_LOCK (mutex)
 #define UNLOCK(mutex) INTERN_UNLOCK (mutex)
 #define TRYLOCK(mutex) (__gthread_mutex_trylock (mutex))
+#ifdef __GTHREAD_RWLOCK_INIT
+#define RDLOCK(rwlock) INTERN_RDLOCK (rwlock)
+#define WRLOCK(rwlock) INTERN_WRLOCK (rwlock)
+#define RWUNLOCK(rwlock) INTERN_RWUNLOCK (rwlock)
+#define RD_TO_WRLOCK(rwlock) \
+  RWUNLOCK (rwlock);\
+  WRLOCK (rwlock);
+#endif
+#endif
+
+#ifndef __GTHREAD_RWLOCK_INIT
+#define RDLOCK(rwlock) LOCK (rwlock)
+#define WRLOCK(rwlock) LOCK (rwlock)
+#define RWUNLOCK(rwlock) UNLOCK (rwlock)
+#define RD_TO_WRLOCK(rwlock) {}
 #endif
 
 #define INTERN_LOCK(mutex) T_ERROR (__gthread_mutex_lock, mutex);
 
 #define INTERN_UNLOCK(mutex) T_ERROR (__gthread_mutex_unlock, mutex);
+#define INTERN_RDLOCK(rwlock) T_ERROR (__gthread_rwlock_rdlock, rwlock);
+#define INTERN_WRLOCK(rwlock) T_ERROR (__gthread_rwlock_wrlock, rwlock);
+#define INTERN_RWUNLOCK(rwlock) T_ERROR (__gthread_rwlock_unlock, rwlock);
+
 #if ASYNC_IO
 
 /* au->lock has to be held when calling this macro.  */
@@ -288,8 +429,18 @@ DEBUG_LINE (typedef struct aio_lock_debug{
   struct aio_lock_debug *prev;
 } aio_lock_debug;)
 
+DEBUG_LINE (typedef struct aio_rwlock_debug{
+  __gthread_rwlock_t *rw;
+  int line;
+  const char *func;
+  struct aio_rwlock_debug *next;
+  struct aio_rwlock_debug *prev;
+} aio_rwlock_debug;)
+
 DEBUG_LINE (extern aio_lock_debug *aio_debug_head;)
 DEBUG_LINE (extern __gthread_mutex_t debug_queue_lock;)
+DEBUG_LINE (extern aio_rwlock_debug *aio_rwlock_debug_head;)
+DEBUG_LINE (extern __gthread_rwlock_t debug_queue_rwlock;)
 
 /* Thread - local storage of the current unit we are looking at.
    Needed for error reporting.  */
diff --git a/libgfortran/io/io.h b/libgfortran/io/io.h
index ecdf1dd3f05..15daa0995b1 100644
--- a/libgfortran/io/io.h
+++ b/libgfortran/io/io.h
@@ -690,7 +690,7 @@ typedef struct gfc_unit
      from the UNIT_ROOT tree, but doesn't free it and the
      last of the waiting threads will do that.
      This must be either atomically increased/decreased, or
-     always guarded by UNIT_LOCK.  */
+     always guarded by UNIT_RWLOCK.  */
   int waiting;
   /* Flag set by close_unit if the unit as been closed.
      Must be manipulated under unit's lock.  */
@@ -769,8 +769,13 @@ internal_proto(default_recl);
 extern gfc_unit *unit_root;
 internal_proto(unit_root);
 
-extern __gthread_mutex_t unit_lock;
-internal_proto(unit_lock);
+#ifdef __GTHREAD_RWLOCK_INIT
+extern __gthread_rwlock_t unit_rwlock;
+internal_proto(unit_rwlock);
+#else
+extern __gthread_mutex_t unit_rwlock;
+internal_proto(unit_rwlock);
+#endif
 
 extern int close_unit (gfc_unit *);
 internal_proto(close_unit);
@@ -1015,9 +1020,9 @@ dec_waiting_unlocked (gfc_unit *u)
 #ifdef HAVE_ATOMIC_FETCH_ADD
   (void) __atomic_fetch_add (&u->waiting, -1, __ATOMIC_RELAXED);
 #else
-  __gthread_mutex_lock (&unit_lock);
+  WRLOCK (&unit_rwlock);
   u->waiting--;
-  __gthread_mutex_unlock (&unit_lock);
+  RWUNLOCK (&unit_rwlock);
 #endif
 }
diff --git a/libgfortran/io/transfer.c b/libgfortran/io/transfer.c
index 8bb5d1101ca..b01f45b80f6 100644
--- a/libgfortran/io/transfer.c
+++ b/libgfortran/io/transfer.c
@@ -4539,9 +4539,9 @@ st_read_done_worker (st_parameter_dt *dtp, bool unlock)
   if (free_newunit)
     {
       /* Avoid inverse lock issues by placing after unlock_unit.  */
-      LOCK (&unit_lock);
+      WRLOCK (&unit_rwlock);
       newunit_free (dtp->common.unit);
-      UNLOCK (&unit_lock);
+      RWUNLOCK (&unit_rwlock);
     }
 }
 
@@ -4636,9 +4636,9 @@ st_write_done_worker (st_parameter_dt *dtp, bool unlock)
   if (free_newunit)
     {
       /* Avoid inverse lock issues by placing after unlock_unit.  */
-      LOCK (&unit_lock);
+      WRLOCK (&unit_rwlock);
       newunit_free (dtp->common.unit);
-      UNLOCK (&unit_lock);
+      RWUNLOCK (&unit_rwlock);
     }
 }
diff --git a/libgfortran/io/unit.c b/libgfortran/io/unit.c
index 82664dc5f98..62f1db21d34 100644
--- a/libgfortran/io/unit.c
+++ b/libgfortran/io/unit.c
@@ -33,34 +33,36 @@ see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
 
 /* IO locking rules:
-   UNIT_LOCK is a master lock, protecting UNIT_ROOT tree and UNIT_CACHE.
+   UNIT_RWLOCK is a master lock, protecting UNIT_ROOT tree and UNIT_CACHE.
+   Use the rwlock to split the read and write phases on the UNIT_ROOT tree
+   and UNIT_CACHE to increase CPU efficiency.
    Concurrent use of different units should be supported, so
    each unit has its own lock, LOCK.
    Open should be atomic with its reopening of units and list_read.c
    in several places needs find_unit another unit while holding stdin
-   unit's lock, so it must be possible to acquire UNIT_LOCK while holding
+   unit's lock, so it must be possible to acquire UNIT_RWLOCK while holding
    some unit's lock.  Therefore to avoid deadlocks, it is forbidden
-   to acquire unit's private locks while holding UNIT_LOCK, except
+   to acquire unit's private locks while holding UNIT_RWLOCK, except
    for freshly created units (where no other thread can get at their
    address yet) or when using just trylock rather than lock operation.
 
    In addition to unit's private lock each unit has a WAITERS counter
    and CLOSED flag.  WAITERS counter must be either only
    atomically incremented/decremented in all places (if atomic builtins
-   are supported), or protected by UNIT_LOCK in all places (otherwise).
+   are supported), or protected by UNIT_RWLOCK in all places (otherwise).
    CLOSED flag must be always protected by unit's LOCK.
-   After finding a unit in UNIT_CACHE or UNIT_ROOT with UNIT_LOCK held,
+   After finding a unit in UNIT_CACHE or UNIT_ROOT with UNIT_RWLOCK held,
    WAITERS must be incremented to avoid concurrent close from freeing
-   the unit between unlocking UNIT_LOCK and acquiring unit's LOCK.
-   Unit freeing is always done under UNIT_LOCK.  If close_unit sees any
+   the unit between unlocking UNIT_RWLOCK and acquiring unit's LOCK.
+   Unit freeing is always done under UNIT_RWLOCK.  If close_unit sees any
    WAITERS, it doesn't free the unit but instead sets the CLOSED flag
    and the thread that decrements WAITERS to zero while CLOSED flag is
-   set is responsible for freeing it (while holding UNIT_LOCK).
+   set is responsible for freeing it (while holding UNIT_RWLOCK).
    flush_all_units operation is iterating over the unit tree with
-   increasing UNIT_NUMBER while holding UNIT_LOCK and attempting to
+   increasing UNIT_NUMBER while holding UNIT_RWLOCK and attempting to
    flush each unit (and therefore needs the unit's LOCK held as well).
    To avoid deadlocks, it just trylocks the LOCK and if unsuccessful,
-   remembers the current unit's UNIT_NUMBER, unlocks UNIT_LOCK, acquires
-   unit's LOCK and after flushing reacquires UNIT_LOCK and restarts with
+   remembers the current unit's UNIT_NUMBER, unlocks UNIT_RWLOCK, acquires
+   unit's LOCK and after flushing reacquires UNIT_RWLOCK and restarts with
    the smallest UNIT_NUMBER above the last one flushed.
 
    If find_unit/find_or_create_unit/find_file/get_unit routines return
@@ -101,10 +103,14 @@ gfc_offset max_offset;
 gfc_offset default_recl;
 
 gfc_unit *unit_root;
+#ifdef __GTHREAD_RWLOCK_INIT
+__gthread_rwlock_t unit_rwlock = __GTHREAD_RWLOCK_INIT;
+#else
 #ifdef __GTHREAD_MUTEX_INIT
-__gthread_mutex_t unit_lock = __GTHREAD_MUTEX_INIT;
+__gthread_mutex_t unit_rwlock = __GTHREAD_MUTEX_INIT;
 #else
-__gthread_mutex_t unit_lock;
+__gthread_mutex_t unit_rwlock;
+#endif
 #endif
 
 /* We use these filenames for error reporting.  */
@@ -329,7 +335,7 @@ get_gfc_unit (int n, int do_create)
   int c, created = 0;
 
   NOTE ("Unit n=%d, do_create = %d", n, do_create);
-  LOCK (&unit_lock);
+  RDLOCK (&unit_rwlock);
 
 retry:
   for (c = 0; c < CACHE_SIZE; c++)
@@ -350,6 +356,17 @@ retry:
       if (c == 0)
 	break;
     }
 
+  /* We did not find a unit in the cache nor in the unit list, so create
+     a new (locked) unit and insert it into the unit list and cache.
+     Manipulating either or both the unit list and the unit cache requires
+     holding a write-lock [for obvious reasons]:
+     1. By separating the read/write lock, contention on the read side is
+	greatly reduced, while the write side is rarely needed once the
+	unit is found in the cache.
+     2. We try to balance the implementation complexity and the performance
+	gains that fit into the current cases we observed by just using a
+	pthread_rwlock.  */
+  RD_TO_WRLOCK (&unit_rwlock);
   if (p == NULL && do_create)
     {
@@ -368,8 +385,8 @@ retry:
   if (created)
     {
       /* Newly created units have their lock held already
-	 from insert_unit.  Just unlock UNIT_LOCK and return.  */
-      UNLOCK (&unit_lock);
+	 from insert_unit.  Just unlock UNIT_RWLOCK and return.  */
+      RWUNLOCK (&unit_rwlock);
       return p;
     }
 
@@ -380,7 +397,7 @@ found:
   if (! TRYLOCK (&p->lock))
     {
       /* assert (p->closed == 0); */
-      UNLOCK (&unit_lock);
+      RWUNLOCK (&unit_rwlock);
       return p;
     }
@@ -388,14 +405,14 @@ found:
     }
 
-  UNLOCK (&unit_lock);
+  RWUNLOCK (&unit_rwlock);
 
   if (p != NULL && (p->child_dtio == 0))
     {
       LOCK (&p->lock);
       if (p->closed)
 	{
-	  LOCK (&unit_lock);
+	  WRLOCK (&unit_rwlock);
 	  UNLOCK (&p->lock);
 	  if (predec_waiting_locked (p) == 0)
 	    destroy_unit_mutex (p);
@@ -593,8 +610,8 @@ init_units (void)
 #endif
 #endif
 
-#ifndef __GTHREAD_MUTEX_INIT
-  __GTHREAD_MUTEX_INIT_FUNCTION (&unit_lock);
+#if (!defined(__GTHREAD_RWLOCK_INIT) && !defined(__GTHREAD_MUTEX_INIT))
+  __GTHREAD_MUTEX_INIT_FUNCTION (&unit_rwlock);
 #endif
 
   if (sizeof (max_offset) == 8)
@@ -731,7 +748,7 @@ close_unit_1 (gfc_unit *u, int locked)
   u->closed = 1;
   if (!locked)
-    LOCK (&unit_lock);
+    WRLOCK (&unit_rwlock);
 
   for (i = 0; i < CACHE_SIZE; i++)
     if (unit_cache[i] == u)
@@ -758,7 +775,7 @@ close_unit_1 (gfc_unit *u, int locked)
   destroy_unit_mutex (u);
 
   if (!locked)
-    UNLOCK (&unit_lock);
+    RWUNLOCK (&unit_rwlock);
 
   return rc;
 }
@@ -795,10 +812,10 @@ close_unit (gfc_unit *u)
 void
 close_units (void)
 {
-  LOCK (&unit_lock);
+  WRLOCK (&unit_rwlock);
   while (unit_root != NULL)
     close_unit_1 (unit_root, 1);
-  UNLOCK (&unit_lock);
+  RWUNLOCK (&unit_rwlock);
 
   free (newunits);
@@ -905,7 +922,7 @@ finish_last_advance_record (gfc_unit *u)
 int
 newunit_alloc (void)
 {
-  LOCK (&unit_lock);
+  WRLOCK (&unit_rwlock);
   if (!newunits)
     {
       newunits = xcalloc (16, 1);
@@ -919,7 +936,7 @@ newunit_alloc (void)
 	{
 	  newunits[ii] = true;
 	  newunit_lwi = ii + 1;
-	  UNLOCK (&unit_lock);
+	  RWUNLOCK (&unit_rwlock);
 	  return -ii + NEWUNIT_START;
 	}
     }
@@ -932,12 +949,12 @@ newunit_alloc (void)
   memset (newunits + old_size, 0, old_size);
   newunits[old_size] = true;
   newunit_lwi = old_size + 1;
-  UNLOCK (&unit_lock);
+  RWUNLOCK (&unit_rwlock);
 
   return -old_size + NEWUNIT_START;
 }
 
-/* Free a previously allocated newunit= unit number.  unit_lock must
+/* Free a previously allocated newunit= unit number.  unit_rwlock must
    be held when calling.  */
 
 void
diff --git a/libgfortran/io/unix.c b/libgfortran/io/unix.c
index ba12be08252..d53003919ab 100644
--- a/libgfortran/io/unix.c
+++ b/libgfortran/io/unix.c
@@ -1774,7 +1774,7 @@ find_file (const char *file, gfc_charlen_type file_len)
   id = id_from_path (path);
 #endif
 
-  LOCK (&unit_lock);
+  RDLOCK (&unit_rwlock);
 retry:
   u = find_file0 (unit_root, FIND_FILE0_ARGS);
   if (u != NULL)
@@ -1783,19 +1783,19 @@ retry:
       if (! __gthread_mutex_trylock (&u->lock))
 	{
 	  /* assert (u->closed == 0); */
-	  UNLOCK (&unit_lock);
+	  RWUNLOCK (&unit_rwlock);
 	  goto done;
 	}
       inc_waiting_locked (u);
     }
-  UNLOCK (&unit_lock);
+  RWUNLOCK (&unit_rwlock);
   if (u != NULL)
     {
       LOCK (&u->lock);
       if (u->closed)
 	{
-	  LOCK (&unit_lock);
+	  RDLOCK (&unit_rwlock);
 	  UNLOCK (&u->lock);
 	  if (predec_waiting_locked (u) == 0)
 	    free (u);
@@ -1839,13 +1839,13 @@ flush_all_units (void)
   gfc_unit *u;
   int min_unit = 0;
 
-  LOCK (&unit_lock);
+  WRLOCK (&unit_rwlock);
 
   do
     {
       u = flush_all_units_1 (unit_root, min_unit);
       if (u != NULL)
 	inc_waiting_locked (u);
-      UNLOCK (&unit_lock);
+      RWUNLOCK (&unit_rwlock);
       if (u == NULL)
 	return;
 
@@ -1856,13 +1856,13 @@
       if (u->closed == 0)
 	{
 	  sflush (u->s);
-	  LOCK (&unit_lock);
+	  WRLOCK (&unit_rwlock);
 	  UNLOCK (&u->lock);
 	  (void) predec_waiting_locked (u);
 	}
       else
 	{
-	  LOCK (&unit_lock);
+	  WRLOCK (&unit_rwlock);
 	  UNLOCK (&u->lock);
 	  if (predec_waiting_locked (u) == 0)
 	    free (u);