From patchwork Sun Oct 29 23:01:46 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159437
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 07/26] drm/shmem-helper: Use refcount_t for pages_use_count
Date: Mon, 30 Oct 2023 02:01:46 +0300
Message-ID: <20231029230205.93277-8-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
Use the refcount_t helper for pages_use_count to optimize the pin/unpin
functions by skipping reservation locking while the GEM pin refcount is
greater than 1.

Reviewed-by: Boris Brezillon
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
Acked-by: Maxime Ripard
---
 drivers/gpu/drm/drm_gem_shmem_helper.c  | 33 +++++++++++--------------
 drivers/gpu/drm/lima/lima_gem.c         |  2 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c |  2 +-
 include/drm/drm_gem_shmem_helper.h      |  2 +-
 4 files changed, 18 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index b9b71a1a563a..6e02643ed87e 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -155,7 +155,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 		if (shmem->pages)
 			drm_gem_shmem_put_pages_locked(shmem);
 
-		drm_WARN_ON(obj->dev, shmem->pages_use_count);
+		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count));
 		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
 
 		dma_resv_unlock(shmem->base.resv);
@@ -173,14 +173,13 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 
 	dma_resv_assert_held(shmem->base.resv);
 
-	if (shmem->pages_use_count++ > 0)
+	if (refcount_inc_not_zero(&shmem->pages_use_count))
 		return 0;
 
 	pages = drm_gem_get_pages(obj);
 	if (IS_ERR(pages)) {
 		drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
 			    PTR_ERR(pages));
-		shmem->pages_use_count = 0;
 		return PTR_ERR(pages);
 	}
 
@@ -196,6 +195,8 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 
 	shmem->pages = pages;
 
+	refcount_set(&shmem->pages_use_count, 1);
+
 	return 0;
 }
 
@@ -211,21 +212,17 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 
 	dma_resv_assert_held(shmem->base.resv);
 
-	if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
-		return;
-
-	if (--shmem->pages_use_count > 0)
-		return;
-
+	if (refcount_dec_and_test(&shmem->pages_use_count)) {
 #ifdef CONFIG_X86
-	if (shmem->map_wc)
-		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
+		if (shmem->map_wc)
+			set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
 #endif
 
-	drm_gem_put_pages(obj, shmem->pages,
-			  shmem->pages_mark_dirty_on_put,
-			  shmem->pages_mark_accessed_on_put);
-	shmem->pages = NULL;
+		drm_gem_put_pages(obj, shmem->pages,
+				  shmem->pages_mark_dirty_on_put,
+				  shmem->pages_mark_accessed_on_put);
+		shmem->pages = NULL;
+	}
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
 
@@ -552,8 +549,8 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	 * mmap'd, vm_open() just grabs an additional reference for the new
 	 * mm the vma is getting copied into (ie. on fork()).
 	 */
-	if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
-		shmem->pages_use_count++;
+	drm_WARN_ON_ONCE(obj->dev,
+			 !refcount_inc_not_zero(&shmem->pages_use_count));
 
 	dma_resv_unlock(shmem->base.resv);
 
@@ -641,7 +638,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 		return;
 
 	drm_printf_indent(p, indent, "pages_pin_count=%u\n", refcount_read(&shmem->pages_pin_count));
-	drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
+	drm_printf_indent(p, indent, "pages_use_count=%u\n", refcount_read(&shmem->pages_use_count));
 	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
 }
 
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 62d4a409faa8..988e74f67465 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -47,7 +47,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		}
 
 		bo->base.pages = pages;
-		bo->base.pages_use_count = 1;
+		refcount_set(&bo->base.pages_use_count, 1);
 
 		mapping_set_unevictable(mapping);
 	}
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 9fd4a89c52dd..770dab1942c2 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -487,7 +487,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 			goto err_unlock;
 		}
 		bo->base.pages = pages;
-		bo->base.pages_use_count = 1;
+		refcount_set(&bo->base.pages_use_count, 1);
 	} else {
 		pages = bo->base.pages;
 		if (pages[page_offset]) {
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5088bd623518..bd3596e54abe 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -37,7 +37,7 @@ struct drm_gem_shmem_object {
 	 * Reference count on the pages table.
 	 * The pages are put when the count reaches zero.
 	 */
-	unsigned int pages_use_count;
+	refcount_t pages_use_count;
 
 	/**
 	 * @pages_pin_count:
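
For context, the conversion enables the usual refcount_t fast-path pattern: a
reference can be taken without the reservation lock as long as the count is
already non-zero, and the lock is only needed to create the first reference or
to drop the last one. Below is a minimal sketch of that pattern, not the helper
code itself: it uses a plain mutex in place of the dma-resv lock, and
example_obj, example_obj_get_pages(), example_obj_put_pages(),
expensive_alloc() and expensive_free() are made-up names used only for
illustration.

/*
 * Illustrative sketch of the refcount_t fast-path pattern. The real logic
 * lives in drm_gem_shmem_get_pages_locked()/drm_gem_shmem_put_pages_locked()
 * and is guarded by the GEM object's dma-resv lock.
 */
#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/refcount.h>

struct page;

struct page **expensive_alloc(void);		/* hypothetical slow-path setup */
void expensive_free(struct page **pages);	/* hypothetical teardown */

struct example_obj {
	struct mutex lock;	/* stands in for the dma-resv lock */
	refcount_t use_count;	/* 0 means "pages not allocated" */
	struct page **pages;
};

static int example_obj_get_pages(struct example_obj *obj)
{
	/* Fast path: the count is already non-zero, no lock needed. */
	if (refcount_inc_not_zero(&obj->use_count))
		return 0;

	mutex_lock(&obj->lock);

	/* Re-check under the lock in case another thread raced us here. */
	if (!refcount_inc_not_zero(&obj->use_count)) {
		obj->pages = expensive_alloc();
		if (!obj->pages) {
			mutex_unlock(&obj->lock);
			return -ENOMEM;
		}
		/* Publish the first reference only after the pages exist. */
		refcount_set(&obj->use_count, 1);
	}

	mutex_unlock(&obj->lock);

	return 0;
}

static void example_obj_put_pages(struct example_obj *obj)
{
	mutex_lock(&obj->lock);

	/* Dropping the last reference frees the pages under the lock. */
	if (refcount_dec_and_test(&obj->use_count)) {
		expensive_free(obj->pages);
		obj->pages = NULL;
	}

	mutex_unlock(&obj->lock);
}

Because refcount_inc_not_zero() fails once the count has dropped to zero, a
reader that loses the race against the final put simply falls back to the
locked slow path instead of resurrecting freed pages, which is what makes the
unlocked fast path safe.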