From patchwork Sun Oct 29 23:01:40 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159455
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 01/26] drm/gem: Change locked/unlocked postfix of
 drm_gem_v/unmap() function names
Date: Mon, 30 Oct 2023 02:01:40 +0300
Message-ID: <20231029230205.93277-2-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
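The patch below settles on the convention that the lock-taking wrappers carry the plain names, while variants requiring the caller to already hold the reservation lock carry the _locked postfix. As a rough, self-contained sketch of that wrapper pattern (all names here are illustrative stand-ins, with a pthread mutex and a flag modelling the dma-buf reservation lock and lockdep's held-check, not the real kernel API):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Toy stand-ins for the kernel objects; every name is illustrative. */
struct gem_object {
    pthread_mutex_t resv;   /* models obj->resv, the reservation lock */
    int resv_held;          /* models what lockdep would track */
    char buffer[64];        /* pretend backing storage */
    void *vaddr;
};

static void resv_lock(struct gem_object *obj)
{
    pthread_mutex_lock(&obj->resv);
    obj->resv_held = 1;
}

static void resv_unlock(struct gem_object *obj)
{
    obj->resv_held = 0;
    pthread_mutex_unlock(&obj->resv);
}

/* _locked variants: the caller must already hold the reservation lock,
 * mirroring the dma_resv_assert_held() calls in the patch below. */
static int gem_vmap_locked(struct gem_object *obj, void **map)
{
    assert(obj->resv_held);
    obj->vaddr = obj->buffer;
    *map = obj->vaddr;
    return 0;
}

static void gem_vunmap_locked(struct gem_object *obj, void **map)
{
    assert(obj->resv_held);
    obj->vaddr = NULL;
    *map = NULL; /* always clear the mapping; callers may rely on this */
}

/* Unsuffixed variants take the lock themselves and forward to _locked,
 * which is the shape drm_gem_vmap()/drm_gem_vunmap() have after rename. */
static int gem_vmap(struct gem_object *obj, void **map)
{
    int ret;

    resv_lock(obj);
    ret = gem_vmap_locked(obj, map);
    resv_unlock(obj);
    return ret;
}

static void gem_vunmap(struct gem_object *obj, void **map)
{
    resv_lock(obj);
    gem_vunmap_locked(obj, map);
    resv_unlock(obj);
}
```

The point of the naming rule is that the locking contract is visible at every call site: a name without a postfix manages the lock itself, a _locked name only asserts it.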
Make drm/gem API function names consistent by having the locked functions
use the _locked postfix in the name, while the unlocked variants don't use
an _unlocked postfix. Rename the drm_gem_v/unmap() functions to make them
consistent with the rest of the API functions.

Reviewed-by: Boris Brezillon
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
Acked-by: Maxime Ripard
---
 drivers/gpu/drm/drm_client.c                 |  6 +++---
 drivers/gpu/drm/drm_gem.c                    | 20 ++++++++++----------
 drivers/gpu/drm/drm_gem_framebuffer_helper.c |  6 +++---
 drivers/gpu/drm/drm_internal.h               |  4 ++--
 drivers/gpu/drm/drm_prime.c                  |  4 ++--
 drivers/gpu/drm/lima/lima_sched.c            |  4 ++--
 drivers/gpu/drm/panfrost/panfrost_dump.c     |  4 ++--
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c  |  6 +++---
 include/drm/drm_gem.h                        |  4 ++--
 9 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index 2762572f286e..c935db1ba918 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -265,7 +265,7 @@ void drm_client_dev_restore(struct drm_device *dev)
 static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
 {
 	if (buffer->gem) {
-		drm_gem_vunmap_unlocked(buffer->gem, &buffer->map);
+		drm_gem_vunmap(buffer->gem, &buffer->map);
 		drm_gem_object_put(buffer->gem);
 	}
@@ -349,7 +349,7 @@ drm_client_buffer_vmap(struct drm_client_buffer *buffer,
 	 * fd_install step out of the driver backend hooks, to make that
 	 * final step optional for internal users.
 	 */
-	ret = drm_gem_vmap_unlocked(buffer->gem, map);
+	ret = drm_gem_vmap(buffer->gem, map);
 	if (ret)
 		return ret;
@@ -371,7 +371,7 @@ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
 {
 	struct iosys_map *map = &buffer->map;

-	drm_gem_vunmap_unlocked(buffer->gem, map);
+	drm_gem_vunmap(buffer->gem, map);
 }
 EXPORT_SYMBOL(drm_client_buffer_vunmap);

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 44a948b80ee1..95327b003692 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1175,7 +1175,7 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 	obj->funcs->unpin(obj);
 }

-int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
+int drm_gem_vmap_locked(struct drm_gem_object *obj, struct iosys_map *map)
 {
 	int ret;
@@ -1192,9 +1192,9 @@ int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)

 	return 0;
 }
-EXPORT_SYMBOL(drm_gem_vmap);
+EXPORT_SYMBOL(drm_gem_vmap_locked);

-void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
+void drm_gem_vunmap_locked(struct drm_gem_object *obj, struct iosys_map *map)
 {
 	dma_resv_assert_held(obj->resv);
@@ -1207,27 +1207,27 @@ void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
 	/* Always set the mapping to NULL. Callers may rely on this. */
 	iosys_map_clear(map);
 }
-EXPORT_SYMBOL(drm_gem_vunmap);
+EXPORT_SYMBOL(drm_gem_vunmap_locked);

-int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map)
+int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
 {
 	int ret;

 	dma_resv_lock(obj->resv, NULL);
-	ret = drm_gem_vmap(obj, map);
+	ret = drm_gem_vmap_locked(obj, map);
 	dma_resv_unlock(obj->resv);

 	return ret;
 }
-EXPORT_SYMBOL(drm_gem_vmap_unlocked);
+EXPORT_SYMBOL(drm_gem_vmap);

-void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map)
+void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
 {
 	dma_resv_lock(obj->resv, NULL);
-	drm_gem_vunmap(obj, map);
+	drm_gem_vunmap_locked(obj, map);
 	dma_resv_unlock(obj->resv);
 }
-EXPORT_SYMBOL(drm_gem_vunmap_unlocked);
+EXPORT_SYMBOL(drm_gem_vunmap);

 /**
  * drm_gem_lock_reservations - Sets up the ww context and acquires

diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
index 3bdb6ba37ff4..3808f47310bf 100644
--- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c
+++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
@@ -362,7 +362,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, struct iosys_map *map,
 			ret = -EINVAL;
 			goto err_drm_gem_vunmap;
 		}
-		ret = drm_gem_vmap_unlocked(obj, &map[i]);
+		ret = drm_gem_vmap(obj, &map[i]);
 		if (ret)
 			goto err_drm_gem_vunmap;
 	}
@@ -384,7 +384,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, struct iosys_map *map,
 		obj = drm_gem_fb_get_obj(fb, i);
 		if (!obj)
 			continue;
-		drm_gem_vunmap_unlocked(obj, &map[i]);
+		drm_gem_vunmap(obj, &map[i]);
 	}
 	return ret;
 }
@@ -411,7 +411,7 @@ void drm_gem_fb_vunmap(struct drm_framebuffer *fb, struct iosys_map *map)
 			continue;
 		if (iosys_map_is_null(&map[i]))
 			continue;
-		drm_gem_vunmap_unlocked(obj, &map[i]);
+		drm_gem_vunmap(obj, &map[i]);
 	}
 }
 EXPORT_SYMBOL(drm_gem_fb_vunmap);

diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index 8462b657c375..61179f89a941 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -177,8 +177,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
 int drm_gem_pin(struct drm_gem_object *obj);
 void drm_gem_unpin(struct drm_gem_object *obj);
-int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map);
-void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
+int drm_gem_vmap_locked(struct drm_gem_object *obj, struct iosys_map *map);
+void drm_gem_vunmap_locked(struct drm_gem_object *obj, struct iosys_map *map);

 /* drm_debugfs.c drm_debugfs_crc.c */
 #if defined(CONFIG_DEBUG_FS)

diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 63b709a67471..57ac5623f09a 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -682,7 +682,7 @@ int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct iosys_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;

-	return drm_gem_vmap(obj, map);
+	return drm_gem_vmap_locked(obj, map);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
@@ -698,7 +698,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;

-	drm_gem_vunmap(obj, map);
+	drm_gem_vunmap_locked(obj, map);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);

diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index ffd91a5ee299..843487128544 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -371,7 +371,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 		} else {
 			buffer_chunk->size = lima_bo_size(bo);

-			ret = drm_gem_vmap_unlocked(&bo->base.base, &map);
+			ret = drm_gem_vmap(&bo->base.base, &map);
 			if (ret) {
 				kvfree(et);
 				goto out;
@@ -379,7 +379,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 			memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);

-			drm_gem_vunmap_unlocked(&bo->base.base, &map);
+			drm_gem_vunmap(&bo->base.base, &map);
 		}

 		buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;

diff --git a/drivers/gpu/drm/panfrost/panfrost_dump.c b/drivers/gpu/drm/panfrost/panfrost_dump.c
index e7942ac449c6..0f30bbea9895 100644
--- a/drivers/gpu/drm/panfrost/panfrost_dump.c
+++ b/drivers/gpu/drm/panfrost/panfrost_dump.c
@@ -209,7 +209,7 @@ void panfrost_core_dump(struct panfrost_job *job)
 			goto dump_header;
 		}

-		ret = drm_gem_vmap_unlocked(&bo->base.base, &map);
+		ret = drm_gem_vmap(&bo->base.base, &map);
 		if (ret) {
 			dev_err(pfdev->dev, "Panfrost Dump: couldn't map Buffer Object\n");
 			iter.hdr->bomap.valid = 0;
@@ -236,7 +236,7 @@ void panfrost_core_dump(struct panfrost_job *job)
 		vaddr = map.vaddr;
 		memcpy(iter.data, vaddr, bo->base.base.size);

-		drm_gem_vunmap_unlocked(&bo->base.base, &map);
+		drm_gem_vunmap(&bo->base.base, &map);

 		iter.hdr->bomap.valid = 1;

diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index ba9b6e2b2636..52befead08c6 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -106,7 +106,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 		goto err_close_bo;
 	}

-	ret = drm_gem_vmap_unlocked(&bo->base, &map);
+	ret = drm_gem_vmap(&bo->base, &map);
 	if (ret)
 		goto err_put_mapping;
 	perfcnt->buf = map.vaddr;
@@ -165,7 +165,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 	return 0;

 err_vunmap:
-	drm_gem_vunmap_unlocked(&bo->base, &map);
+	drm_gem_vunmap(&bo->base, &map);
 err_put_mapping:
 	panfrost_gem_mapping_put(perfcnt->mapping);
 err_close_bo:
@@ -195,7 +195,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
 			  GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
 	perfcnt->user = NULL;
-	drm_gem_vunmap_unlocked(&perfcnt->mapping->obj->base.base, &map);
+	drm_gem_vunmap(&perfcnt->mapping->obj->base.base, &map);
 	perfcnt->buf = NULL;

 	panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
 	panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);

diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 16364487fde9..3daa8db644c3 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -527,8 +527,8 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj);
 void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
		       bool dirty, bool accessed);

-int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map);
-void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map);
+int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map);
+void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map);

 int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
			   int count, struct drm_gem_object ***objs_out);

From patchwork Sun Oct 29 23:01:41 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159449
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 02/26] drm/gem: Add _locked postfix to functions that
 have unlocked counterpart
Date: Mon, 30 Oct 2023 02:01:41 +0300
Message-ID: <20231029230205.93277-3-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
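In the patch below, the _locked postfix signals that a function asserts rather than acquires the reservation lock, so lock-holding paths such as a shrinker scan can call it safely. A hedged user-space sketch of that contract (illustrative names only, with a pthread mutex and a flag standing in for the reservation lock and lockdep; not the kernel API):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Illustrative model only: names mimic the convention, not the kernel. */
struct lru_object {
    pthread_mutex_t resv;
    int resv_held;
    bool evicted;
};

/* The _locked postfix is a promise in the name: this function asserts
 * the reservation lock instead of taking it. */
static int gem_evict_locked(struct lru_object *obj)
{
    assert(obj->resv_held); /* models dma_resv_assert_held(obj->resv) */
    obj->evicted = true;
    return 0;
}

/* A caller that manages the lock explicitly around the _locked call,
 * the way an LRU/shrinker scan would. */
static int shrink_one(struct lru_object *obj)
{
    int ret;

    pthread_mutex_lock(&obj->resv);
    obj->resv_held = 1;
    ret = gem_evict_locked(obj);
    obj->resv_held = 0;
    pthread_mutex_unlock(&obj->resv);
    return ret;
}
```

With the old name, nothing at the call site distinguished "takes the lock" from "expects the lock"; the rename makes the second case grep-able.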
Add the _locked postfix to drm_gem functions that have unlocked
counterpart functions, to make GEM function naming more consistent and
intuitive with regard to the locking requirements.

Reviewed-by: Boris Brezillon
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
Acked-by: Maxime Ripard
---
 drivers/gpu/drm/drm_gem.c | 6 +++---
 include/drm/drm_gem.h     | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 95327b003692..4523cd40fb2f 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1490,10 +1490,10 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,
 EXPORT_SYMBOL(drm_gem_lru_scan);

 /**
- * drm_gem_evict - helper to evict backing pages for a GEM object
+ * drm_gem_evict_locked - helper to evict backing pages for a GEM object
  * @obj: obj in question
  */
-int drm_gem_evict(struct drm_gem_object *obj)
+int drm_gem_evict_locked(struct drm_gem_object *obj)
 {
 	dma_resv_assert_held(obj->resv);
@@ -1505,4 +1505,4 @@ int drm_gem_evict(struct drm_gem_object *obj)

 	return 0;
 }
-EXPORT_SYMBOL(drm_gem_evict);
+EXPORT_SYMBOL(drm_gem_evict_locked);

diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 3daa8db644c3..c55d8571dbb3 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -551,7 +551,7 @@ unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
			       unsigned long *remaining,
			       bool (*shrink)(struct drm_gem_object *obj));

-int drm_gem_evict(struct drm_gem_object *obj);
+int drm_gem_evict_locked(struct drm_gem_object *obj);

 #ifdef CONFIG_LOCKDEP
 /**

From patchwork Sun Oct 29 23:01:42 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159439
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 03/26] drm/shmem-helper: Make all exported symbols GPL
Date: Mon, 30 Oct 2023 02:01:42 +0300
Message-ID: <20231029230205.93277-4-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
Make all drm-shmem exported symbols GPL to make them consistent with
the rest of the drm-shmem symbols.

Reviewed-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
Acked-by: Maxime Ripard
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index e435f986cd13..0d61f2b3e213 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -226,7 +226,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
			  shmem->pages_mark_accessed_on_put);
 	shmem->pages = NULL;
 }
-EXPORT_SYMBOL(drm_gem_shmem_put_pages);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages);

 static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 {
@@ -271,7 +271,7 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)

 	return ret;
 }
-EXPORT_SYMBOL(drm_gem_shmem_pin);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_pin);

 /**
  * drm_gem_shmem_unpin - Unpin backing pages for a shmem GEM object
@@ -290,7 +290,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 	drm_gem_shmem_unpin_locked(shmem);
 	dma_resv_unlock(shmem->base.resv);
 }
-EXPORT_SYMBOL(drm_gem_shmem_unpin);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);

 /*
  * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
@@ -360,7 +360,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,

 	return ret;
 }
-EXPORT_SYMBOL(drm_gem_shmem_vmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap);

 /*
  * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
@@ -396,7 +396,7 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,

 	shmem->vaddr = NULL;
 }
-EXPORT_SYMBOL(drm_gem_shmem_vunmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap);

 static int
 drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
@@ -435,7 +435,7 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)

 	return (madv >= 0);
 }
-EXPORT_SYMBOL(drm_gem_shmem_madvise);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise);

 void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 {
@@ -467,7 +467,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
 }
-EXPORT_SYMBOL(drm_gem_shmem_purge);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_purge);

 /**
  * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
@@ -642,7 +642,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
 }
-EXPORT_SYMBOL(drm_gem_shmem_print_info);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_print_info);

 /**
  * drm_gem_shmem_get_sg_table - Provide a scatter/gather table of pinned

From patchwork Sun Oct 29 23:01:43 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159459
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 04/26] drm/shmem-helper: Refactor locked/unlocked functions
Date: Mon, 30 Oct 2023 02:01:43 +0300
Message-ID: <20231029230205.93277-5-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
X-Mailing-List:
linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (snail.vger.email [0.0.0.0]); Sun, 29 Oct 2023 16:43:11 -0700 (PDT) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1781135302839617638 X-GMAIL-MSGID: 1781135302839617638 Add locked and remove unlocked postfixes from drm-shmem function names, making names consistent with the drm/gem core code. Reviewed-by: Boris Brezillon Suggested-by: Boris Brezillon Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 64 +++++++++---------- drivers/gpu/drm/lima/lima_gem.c | 8 +-- drivers/gpu/drm/panfrost/panfrost_drv.c | 2 +- drivers/gpu/drm/panfrost/panfrost_gem.c | 6 +- .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 2 +- drivers/gpu/drm/panfrost/panfrost_mmu.c | 2 +- drivers/gpu/drm/v3d/v3d_bo.c | 4 +- drivers/gpu/drm/virtio/virtgpu_object.c | 4 +- include/drm/drm_gem_shmem_helper.h | 36 +++++------ 9 files changed, 64 insertions(+), 64 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 0d61f2b3e213..154585ddae08 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -43,8 +43,8 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = { .pin = drm_gem_shmem_object_pin, .unpin = drm_gem_shmem_object_unpin, .get_sg_table = drm_gem_shmem_object_get_sg_table, - .vmap = drm_gem_shmem_object_vmap, - .vunmap = drm_gem_shmem_object_vunmap, + .vmap = drm_gem_shmem_object_vmap_locked, + .vunmap = drm_gem_shmem_object_vunmap_locked, .mmap = drm_gem_shmem_object_mmap, .vm_ops = &drm_gem_shmem_vm_ops, }; @@ -153,7 +153,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) kfree(shmem->sgt); } if (shmem->pages) - drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_put_pages_locked(shmem); drm_WARN_ON(obj->dev, shmem->pages_use_count); @@ -165,7 +165,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) } 
EXPORT_SYMBOL_GPL(drm_gem_shmem_free); -static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) +static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; struct page **pages; @@ -199,12 +199,12 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) } /* - * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object + * drm_gem_shmem_put_pages_locked - Decrease use count on the backing pages for a shmem GEM object * @shmem: shmem GEM object * * This function decreases the use count and puts the backing pages when use drops to zero. */ -void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem) +void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; @@ -226,7 +226,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem) shmem->pages_mark_accessed_on_put); shmem->pages = NULL; } -EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages); +EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked); static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem) { @@ -234,7 +234,7 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem) dma_resv_assert_held(shmem->base.resv); - ret = drm_gem_shmem_get_pages(shmem); + ret = drm_gem_shmem_get_pages_locked(shmem); return ret; } @@ -243,7 +243,7 @@ static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem) { dma_resv_assert_held(shmem->base.resv); - drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_put_pages_locked(shmem); } /** @@ -293,7 +293,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem) EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin); /* - * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object + * drm_gem_shmem_vmap_locked - Create a virtual mapping for a shmem GEM object * @shmem: shmem GEM object * @map: Returns the kernel virtual address of the SHMEM GEM object's 
backing * store. @@ -302,13 +302,13 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin); * exists for the buffer backing the shmem GEM object. It hides the differences * between dma-buf imported and natively allocated objects. * - * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap(). + * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap_locked(). * * Returns: * 0 on success or a negative error code on failure. */ -int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, - struct iosys_map *map) +int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, + struct iosys_map *map) { struct drm_gem_object *obj = &shmem->base; int ret = 0; @@ -331,7 +331,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, return 0; } - ret = drm_gem_shmem_get_pages(shmem); + ret = drm_gem_shmem_get_pages_locked(shmem); if (ret) goto err_zero_use; @@ -354,28 +354,28 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, err_put_pages: if (!obj->import_attach) - drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_put_pages_locked(shmem); err_zero_use: shmem->vmap_use_count = 0; return ret; } -EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap); +EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap_locked); /* - * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object + * drm_gem_shmem_vunmap_locked - Unmap a virtual mapping for a shmem GEM object * @shmem: shmem GEM object * @map: Kernel virtual address where the SHMEM GEM object was mapped * * This function cleans up a kernel virtual address mapping acquired by - * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to - * zero. + * drm_gem_shmem_vmap_locked(). The mapping is only removed when the use count + * drops to zero. * * This function hides the differences between dma-buf imported and natively * allocated objects. 
*/ -void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, - struct iosys_map *map) +void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, + struct iosys_map *map) { struct drm_gem_object *obj = &shmem->base; @@ -391,12 +391,12 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, return; vunmap(shmem->vaddr); - drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_put_pages_locked(shmem); } shmem->vaddr = NULL; } -EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap); +EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap_locked); static int drm_gem_shmem_create_with_handle(struct drm_file *file_priv, @@ -424,7 +424,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv, /* Update madvise status, returns true if not purged, else * false or -errno. */ -int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv) +int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv) { dma_resv_assert_held(shmem->base.resv); @@ -435,9 +435,9 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv) return (madv >= 0); } -EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise); +EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise_locked); -void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem) +void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; struct drm_device *dev = obj->dev; @@ -451,7 +451,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem) kfree(shmem->sgt); shmem->sgt = NULL; - drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_put_pages_locked(shmem); shmem->madv = -1; @@ -467,7 +467,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem) invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1); } -EXPORT_SYMBOL_GPL(drm_gem_shmem_purge); +EXPORT_SYMBOL_GPL(drm_gem_shmem_purge_locked); /** * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object @@ -564,7 +564,7 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma) struct 
drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); dma_resv_lock(shmem->base.resv, NULL); - drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_put_pages_locked(shmem); dma_resv_unlock(shmem->base.resv); drm_gem_vm_close(vma); @@ -611,7 +611,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct } dma_resv_lock(shmem->base.resv, NULL); - ret = drm_gem_shmem_get_pages(shmem); + ret = drm_gem_shmem_get_pages_locked(shmem); dma_resv_unlock(shmem->base.resv); if (ret) @@ -679,7 +679,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_ drm_WARN_ON(obj->dev, obj->import_attach); - ret = drm_gem_shmem_get_pages(shmem); + ret = drm_gem_shmem_get_pages_locked(shmem); if (ret) return ERR_PTR(ret); @@ -701,7 +701,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_ sg_free_table(sgt); kfree(sgt); err_put_pages: - drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_put_pages_locked(shmem); return ERR_PTR(ret); } diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index 4f9736e5f929..62d4a409faa8 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -180,7 +180,7 @@ static int lima_gem_pin(struct drm_gem_object *obj) if (bo->heap_size) return -EINVAL; - return drm_gem_shmem_pin(&bo->base); + return drm_gem_shmem_object_pin(obj); } static int lima_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map) @@ -190,7 +190,7 @@ static int lima_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map) if (bo->heap_size) return -EINVAL; - return drm_gem_shmem_vmap(&bo->base, map); + return drm_gem_shmem_object_vmap_locked(obj, map); } static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) @@ -200,7 +200,7 @@ static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) if (bo->heap_size) return -EINVAL; - return drm_gem_shmem_mmap(&bo->base, vma); + return 
drm_gem_shmem_object_mmap(obj, vma); } static const struct drm_gem_object_funcs lima_gem_funcs = { @@ -212,7 +212,7 @@ static const struct drm_gem_object_funcs lima_gem_funcs = { .unpin = drm_gem_shmem_object_unpin, .get_sg_table = drm_gem_shmem_object_get_sg_table, .vmap = lima_gem_vmap, - .vunmap = drm_gem_shmem_object_vunmap, + .vunmap = drm_gem_shmem_object_vunmap_locked, .mmap = lima_gem_mmap, .vm_ops = &drm_gem_shmem_vm_ops, }; diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c index b834777b409b..7f2aba96d5b9 100644 --- a/drivers/gpu/drm/panfrost/panfrost_drv.c +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c @@ -438,7 +438,7 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data, } } - args->retained = drm_gem_shmem_madvise(&bo->base, args->madv); + args->retained = drm_gem_shmem_madvise_locked(&bo->base, args->madv); if (args->retained) { if (args->madv == PANFROST_MADV_DONTNEED) diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c index 0cf64456e29a..6b77d8cebcb2 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c @@ -192,7 +192,7 @@ static int panfrost_gem_pin(struct drm_gem_object *obj) if (bo->is_heap) return -EINVAL; - return drm_gem_shmem_pin(&bo->base); + return drm_gem_shmem_object_pin(obj); } static enum drm_gem_object_status panfrost_gem_status(struct drm_gem_object *obj) @@ -231,8 +231,8 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = { .pin = panfrost_gem_pin, .unpin = drm_gem_shmem_object_unpin, .get_sg_table = drm_gem_shmem_object_get_sg_table, - .vmap = drm_gem_shmem_object_vmap, - .vunmap = drm_gem_shmem_object_vunmap, + .vmap = drm_gem_shmem_object_vmap_locked, + .vunmap = drm_gem_shmem_object_vunmap_locked, .mmap = drm_gem_shmem_object_mmap, .status = panfrost_gem_status, .rss = panfrost_gem_rss, diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c 
b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c index 6a71a2555f85..72193bd734e1 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c @@ -52,7 +52,7 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj) goto unlock_mappings; panfrost_gem_teardown_mappings_locked(bo); - drm_gem_shmem_purge(&bo->base); + drm_gem_shmem_purge_locked(&bo->base); ret = true; dma_resv_unlock(shmem->base.resv); diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c index 846dd697c410..9fd4a89c52dd 100644 --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c @@ -536,7 +536,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, err_map: sg_free_table(sgt); err_pages: - drm_gem_shmem_put_pages(&bo->base); + drm_gem_shmem_put_pages_locked(&bo->base); err_unlock: dma_resv_unlock(obj->resv); err_bo: diff --git a/drivers/gpu/drm/v3d/v3d_bo.c b/drivers/gpu/drm/v3d/v3d_bo.c index 8b3229a37c6d..42cd874f6810 100644 --- a/drivers/gpu/drm/v3d/v3d_bo.c +++ b/drivers/gpu/drm/v3d/v3d_bo.c @@ -56,8 +56,8 @@ static const struct drm_gem_object_funcs v3d_gem_funcs = { .pin = drm_gem_shmem_object_pin, .unpin = drm_gem_shmem_object_unpin, .get_sg_table = drm_gem_shmem_object_get_sg_table, - .vmap = drm_gem_shmem_object_vmap, - .vunmap = drm_gem_shmem_object_vunmap, + .vmap = drm_gem_shmem_object_vmap_locked, + .vunmap = drm_gem_shmem_object_vunmap_locked, .mmap = drm_gem_shmem_object_mmap, .vm_ops = &drm_gem_shmem_vm_ops, }; diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c index c7e74cf13022..ee5d2a70656b 100644 --- a/drivers/gpu/drm/virtio/virtgpu_object.c +++ b/drivers/gpu/drm/virtio/virtgpu_object.c @@ -106,8 +106,8 @@ static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = { .pin = drm_gem_shmem_object_pin, .unpin = drm_gem_shmem_object_unpin, .get_sg_table = 
drm_gem_shmem_object_get_sg_table, - .vmap = drm_gem_shmem_object_vmap, - .vunmap = drm_gem_shmem_object_vunmap, + .vmap = drm_gem_shmem_object_vmap_locked, + .vunmap = drm_gem_shmem_object_vunmap_locked, .mmap = drm_gem_shmem_object_mmap, .vm_ops = &drm_gem_shmem_vm_ops, }; diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index bf0c31aa8fbe..6ee4a4046980 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -99,16 +99,16 @@ struct drm_gem_shmem_object { struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size); void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem); -void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem); +void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem); int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem); void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem); -int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, - struct iosys_map *map); -void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, - struct iosys_map *map); +int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, + struct iosys_map *map); +void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, + struct iosys_map *map); int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma); -int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv); +int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv); static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem) { @@ -117,7 +117,7 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem !shmem->base.dma_buf && !shmem->base.import_attach; } -void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem); +void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem); struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object 
*shmem); struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem); @@ -208,22 +208,22 @@ static inline struct sg_table *drm_gem_shmem_object_get_sg_table(struct drm_gem_ } /* - * drm_gem_shmem_object_vmap - GEM object function for drm_gem_shmem_vmap() + * drm_gem_shmem_object_vmap_locked - GEM object function for drm_gem_shmem_vmap_locked() * @obj: GEM object * @map: Returns the kernel virtual address of the SHMEM GEM object's backing store. * - * This function wraps drm_gem_shmem_vmap(). Drivers that employ the shmem helpers should - * use it as their &drm_gem_object_funcs.vmap handler. + * This function wraps drm_gem_shmem_vmap_locked(). Drivers that employ the shmem + * helpers should use it as their &drm_gem_object_funcs.vmap handler. * * Returns: * 0 on success or a negative error code on failure. */ -static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj, - struct iosys_map *map) +static inline int drm_gem_shmem_object_vmap_locked(struct drm_gem_object *obj, + struct iosys_map *map) { struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); - return drm_gem_shmem_vmap(shmem, map); + return drm_gem_shmem_vmap_locked(shmem, map); } /* @@ -231,15 +231,15 @@ static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj, * @obj: GEM object * @map: Kernel virtual address where the SHMEM GEM object was mapped * - * This function wraps drm_gem_shmem_vunmap(). Drivers that employ the shmem helpers should - * use it as their &drm_gem_object_funcs.vunmap handler. + * This function wraps drm_gem_shmem_vunmap_locked(). Drivers that employ the shmem + * helpers should use it as their &drm_gem_object_funcs.vunmap handler. 
*/ -static inline void drm_gem_shmem_object_vunmap(struct drm_gem_object *obj, - struct iosys_map *map) +static inline void drm_gem_shmem_object_vunmap_locked(struct drm_gem_object *obj, + struct iosys_map *map) { struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); - drm_gem_shmem_vunmap(shmem, map); + drm_gem_shmem_vunmap_locked(shmem, map); } /** From patchwork Sun Oct 29 23:01:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 159434 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:d641:0:b0:403:3b70:6f57 with SMTP id cy1csp1891889vqb; Sun, 29 Oct 2023 16:10:07 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEpk7eUMB0q/ppYiKcXv7lccTwansLyjcIhC3VSOjbvzPqhD0COIhElsxxg4F2K1k+w5IWx X-Received: by 2002:a05:6a21:33a7:b0:17a:fe0a:c66c with SMTP id yy39-20020a056a2133a700b0017afe0ac66cmr14428976pzb.2.1698621007056; Sun, 29 Oct 2023 16:10:07 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1698621007; cv=none; d=google.com; s=arc-20160816; b=rLueqbL3qbkDYHUR418Q8+47ZwustnO+6yqhVn4e2+algT1/tIuJkz3ZBjU12w+GrP JuBfsSX3rsSUIbtNJLGh/k0fT1IUXph5KwVXcY2vXwTM4xJl5/I+0JOY/ebm3jXKYxfy MxZfJ0DnpFTjxaP4ax7JzG5L8wmjdurWyZNHDyeZFFo8MugJWt7n3EnVSJ6kxueNij7z GcvRtONZzlm3RwLBcaSndx0OvJoco4SWfHpziik+s/e9/9qjzJ5M3Sjw1PaIqGs2wRoi KYw6/kTZsPli1YO3jE82ceXfx0E0I7pHOJrguL9000+wlIV7gnVpc7b/qHjNYeUGgy/2 cZDg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=xWPnYDvf6DjGzD0P0Ek0j2bZ3W+V3/76LDf2dQbzMuY=; fh=5cel2jD5h+yPMXVwxbomyVhwojHUqATy6nFjd4aOh4o=; b=t2NB59t33qmK3QOTmRLff5G4yDsncqPelhr+x1DqGIJDWY1Zje6Z3ooGYsKiT9SZyt sd+0iuqH0ZrHHG35UQiUhJd9asQX8m2FlvyjLg7MzmE6jAf5+lv8cpX0DsaCF5XYT6lg pOXUPdbjsLADD0U2n2K3QGmiEQlBiQjl006h2YFAXJoSSJH22g9PRrIaN5cB0MDTI7IV 
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 05/26] drm/shmem-helper: Remove obsoleted is_iomem test
Date: Mon, 30 Oct 2023 02:01:44 +0300
Message-ID: <20231029230205.93277-6-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Everything that uses the mapped buffer should be agnostic to is_iomem.
The only reason for the is_iomem test is that we're setting shmem->vaddr
to the returned map->vaddr. Now that the shmem->vaddr code is gone,
remove the obsoleted is_iomem test to clean up the code.

Suggested-by: Thomas Zimmermann
Signed-off-by: Dmitry Osipenko
Reviewed-by: Boris Brezillon
Acked-by: Maxime Ripard
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 154585ddae08..2cc0601865f6 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -315,12 +315,6 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 
 	if (obj->import_attach) {
 		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
-		if (!ret) {
-			if (drm_WARN_ON(obj->dev, map->is_iomem)) {
-				dma_buf_vunmap(obj->import_attach->dmabuf, map);
-				return -EIO;
-			}
-		}
 	} else {
 		pgprot_t prot = PAGE_KERNEL;

From patchwork Sun Oct 29 23:01:45 2023
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 06/26] drm/shmem-helper: Add and use pages_pin_count
Date: Mon, 30 Oct 2023 02:01:45 +0300
Message-ID: <20231029230205.93277-7-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Add a separate pages_pin_count for tracking whether drm-shmem pages are
movable or not. With the addition of memory shrinker support to drm-shmem,
pages_use_count will no longer determine whether pages are hard-pinned in
memory; it will only say whether the pages exist and are soft-pinned (and
could be swapped out). A pages_pin_count > 0 hard-pins the pages in memory.

Reviewed-by: Boris Brezillon
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
Acked-by: Maxime Ripard
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 25 +++++++++++++++++--------
 include/drm/drm_gem_shmem_helper.h     | 11 +++++++++++
 2 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 2cc0601865f6..b9b71a1a563a 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -156,6 +156,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 			drm_gem_shmem_put_pages_locked(shmem);
 
 		drm_WARN_ON(obj->dev, shmem->pages_use_count);
+		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
 
 		dma_resv_unlock(shmem->base.resv);
 	}
@@ -234,18 +235,16 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 
 	dma_resv_assert_held(shmem->base.resv);
 
+	if (refcount_inc_not_zero(&shmem->pages_pin_count))
+		return 0;
+
 	ret = drm_gem_shmem_get_pages_locked(shmem);
+	if (!ret)
+		refcount_set(&shmem->pages_pin_count, 1);
 
 	return ret;
 }
 
-static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
-{
-	dma_resv_assert_held(shmem->base.resv);
-
-	drm_gem_shmem_put_pages_locked(shmem);
-}
-
 /**
  * drm_gem_shmem_pin - Pin backing pages for a shmem GEM object
  * @shmem: shmem GEM object
@@ -263,6 +262,9 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
+	if (refcount_inc_not_zero(&shmem->pages_pin_count))
+		return 0;
+
 	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ret;
@@ -286,8 +288,14 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
+	if (refcount_dec_not_one(&shmem->pages_pin_count))
+		return;
+
 	dma_resv_lock(shmem->base.resv, NULL);
-	drm_gem_shmem_unpin_locked(shmem);
+
+	if (refcount_dec_and_test(&shmem->pages_pin_count))
+		drm_gem_shmem_put_pages_locked(shmem);
+
 	dma_resv_unlock(shmem->base.resv);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
@@ -632,6 +640,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 	if (shmem->base.import_attach)
 		return;
 
+	drm_printf_indent(p, indent, "pages_pin_count=%u\n", refcount_read(&shmem->pages_pin_count));
 	drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
 	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 6ee4a4046980..5088bd623518 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -39,6 +39,17 @@ struct drm_gem_shmem_object {
 	 */
 	unsigned int pages_use_count;
 
+	/**
+	 * @pages_pin_count:
+	 *
+	 * Reference count on the pinned pages table.
+	 *
+	 * Pages are hard-pinned and reside in memory if count
+	 * greater than zero. Otherwise, when count is zero, the pages are
+	 * allowed to be evicted and purged by memory shrinker.
+	 */
+	refcount_t pages_pin_count;
+
 	/**
 	 * @madv: State for madvise
 	 *

From patchwork Sun Oct 29 23:01:46 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159437
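Aside: the pin fast path introduced in patch 06 above follows a common lockless pattern — `refcount_inc_not_zero()` lets every pin after the first skip the reservation lock, and only the 0→1 transition takes the slow path. A minimal userspace sketch of that pattern using C11 atomics (hypothetical names, an analog of `refcount_inc_not_zero()`, not the kernel API):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct obj {
	atomic_uint pin_count;		/* stands in for refcount_t pages_pin_count */
	unsigned int get_pages_calls;	/* counts slow-path entries */
};

/* Increment iff the count is already non-zero (refcount_inc_not_zero analog). */
static bool inc_not_zero(atomic_uint *r)
{
	unsigned int old = atomic_load(r);

	while (old != 0)
		if (atomic_compare_exchange_weak(r, &old, old + 1))
			return true;
	return false;
}

static void obj_pin(struct obj *o)
{
	if (inc_not_zero(&o->pin_count))
		return;			/* fast path: no lock taken */

	/* slow path: the reservation lock would be held here;
	 * allocate pages, then publish the first reference */
	o->get_pages_calls++;
	atomic_store(&o->pin_count, 1);
}

static bool obj_unpin(struct obj *o)	/* returns true when pages were put */
{
	return atomic_fetch_sub(&o->pin_count, 1) == 1;
}
```

Repeated pins hit the slow path only once; only the last unpin reports that the pages should be released.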
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 07/26] drm/shmem-helper: Use refcount_t for pages_use_count
Date: Mon, 30 Oct 2023 02:01:46 +0300
Message-ID: <20231029230205.93277-8-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
Use the atomic refcount_t helper for pages_use_count to optimize the
pin/unpin functions by skipping reservation locking while the GEM's pin
refcount is greater than 1.

Reviewed-by: Boris Brezillon
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
Acked-by: Maxime Ripard
---
 drivers/gpu/drm/drm_gem_shmem_helper.c  | 33 +++++++++++--------------
 drivers/gpu/drm/lima/lima_gem.c         |  2 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c |  2 +-
 include/drm/drm_gem_shmem_helper.h      |  2 +-
 4 files changed, 18 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index b9b71a1a563a..6e02643ed87e 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -155,7 +155,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 		if (shmem->pages)
 			drm_gem_shmem_put_pages_locked(shmem);
 
-		drm_WARN_ON(obj->dev, shmem->pages_use_count);
+		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count));
 		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
 
 		dma_resv_unlock(shmem->base.resv);
@@ -173,14 +173,13 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 
 	dma_resv_assert_held(shmem->base.resv);
 
-	if (shmem->pages_use_count++ > 0)
+	if (refcount_inc_not_zero(&shmem->pages_use_count))
 		return 0;
 
 	pages = drm_gem_get_pages(obj);
 	if (IS_ERR(pages)) {
 		drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
 			    PTR_ERR(pages));
-		shmem->pages_use_count = 0;
 		return PTR_ERR(pages);
 	}
 
@@ -196,6 +195,8 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 
 	shmem->pages = pages;
 
+	refcount_set(&shmem->pages_use_count, 1);
+
 	return 0;
 }
 
@@ -211,21 +212,17 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 
 	dma_resv_assert_held(shmem->base.resv);
 
-	if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
-		return;
-
-	if (--shmem->pages_use_count > 0)
-		return;
-
+	if (refcount_dec_and_test(&shmem->pages_use_count)) {
 #ifdef CONFIG_X86
-	if (shmem->map_wc)
-		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
+		if (shmem->map_wc)
+			set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
 #endif
 
-	drm_gem_put_pages(obj, shmem->pages,
-			  shmem->pages_mark_dirty_on_put,
-			  shmem->pages_mark_accessed_on_put);
-	shmem->pages = NULL;
+		drm_gem_put_pages(obj, shmem->pages,
+				  shmem->pages_mark_dirty_on_put,
+				  shmem->pages_mark_accessed_on_put);
+		shmem->pages = NULL;
+	}
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
 
@@ -552,8 +549,8 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	 * mmap'd, vm_open() just grabs an additional reference for the new
 	 * mm the vma is getting copied into (ie. on fork()).
 	 */
-	if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
-		shmem->pages_use_count++;
+	drm_WARN_ON_ONCE(obj->dev,
+			 !refcount_inc_not_zero(&shmem->pages_use_count));
 
 	dma_resv_unlock(shmem->base.resv);
 
@@ -641,7 +638,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 		return;
 
 	drm_printf_indent(p, indent, "pages_pin_count=%u\n", refcount_read(&shmem->pages_pin_count));
-	drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
+	drm_printf_indent(p, indent, "pages_use_count=%u\n", refcount_read(&shmem->pages_use_count));
 	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
 }
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 62d4a409faa8..988e74f67465 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -47,7 +47,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		}
 
 		bo->base.pages = pages;
-		bo->base.pages_use_count = 1;
+		refcount_set(&bo->base.pages_use_count, 1);
 
 		mapping_set_unevictable(mapping);
 	}
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 9fd4a89c52dd..770dab1942c2 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -487,7 +487,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 			goto err_unlock;
 		}
 		bo->base.pages = pages;
-		bo->base.pages_use_count = 1;
+		refcount_set(&bo->base.pages_use_count, 1);
 	} else {
 		pages = bo->base.pages;
 		if (pages[page_offset]) {
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5088bd623518..bd3596e54abe 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -37,7 +37,7 @@ struct drm_gem_shmem_object {
 	 * Reference count on the pages table.
 	 * The pages are put when the count reaches zero.
 	 */
-	unsigned int pages_use_count;
+	refcount_t pages_use_count;
 
 	/**
 	 * @pages_pin_count:

From patchwork Sun Oct 29 23:01:47 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159452
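Aside: a subtle consequence of the refcount_t conversion in patch 07 above is that the count is only published (set to 1) after the pages are successfully allocated, so the error path no longer has to reset it, and the last put releases the pages exactly once. A hedged userspace sketch of that get/put shape (hypothetical names, C11 atomics standing in for refcount_t):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

struct shmem {
	atomic_uint pages_use_count;	/* analog of refcount_t pages_use_count */
	void *pages;
};

/* refcount_inc_not_zero() analog: take a reference only if one exists. */
static bool inc_not_zero(atomic_uint *r)
{
	unsigned int old = atomic_load(r);

	while (old != 0)
		if (atomic_compare_exchange_weak(r, &old, old + 1))
			return true;
	return false;
}

static int get_pages(struct shmem *s, bool fail_alloc)
{
	if (inc_not_zero(&s->pages_use_count))
		return 0;		/* pages already exist: just reference them */

	if (fail_alloc)
		return -1;		/* count was never raised: nothing to undo */

	s->pages = malloc(4096);
	atomic_store(&s->pages_use_count, 1);	/* publish only on success */
	return 0;
}

static void put_pages(struct shmem *s)
{
	/* refcount_dec_and_test() analog: free on the 1 -> 0 transition */
	if (atomic_fetch_sub(&s->pages_use_count, 1) == 1) {
		free(s->pages);
		s->pages = NULL;
	}
}
```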
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 08/26] drm/shmem-helper: Add and use lockless drm_gem_shmem_get_pages()
Date: Mon, 30 Oct 2023 02:01:47 +0300
Message-ID: <20231029230205.93277-9-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
Add a lockless drm_gem_shmem_get_pages() helper that skips taking the
reservation lock when pages_use_count is non-zero, leveraging the
atomicity of refcount_t. Make drm_gem_shmem_mmap() use the new helper.

Reviewed-by: Boris Brezillon
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 6e02643ed87e..41b749bedb11 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -226,6 +226,20 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
 
+static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+{
+	int ret;
+
+	if (refcount_inc_not_zero(&shmem->pages_use_count))
+		return 0;
+
+	dma_resv_lock(shmem->base.resv, NULL);
+	ret = drm_gem_shmem_get_pages_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
+
+	return ret;
+}
+
 static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 {
 	int ret;
@@ -609,10 +623,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
 		return ret;
 	}
 
-	dma_resv_lock(shmem->base.resv, NULL);
-	ret = drm_gem_shmem_get_pages_locked(shmem);
-	dma_resv_unlock(shmem->base.resv);
-
+	ret = drm_gem_shmem_get_pages(shmem);
 	if (ret)
 		return ret;

From patchwork Sun Oct 29 23:01:48 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159436
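Aside: the lockless wrapper in patch 08 above has a generic shape — try the atomic fast path first, and only fall back to "lock, call the `_locked` variant, unlock" for the first reference. A hypothetical userspace sketch with a pthread mutex standing in for the reservation lock, counting how often the lock is actually taken:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static pthread_mutex_t resv = PTHREAD_MUTEX_INITIALIZER;	/* "resv" lock analog */
static atomic_uint use_count;
static unsigned int lock_acquisitions;

/* refcount_inc_not_zero() analog. */
static bool inc_not_zero(atomic_uint *r)
{
	unsigned int old = atomic_load(r);

	while (old != 0)
		if (atomic_compare_exchange_weak(r, &old, old + 1))
			return true;
	return false;
}

static int get_pages_locked(void)	/* caller holds resv */
{
	atomic_store(&use_count, 1);	/* "allocate" and publish first ref */
	return 0;
}

static int get_pages(void)
{
	int ret;

	if (inc_not_zero(&use_count))
		return 0;		/* fast path: no lock after the first get */

	pthread_mutex_lock(&resv);
	lock_acquisitions++;
	ret = get_pages_locked();
	pthread_mutex_unlock(&resv);

	return ret;
}
```

Only the first get pays for the lock; every later caller increments the count atomically.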
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 09/26] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
Date: Mon, 30 Oct 2023 02:01:48 +0300
Message-ID: <20231029230205.93277-10-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
Vmapped pages must be pinned in memory, and previously get/put_pages()
implicitly hard-pinned/unpinned them. This will no longer be the case once
the memory shrinker is added, because pages_use_count > 0 will no longer
determine whether pages are hard-pinned (they will only be soft-pinned);
the new pages_pin_count does the hard-pinning. Switch vmap/vunmap() to the
pin/unpin() functions in preparation for adding memory shrinker support
to drm-shmem.

Reviewed-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
Acked-by: Maxime Ripard
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 19 ++++++++++++-------
 include/drm/drm_gem_shmem_helper.h     |  2 +-
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 41b749bedb11..6f963c2c1ecc 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -256,6 +256,14 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 
 	return ret;
 }
 
+static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
+{
+	dma_resv_assert_held(shmem->base.resv);
+
+	if (refcount_dec_and_test(&shmem->pages_pin_count))
+		drm_gem_shmem_put_pages_locked(shmem);
+}
+
 /**
  * drm_gem_shmem_pin - Pin backing pages for a shmem GEM object
  * @shmem: shmem GEM object
@@ -303,10 +311,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 		return;
 
 	dma_resv_lock(shmem->base.resv, NULL);
-
-	if (refcount_dec_and_test(&shmem->pages_pin_count))
-		drm_gem_shmem_put_pages_locked(shmem);
-
+	drm_gem_shmem_unpin_locked(shmem);
 	dma_resv_unlock(shmem->base.resv);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
@@ -344,7 +349,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 			return 0;
 		}
 
-		ret = drm_gem_shmem_get_pages_locked(shmem);
+		ret = drm_gem_shmem_pin_locked(shmem);
 		if (ret)
 			goto err_zero_use;
 
@@ -367,7 +372,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 
 err_put_pages:
 	if (!obj->import_attach)
-		drm_gem_shmem_put_pages_locked(shmem);
+		drm_gem_shmem_unpin_locked(shmem);
 err_zero_use:
 	shmem->vmap_use_count = 0;
 
@@ -404,7 +409,7 @@ void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 			return;
 
 		vunmap(shmem->vaddr);
-		drm_gem_shmem_put_pages_locked(shmem);
+		drm_gem_shmem_unpin_locked(shmem);
 	}
 
 	shmem->vaddr = NULL;
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index bd3596e54abe..a6de11001048 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -124,7 +124,7 @@ int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv);
 static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
 {
 	return (shmem->madv > 0) &&
-		!shmem->vmap_use_count && shmem->sgt &&
+		!refcount_read(&shmem->pages_pin_count) && shmem->sgt &&
 		!shmem->base.dma_buf && !shmem->base.import_attach;
 }

From patchwork Sun Oct 29 23:01:49 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159457
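Aside: the net effect of patch 09 above is that the purgeability check keys off pages_pin_count instead of vmap_use_count — vmap now pins, so *any* pin (vmap included) blocks purging. A hypothetical sketch of that predicate, simplified to plain fields (not the kernel structures):

```c
#include <assert.h>
#include <stdbool.h>

struct bo {
	int madv;			/* > 0: userspace marked it purgeable */
	unsigned int pages_pin_count;	/* analog of refcount_t pages_pin_count */
	bool has_sgt;			/* pages are mapped for the GPU */
	bool exported;			/* dma_buf / import_attach in the kernel */
};

/* Analog of drm_gem_shmem_is_purgeable() after patch 09. */
static bool is_purgeable(const struct bo *b)
{
	return b->madv > 0 && b->pages_pin_count == 0 &&
	       b->has_sgt && !b->exported;
}

static void bo_vmap(struct bo *b)   { b->pages_pin_count++; }	/* vmap pins */
static void bo_vunmap(struct bo *b) { b->pages_pin_count--; }
```

A vmapped object is never purgeable; it becomes purgeable again once the last pin drops.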
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 10/26] drm/shmem-helper: Use refcount_t for vmap_use_count
Date: Mon, 30 Oct 2023 02:01:49 +0300
Message-ID: <20231029230205.93277-11-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
Use the refcount_t helper for vmap_use_count to make its refcounting
consistent with pages_use_count and pages_pin_count, which already use
refcount_t. This also lets the vmapping code benefit from refcount_t's
overflow checks.

Reviewed-by: Boris Brezillon
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
Acked-by: Maxime Ripard
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 28 +++++++++++---------------
 include/drm/drm_gem_shmem_helper.h     |  2 +-
 2 files changed, 13 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 6f963c2c1ecc..08b5a57c59d8 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -144,7 +144,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 	} else {
 		dma_resv_lock(shmem->base.resv, NULL);

-		drm_WARN_ON(obj->dev, shmem->vmap_use_count);
+		drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count));

 		if (shmem->sgt) {
 			dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
@@ -344,23 +344,25 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,

 		dma_resv_assert_held(shmem->base.resv);

-		if (shmem->vmap_use_count++ > 0) {
+		if (refcount_inc_not_zero(&shmem->vmap_use_count)) {
 			iosys_map_set_vaddr(map, shmem->vaddr);
 			return 0;
 		}

 		ret = drm_gem_shmem_pin_locked(shmem);
 		if (ret)
-			goto err_zero_use;
+			return ret;

 		if (shmem->map_wc)
 			prot = pgprot_writecombine(prot);
 		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
 				    VM_MAP, prot);
-		if (!shmem->vaddr)
+		if (!shmem->vaddr) {
 			ret = -ENOMEM;
-		else
+		} else {
 			iosys_map_set_vaddr(map, shmem->vaddr);
+			refcount_set(&shmem->vmap_use_count, 1);
+		}
 	}

 	if (ret) {
@@ -373,8 +375,6 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,

err_put_pages:
	if (!obj->import_attach)
		drm_gem_shmem_unpin_locked(shmem);
-err_zero_use:
-	shmem->vmap_use_count = 0;

 	return ret;
 }
@@ -402,14 +402,10 @@ void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 	} else {
 		dma_resv_assert_held(shmem->base.resv);

-		if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
-			return;
-
-		if (--shmem->vmap_use_count > 0)
-			return;
-
-		vunmap(shmem->vaddr);
-		drm_gem_shmem_unpin_locked(shmem);
+		if (refcount_dec_and_test(&shmem->vmap_use_count)) {
+			vunmap(shmem->vaddr);
+			drm_gem_shmem_unpin_locked(shmem);
+		}
 	}

 	shmem->vaddr = NULL;
@@ -655,7 +651,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 	drm_printf_indent(p, indent, "pages_pin_count=%u\n", refcount_read(&shmem->pages_pin_count));
 	drm_printf_indent(p, indent, "pages_use_count=%u\n", refcount_read(&shmem->pages_use_count));
-	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
+	drm_printf_indent(p, indent, "vmap_use_count=%u\n", refcount_read(&shmem->vmap_use_count));
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_print_info);

diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index a6de11001048..e7b3f4c02bf5 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -82,7 +82,7 @@ struct drm_gem_shmem_object {
 	 * Reference count on the virtual address.
 	 * The address are un-mapped when the count reaches zero.
 	 */
-	unsigned int vmap_use_count;
+	refcount_t vmap_use_count;

 	/**
 	 * @pages_mark_dirty_on_put:

From patchwork Sun Oct 29 23:01:50 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159456
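The converted drm_gem_shmem_vmap_locked()/vunmap_locked() pair above relies on two refcount_t primitives: refcount_inc_not_zero() to reuse an existing mapping, and refcount_dec_and_test() to tear it down when the last user goes away. A minimal userspace sketch of that flow, using C11 atomics and hypothetical obj_vmap()/obj_vunmap() helpers standing in for the kernel code (this is not the kernel API, just its shape):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct shmem_obj {
	atomic_uint vmap_use_count; /* mirrors refcount_t vmap_use_count */
	void *vaddr;                /* mirrors shmem->vaddr */
};

/* Like refcount_inc_not_zero(): increment only if currently non-zero. */
static bool refcount_inc_not_zero(atomic_uint *r)
{
	unsigned int old = atomic_load(r);

	while (old != 0) {
		if (atomic_compare_exchange_weak(r, &old, old + 1))
			return true;
	}
	return false;
}

/* Like refcount_dec_and_test(): decrement, true when it hits zero. */
static bool refcount_dec_and_test(atomic_uint *r)
{
	return atomic_fetch_sub(r, 1) == 1;
}

static char fake_mapping; /* stand-in for a vmap()ed address */

/* Mirrors the vmap_locked() flow: reuse the existing mapping when the
 * count is non-zero, otherwise create one and set the count to 1. */
static void *obj_vmap(struct shmem_obj *o)
{
	if (refcount_inc_not_zero(&o->vmap_use_count))
		return o->vaddr;

	o->vaddr = &fake_mapping;            /* "vmap" */
	atomic_store(&o->vmap_use_count, 1); /* refcount_set(..., 1) */
	return o->vaddr;
}

/* Mirrors the vunmap_locked() flow: tear down only on the last user. */
static void obj_vunmap(struct shmem_obj *o)
{
	if (refcount_dec_and_test(&o->vmap_use_count))
		o->vaddr = NULL;             /* "vunmap" */
}
```

The kernel's refcount_t additionally saturates instead of wrapping on overflow/underflow, which is exactly the hardening the commit message refers to; this sketch omits saturation for brevity.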
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 11/26] drm/shmem-helper: Prepare drm_gem_shmem_free() to shrinker addition
Date: Mon, 30 Oct 2023 02:01:50 +0300
Message-ID: <20231029230205.93277-12-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
Prepare drm_gem_shmem_free() for the addition of memory shrinker support
to drm-shmem by adding, and using, a variant of put_pages() that doesn't
touch the reservation lock. The reservation lock shouldn't be taken in
the free path because lockdep would trigger a bogus warning about lock
contention with the fs_reclaim code paths: that contention can't actually
happen while the GEM object is being freed, but lockdep doesn't know that.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 35 +++++++++++++-------------
 1 file changed, 18 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 08b5a57c59d8..24ff2b99e75b 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -128,6 +128,22 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);

+static void
+drm_gem_shmem_free_pages(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+#ifdef CONFIG_X86
+	if (shmem->map_wc)
+		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
+#endif
+
+	drm_gem_put_pages(obj, shmem->pages,
+			  shmem->pages_mark_dirty_on_put,
+			  shmem->pages_mark_accessed_on_put);
+	shmem->pages = NULL;
+}
+
 /**
  * drm_gem_shmem_free - Free resources associated with a shmem GEM object
  * @shmem: shmem GEM object to free
@@ -142,8 +158,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 	if (obj->import_attach) {
 		drm_prime_gem_destroy(obj, shmem->sgt);
 	} else {
-		dma_resv_lock(shmem->base.resv, NULL);
-
 		drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count));

 		if (shmem->sgt) {
@@ -157,8 +171,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)

 		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count));
 		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
-
-		dma_resv_unlock(shmem->base.resv);
 	}

 	drm_gem_object_release(obj);
@@ -208,21 +220,10 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
  */
 void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 {
-	struct drm_gem_object *obj = &shmem->base;
-
 	dma_resv_assert_held(shmem->base.resv);

-	if (refcount_dec_and_test(&shmem->pages_use_count)) {
-#ifdef CONFIG_X86
-		if (shmem->map_wc)
-			set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
-#endif
-
-		drm_gem_put_pages(obj, shmem->pages,
-				  shmem->pages_mark_dirty_on_put,
-				  shmem->pages_mark_accessed_on_put);
-		shmem->pages = NULL;
-	}
+	if (refcount_dec_and_test(&shmem->pages_use_count))
+		drm_gem_shmem_free_pages(shmem);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);

From patchwork Sun Oct 29 23:01:51 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159458
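The refactor above extracts a common page-freeing helper that makes no locking assumptions: the refcounted put path calls it under the reservation lock, while the object-free path calls it lock-free, since by definition no other user can exist once the final GEM reference is gone. A small portable C sketch of that pattern (hypothetical names; a bool stands in for holding the dma-resv lock):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical miniature of the shmem object. */
struct obj {
	unsigned int pages_use_count;
	bool pages_present;
	bool locked; /* stand-in for "dma-resv lock is held" */
};

/* The extracted helper: frees pages, makes no locking assumptions. */
static void free_pages(struct obj *o)
{
	o->pages_present = false;
}

/* Put path: runs under the reservation lock, drops the refcount. */
static void put_pages_locked(struct obj *o)
{
	assert(o->locked); /* dma_resv_assert_held() */
	if (--o->pages_use_count == 0)
		free_pages(o);
}

/* Free path: no users can remain, so the lock (and the lockdep
 * fs_reclaim false positive that taking it would cause) is avoided. */
static void obj_free(struct obj *o)
{
	assert(!o->locked);
	if (o->pages_present)
		free_pages(o);
}
```

Sharing one helper between the two callers keeps the CONFIG_X86 write-back handling and drm_gem_put_pages() bookkeeping in a single place, which is what the kernel patch achieves with drm_gem_shmem_free_pages().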
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 12/26] drm/shmem-helper: Make drm_gem_shmem_get_pages() public
Date: Mon, 30 Oct 2023 02:01:51 +0300
Message-ID: <20231029230205.93277-13-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
We're going to move away from having an implicit get_pages() done by
get_pages_sgt(), to simplify the refcnt handling. Drivers will manage
get/put_pages() by themselves. Expose drm_gem_shmem_get_pages() as a
public drm-shmem API.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 10 +++++++++-
 include/drm/drm_gem_shmem_helper.h     |  1 +
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 24ff2b99e75b..ca6f422c0dfc 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -227,7 +227,14 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);

-static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+/*
+ * drm_gem_shmem_get_pages - Increase use count on the backing pages for a shmem GEM object
+ * @shmem: shmem GEM object
+ *
+ * This function increases the use count and allocates the backing pages if
+ * the use count was zero.
+ */
+int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 {
 	int ret;

@@ -240,6 +247,7 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)

 	return ret;
 }
+EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages);

 static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 {

diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index e7b3f4c02bf5..45cd293e10a4 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -110,6 +110,7 @@ struct drm_gem_shmem_object {
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
 void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);

+int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);

From patchwork Sun Oct 29 23:01:52 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159453
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 13/26] drm/shmem-helper: Add drm_gem_shmem_put_pages()
Date: Mon, 30 Oct 2023 02:01:52 +0300
Message-ID: <20231029230205.93277-14-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
We're going to move away from having an implicit get_pages() done by
get_pages_sgt() in order to simplify the refcount handling. Drivers will
manage get/put_pages() by themselves. Add drm_gem_shmem_put_pages().

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 20 ++++++++++++++++++++
 include/drm/drm_gem_shmem_helper.h     |  1 +
 2 files changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index ca6f422c0dfc..f371ebc6f85c 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -217,6 +217,7 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
  * @shmem: shmem GEM object
  *
  * This function decreases the use count and puts the backing pages when use drops to zero.
+ * Caller must hold GEM's reservation lock.
  */
 void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 {
@@ -227,6 +228,25 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
 
+/*
+ * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
+ * @shmem: shmem GEM object
+ *
+ * This function decreases the use count and puts the backing pages when use drops to zero.
+ * It's the unlocked version of drm_gem_shmem_put_pages_locked(); the caller must not hold
+ * GEM's reservation lock.
+ */
+void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
+{
+	if (refcount_dec_not_one(&shmem->pages_use_count))
+		return;
+
+	dma_resv_lock(shmem->base.resv, NULL);
+	drm_gem_shmem_put_pages_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages);
+
 /*
  * drm_gem_shmem_get_pages - Increase use count on the backing pages for a shmem GEM object
  * @shmem: shmem GEM object
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 45cd293e10a4..6aad3e27d7ee 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -111,6 +111,7 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
 
 int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
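The refcount_dec_not_one() fast path in the patch above avoids taking the reservation lock on every put: the lock is only needed when the count may actually drop to zero. Below is a minimal userspace sketch of the same pattern; all toy_* names are invented for illustration, with a pthread mutex standing in for the GEM reservation lock. This is an analogue, not kernel code.

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Toy stand-ins for the kernel objects; all names here are invented. */
struct toy_shmem {
	pthread_mutex_t resv;        /* stands in for the GEM reservation lock */
	atomic_uint pages_use_count;
	bool pages_freed;
};

/* Mirrors the idea of refcount_dec_not_one(): decrement unless count is 1. */
static bool toy_dec_not_one(atomic_uint *r)
{
	unsigned int old = atomic_load(r);

	while (old != 1) {
		if (atomic_compare_exchange_weak(r, &old, old - 1))
			return true;
		/* CAS failure reloaded 'old'; retry. */
	}
	return false;
}

/* Locked put: drop the final reference and release the backing pages. */
static void toy_put_pages_locked(struct toy_shmem *s)
{
	atomic_fetch_sub(&s->pages_use_count, 1);
	s->pages_freed = true;
}

/* Unlocked put: take the lock only when this may be the last reference. */
static void toy_put_pages(struct toy_shmem *s)
{
	if (toy_dec_not_one(&s->pages_use_count))
		return;

	pthread_mutex_lock(&s->resv);
	toy_put_pages_locked(s);
	pthread_mutex_unlock(&s->resv);
}
```

The fast path never touches the mutex while other references remain, which is why the unlocked put is safe to call from contexts that must not take the reservation lock.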
From: Dmitry Osipenko
Subject: [PATCH v18 14/26] drm/lima: Explicitly get and put drm-shmem pages
Date: Mon, 30 Oct 2023 02:01:53 +0300
Message-ID: <20231029230205.93277-15-dmitry.osipenko@collabora.com>
To simplify the drm-shmem refcount handling, we're moving away from the
implicit get_pages() that is done by get_pages_sgt(). From now on, drivers
will have to pin the pages while they use the sgt. The lima driver doesn't
have a shrinker, hence the pages stay pinned and the sgt remains valid as
long as the pages' use count is above zero.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/lima/lima_gem.c | 18 ++++++++++++++++--
 drivers/gpu/drm/lima/lima_gem.h |  1 +
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 988e74f67465..d255f5775dac 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -46,6 +46,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
 		return -ENOMEM;
 	}
 
+	bo->put_pages = true;
 	bo->base.pages = pages;
 	refcount_set(&bo->base.pages_use_count, 1);
 
@@ -115,6 +116,7 @@ int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file,
 		return PTR_ERR(shmem);
 
 	obj = &shmem->base;
+	bo = to_lima_bo(obj);
 
 	/* Mali Utgard GPU can only support 32bit address space */
 	mask = mapping_gfp_mask(obj->filp->f_mapping);
@@ -123,13 +125,19 @@ int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file,
 	mapping_set_gfp_mask(obj->filp->f_mapping, mask);
 
 	if (is_heap) {
-		bo = to_lima_bo(obj);
 		err = lima_heap_alloc(bo, NULL);
 		if (err)
 			goto out;
 	} else {
-		struct sg_table *sgt = drm_gem_shmem_get_pages_sgt(shmem);
+		struct sg_table *sgt;
+
+		err = drm_gem_shmem_get_pages(shmem);
+		if (err)
+			goto out;
+
+		bo->put_pages = true;
+		sgt = drm_gem_shmem_get_pages_sgt(shmem);
 		if (IS_ERR(sgt)) {
 			err = PTR_ERR(sgt);
 			goto out;
@@ -139,6 +147,9 @@ int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file,
 	err = drm_gem_handle_create(file, obj, handle);
 
 out:
+	if (err && bo->put_pages)
+		drm_gem_shmem_put_pages(shmem);
+
 	/* drop reference from allocate - handle holds it now */
 	drm_gem_object_put(obj);
 
@@ -152,6 +163,9 @@ static void lima_gem_free_object(struct drm_gem_object *obj)
 	if (!list_empty(&bo->va))
 		dev_err(obj->dev->dev, "lima gem free bo still has va\n");
 
+	if (bo->put_pages)
+		drm_gem_shmem_put_pages(&bo->base);
+
 	drm_gem_shmem_free(&bo->base);
 }
 
diff --git a/drivers/gpu/drm/lima/lima_gem.h b/drivers/gpu/drm/lima/lima_gem.h
index ccea06142f4b..dc5a6d465c80 100644
--- a/drivers/gpu/drm/lima/lima_gem.h
+++ b/drivers/gpu/drm/lima/lima_gem.h
@@ -16,6 +16,7 @@ struct lima_bo {
 	struct list_head va;
 
 	size_t heap_size;
+	bool put_pages;
 };
 
 static inline struct lima_bo *
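The lima conversion above follows a common driver ownership pattern: take the pages reference explicitly at creation time, record that the BO now owns one, and drop it on the error path and in the free path only if it was actually taken. A userspace sketch of that pattern is below; all toy_* names are invented for illustration and the error codes are placeholders, not the driver's real API.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy BO mirroring lima's explicit pages ownership; names are invented. */
struct toy_bo {
	int pages_use_count;
	bool put_pages;	/* true once this BO owns a pages reference */
};

static int toy_get_pages(struct toy_bo *bo)
{
	bo->pages_use_count++;
	return 0;
}

static void toy_put_pages(struct toy_bo *bo)
{
	bo->pages_use_count--;
}

/* Creation: take the reference up front and remember that we did. */
static int toy_create(struct toy_bo *bo, bool fail_later)
{
	int err;

	err = toy_get_pages(bo);
	if (err)
		return err;
	bo->put_pages = true;

	if (fail_later) {	/* e.g. later sg-table setup failed */
		err = -1;
		goto out;
	}
	return 0;

out:
	if (err && bo->put_pages)
		toy_put_pages(bo);
	return err;
}

/* Free: drop the reference only if this BO ever took one. */
static void toy_free(struct toy_bo *bo)
{
	if (bo->put_pages)
		toy_put_pages(bo);
}
```

The put_pages flag is what lets one free path serve both heap BOs (whose reference is taken inside the heap allocator) and regular BOs (whose reference is taken in the create path).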
From: Dmitry Osipenko
Subject: [PATCH v18 15/26] drm/panfrost: Explicitly get and put drm-shmem pages
Date: Mon, 30 Oct 2023 02:01:54 +0300
Message-ID: <20231029230205.93277-16-dmitry.osipenko@collabora.com>

To simplify the drm-shmem refcount handling, we're moving away from the
implicit get_pages() that is done by get_pages_sgt(). From now on, drivers
will have to pin the pages while they use the sgt. Panfrost's shrinker
doesn't support swapping out BOs, hence the pages stay pinned and the sgt
remains valid as long as the pages' use count is above zero.
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/panfrost/panfrost_gem.c | 17 +++++++++++++++++
 drivers/gpu/drm/panfrost/panfrost_mmu.c |  6 ++----
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 6b77d8cebcb2..bb9d43cf7c3c 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -47,8 +47,13 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
 			}
 		}
 		kvfree(bo->sgts);
+
+		drm_gem_shmem_put_pages(&bo->base);
 	}
 
+	if (!bo->is_heap && !obj->import_attach)
+		drm_gem_shmem_put_pages(&bo->base);
+
 	drm_gem_shmem_free(&bo->base);
 }
 
@@ -269,6 +274,7 @@ panfrost_gem_create(struct drm_device *dev, size_t size, u32 flags)
 {
 	struct drm_gem_shmem_object *shmem;
 	struct panfrost_gem_object *bo;
+	int err;
 
 	/* Round up heap allocations to 2MB to keep fault handling simple */
 	if (flags & PANFROST_BO_HEAP)
@@ -282,7 +288,18 @@ panfrost_gem_create(struct drm_device *dev, size_t size, u32 flags)
 	bo->noexec = !!(flags & PANFROST_BO_NOEXEC);
 	bo->is_heap = !!(flags & PANFROST_BO_HEAP);
 
+	if (!bo->is_heap) {
+		err = drm_gem_shmem_get_pages(shmem);
+		if (err)
+			goto err_free;
+	}
+
 	return bo;
+
+err_free:
+	drm_gem_shmem_free(&bo->base);
+
+	return ERR_PTR(err);
 }
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 770dab1942c2..ac145a98377b 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -504,7 +504,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		if (IS_ERR(pages[i])) {
 			ret = PTR_ERR(pages[i]);
 			pages[i] = NULL;
-			goto err_pages;
+			goto err_unlock;
 		}
 	}
 
@@ -512,7 +512,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
 					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
 	if (ret)
-		goto err_pages;
+		goto err_unlock;
 
 	ret = dma_map_sgtable(pfdev->dev, sgt, DMA_BIDIRECTIONAL, 0);
 	if (ret)
@@ -535,8 +535,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 
 err_map:
 	sg_free_table(sgt);
-err_pages:
-	drm_gem_shmem_put_pages_locked(&bo->base);
 err_unlock:
 	dma_resv_unlock(obj->resv);
 err_bo:
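The panfrost_mmu change above retargets the error gotos so that each cleanup label undoes only the steps that succeeded before the failure; the pages put is gone because the BO, not the fault handler, now owns the pages reference. A self-contained sketch of that cascading goto-unwind style is below; the toy_* names are invented, and the comments map each stage to the kernel call it stands in for.

```c
#include <assert.h>
#include <stdbool.h>

/* Each flag marks one stage of the fault-mapping path; toy names only. */
struct toy_state {
	bool locked;           /* dma_resv_lock() taken       */
	bool table_allocated;  /* sg_alloc_table_from_pages() */
	bool mapped;           /* dma_map_sgtable()           */
};

/* fail_at selects which step fails: 0 = none, 1 or 2 = that step. */
static int toy_map(struct toy_state *st, int fail_at)
{
	int ret = 0;

	st->locked = true;		/* dma_resv_lock() */

	if (fail_at == 1) {		/* e.g. page allocation failed */
		ret = -1;
		goto err_unlock;
	}
	st->table_allocated = true;	/* sg_alloc_table_from_pages() */

	if (fail_at == 2) {		/* e.g. dma_map_sgtable() failed */
		ret = -1;
		goto err_map;
	}
	st->mapped = true;

	st->locked = false;		/* dma_resv_unlock() on success */
	return 0;

err_map:
	st->table_allocated = false;	/* sg_free_table() */
err_unlock:
	st->locked = false;		/* dma_resv_unlock() */
	return ret;
}
```

The labels fall through in reverse order of setup, so jumping to an earlier label skips the teardown of steps that never ran; that is the invariant the patch restores after removing the pages put.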
From: Dmitry Osipenko
Subject: [PATCH v18 16/26] drm/virtio: Explicitly get and put drm-shmem pages
Date: Mon, 30 Oct 2023 02:01:55 +0300
Message-ID: <20231029230205.93277-17-dmitry.osipenko@collabora.com>
We're moving away from the implicit get_pages() that is done by
get_pages_sgt() to simplify the refcount handling. Drivers will have to pin
the pages while they use the sgt. VirtIO-GPU doesn't support a shrinker,
hence the pages stay pinned and the sgt remains valid as long as the pages'
use count is above zero.

Signed-off-by: Dmitry Osipenko
Reviewed-by: Boris Brezillon
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index ee5d2a70656b..998f8b05ceb1 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -67,6 +67,7 @@ void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
 
 	if (virtio_gpu_is_shmem(bo)) {
+		drm_gem_shmem_put_pages(&bo->base);
 		drm_gem_shmem_free(&bo->base);
 	} else if (virtio_gpu_is_vram(bo)) {
 		struct virtio_gpu_object_vram *vram = to_virtio_gpu_vram(bo);
@@ -196,9 +197,13 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 		return PTR_ERR(shmem_obj);
 	bo = gem_to_virtio_gpu_obj(&shmem_obj->base);
 
+	ret = drm_gem_shmem_get_pages(shmem_obj);
+	if (ret)
+		goto err_free_gem;
+
 	ret = virtio_gpu_resource_id_get(vgdev, &bo->hw_res_handle);
 	if (ret < 0)
-		goto err_free_gem;
+		goto err_put_pages;
 
 	bo->dumb = params->dumb;
 
@@ -243,6 +248,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 	kvfree(ents);
 err_put_id:
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
+err_put_pages:
+	drm_gem_shmem_put_pages(shmem_obj);
 err_free_gem:
 	drm_gem_shmem_free(shmem_obj);
 	return ret;
From: Dmitry Osipenko
Subject: [PATCH v18 17/26] drm/v3d: Explicitly get and put drm-shmem pages
Date: Mon, 30 Oct 2023 02:01:56 +0300
Message-ID: <20231029230205.93277-18-dmitry.osipenko@collabora.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-Greylist: Sender passed SPF test, not delayed by milter-greylist-4.6.4 (agentk.vger.email [0.0.0.0]); Sun, 29 Oct 2023 16:23:13 -0700 (PDT) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1781134047466768541 X-GMAIL-MSGID: 1781134047466768541 To simplify the drm-shmem refcnt handling, we're moving away from the implicit get_pages() that is used by get_pages_sgt(). From now on drivers will have to pin pages while they use sgt. V3D driver doesn't support shrinker, hence pages are pinned and sgt is valid as long as pages' use-count > 0. Signed-off-by: Dmitry Osipenko Reviewed-by: Boris Brezillon --- drivers/gpu/drm/v3d/v3d_bo.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/v3d/v3d_bo.c b/drivers/gpu/drm/v3d/v3d_bo.c index 42cd874f6810..0597c6b01b6c 100644 --- a/drivers/gpu/drm/v3d/v3d_bo.c +++ b/drivers/gpu/drm/v3d/v3d_bo.c @@ -47,6 +47,9 @@ void v3d_free_object(struct drm_gem_object *obj) /* GPU execution may have dirtied any pages in the BO. 
 	 */
 	bo->base.pages_mark_dirty_on_put = true;
 
+	if (!obj->import_attach)
+		drm_gem_shmem_put_pages(&bo->base);
+
 	drm_gem_shmem_free(&bo->base);
 }
 
@@ -135,12 +138,18 @@ struct v3d_bo *v3d_bo_create(struct drm_device *dev, struct drm_file *file_priv,
 		return ERR_CAST(shmem_obj);
 	bo = to_v3d_bo(&shmem_obj->base);
 
-	ret = v3d_bo_create_finish(&shmem_obj->base);
+	ret = drm_gem_shmem_get_pages(shmem_obj);
 	if (ret)
 		goto free_obj;
 
+	ret = v3d_bo_create_finish(&shmem_obj->base);
+	if (ret)
+		goto put_pages;
+
 	return bo;
 
+put_pages:
+	drm_gem_shmem_put_pages(shmem_obj);
 free_obj:
 	drm_gem_shmem_free(shmem_obj);
 	return ERR_PTR(ret);

From patchwork Sun Oct 29 23:01:57 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159447
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 18/26] drm/shmem-helper: Change sgt allocation policy
Date: Mon, 30 Oct 2023 02:01:57 +0300
Message-ID: <20231029230205.93277-19-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
In preparation for the addition of drm-shmem memory shrinker support, change the SGT allocation policy in this way:

1. The SGT can be allocated only if the shmem pages are pinned at the
   time of allocation, otherwise the allocation fails.

2. Drivers must ensure that the pages are pinned during the time of the
   SGT usage and should get a new SGT if the pages were unpinned.

This new policy is required by the shrinker, because it will move pages to/from swap unless the pages are pinned, invalidating the SGT pointer once the pages are relocated. Previous patches prepared drivers for the new policy.
Signed-off-by: Dmitry Osipenko
Reviewed-by: Boris Brezillon
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 51 +++++++++++++-------------
 1 file changed, 26 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index f371ebc6f85c..1420d2166b76 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -133,6 +133,14 @@ drm_gem_shmem_free_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
+	if (shmem->sgt) {
+		dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
+				  DMA_BIDIRECTIONAL, 0);
+		sg_free_table(shmem->sgt);
+		kfree(shmem->sgt);
+		shmem->sgt = NULL;
+	}
+
 #ifdef CONFIG_X86
 	if (shmem->map_wc)
 		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
@@ -155,23 +163,12 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
-	if (obj->import_attach) {
+	if (obj->import_attach)
 		drm_prime_gem_destroy(obj, shmem->sgt);
-	} else {
-		drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count));
-
-		if (shmem->sgt) {
-			dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
-					  DMA_BIDIRECTIONAL, 0);
-			sg_free_table(shmem->sgt);
-			kfree(shmem->sgt);
-		}
-		if (shmem->pages)
-			drm_gem_shmem_put_pages_locked(shmem);
-		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count));
-		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
-	}
+	drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count));
+	drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count));
+	drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
 
 	drm_gem_object_release(obj);
 	kfree(shmem);
@@ -705,6 +702,9 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
+	if (drm_WARN_ON(obj->dev, !shmem->pages))
+		return ERR_PTR(-ENOMEM);
+
 	return drm_prime_pages_to_sg(obj->dev, shmem->pages,
 				     obj->size >> PAGE_SHIFT);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
 
@@ -720,15 +720,10 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
-	ret = drm_gem_shmem_get_pages_locked(shmem);
-	if (ret)
-		return ERR_PTR(ret);
-
 	sgt = drm_gem_shmem_get_sg_table(shmem);
-	if (IS_ERR(sgt)) {
-		ret = PTR_ERR(sgt);
-		goto err_put_pages;
-	}
+	if (IS_ERR(sgt))
+		return sgt;
+
 	/* Map the pages for use by the h/w. */
 	ret = dma_map_sgtable(obj->dev->dev, sgt, DMA_BIDIRECTIONAL, 0);
 	if (ret)
@@ -741,8 +736,6 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 err_free_sgt:
 	sg_free_table(sgt);
 	kfree(sgt);
-err_put_pages:
-	drm_gem_shmem_put_pages_locked(shmem);
 	return ERR_PTR(ret);
 }
 
@@ -759,6 +752,14 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
  * and difference between dma-buf imported and natively allocated objects.
  * drm_gem_shmem_get_sg_table() should not be directly called by drivers.
  *
+ * Drivers should adhere to these SGT usage rules:
+ *
+ * 1. SGT should be allocated only if shmem pages are pinned at the
+ *    time of allocation, otherwise allocation will fail.
+ *
+ * 2. Drivers should ensure that pages are pinned during the time of
+ *    SGT usage and should get new SGT if pages were unpinned.
+ *
  * Returns:
  * A pointer to the scatter/gather table of pinned pages or errno on failure.
 */

From patchwork Sun Oct 29 23:01:58 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159446
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 19/26] drm/shmem-helper: Add common memory shrinker
Date: Mon, 30 Oct 2023 02:01:58 +0300
Message-ID: <20231029230205.93277-20-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
Introduce a common drm-shmem shrinker for DRM drivers. To start using the drm-shmem shrinker, drivers should do the following:

1. Implement the evict() callback of the GEM object, where the driver
   should check whether the object is purgeable or evictable using the
   drm-shmem helpers and perform the shrinking action.

2. Initialize the drm-shmem internals using drmm_gem_shmem_init(drm_device),
   which will register the drm-shmem shrinker.

3. Implement a madvise IOCTL that will use drm_gem_shmem_madvise().

Signed-off-by: Daniel Almeida
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c        | 386 +++++++++++++++++-
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  |   9 +-
 include/drm/drm_device.h                      |  10 +-
 include/drm/drm_gem_shmem_helper.h            |  68 ++-
 4 files changed, 450 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 1420d2166b76..007521bea302 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -88,8 +89,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
 	if (ret)
 		goto err_release;
 
-	INIT_LIST_HEAD(&shmem->madv_list);
-
 	if (!private) {
 		/*
 		 * Our buffers are kept pinned, so allocating them
@@ -128,11 +127,49 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
 
+static bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem)
+{
+	return (shmem->madv >= 0) && shmem->base.funcs->evict &&
+		refcount_read(&shmem->pages_use_count) &&
+		!refcount_read(&shmem->pages_pin_count) &&
+		!shmem->base.dma_buf &&
		!shmem->base.import_attach &&
+		!shmem->evicted;
+}
+
+static void
+drm_gem_shmem_shrinker_update_lru_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem *shmem_mm = obj->dev->shmem_mm;
+	struct drm_gem_shmem_shrinker *shmem_shrinker = &shmem_mm->shrinker;
+
+	dma_resv_assert_held(shmem->base.resv);
+
+	if (!shmem_shrinker || obj->import_attach)
+		return;
+
+	if (shmem->madv < 0)
+		drm_gem_lru_remove(&shmem->base);
+	else if (drm_gem_shmem_is_evictable(shmem) || drm_gem_shmem_is_purgeable(shmem))
+		drm_gem_lru_move_tail(&shmem_shrinker->lru_evictable, &shmem->base);
+	else if (shmem->evicted)
+		drm_gem_lru_move_tail(&shmem_shrinker->lru_evicted, &shmem->base);
+	else if (!shmem->pages)
+		drm_gem_lru_remove(&shmem->base);
+	else
+		drm_gem_lru_move_tail(&shmem_shrinker->lru_pinned, &shmem->base);
+}
+
 static void drm_gem_shmem_free_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
+	if (!shmem->pages) {
+		drm_WARN_ON(obj->dev, !shmem->evicted && shmem->madv >= 0);
+		return;
+	}
+
 	if (shmem->sgt) {
 		dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
 				  DMA_BIDIRECTIONAL, 0);
@@ -175,15 +212,25 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
 
-static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
+static int
+drm_gem_shmem_acquire_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct page **pages;
 
+	drm_WARN_ON(obj->dev, obj->import_attach);
+
 	dma_resv_assert_held(shmem->base.resv);
 
-	if (refcount_inc_not_zero(&shmem->pages_use_count))
+	if (shmem->madv < 0) {
+		drm_WARN_ON(obj->dev, shmem->pages);
+		return -ENOMEM;
+	}
+
+	if (shmem->pages) {
+		drm_WARN_ON(obj->dev, !shmem->evicted);
 		return 0;
+	}
 
 	pages = drm_gem_get_pages(obj);
 	if (IS_ERR(pages)) {
@@ -204,8 +251,29 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 	shmem->pages =
		pages;
+
 	return 0;
+}
+
+static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
+{
+	int err;
+
+	dma_resv_assert_held(shmem->base.resv);
+
+	if (shmem->madv < 0)
+		return -ENOMEM;
+
+	if (refcount_inc_not_zero(&shmem->pages_use_count))
+		return 0;
+
+	err = drm_gem_shmem_acquire_pages(shmem);
+	if (err)
+		return err;
+
+	refcount_set(&shmem->pages_use_count, 1);
+
+	drm_gem_shmem_shrinker_update_lru_locked(shmem);
+
 	return 0;
 }
 
@@ -222,6 +290,8 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 
 	if (refcount_dec_and_test(&shmem->pages_use_count))
 		drm_gem_shmem_free_pages(shmem);
+
+	drm_gem_shmem_shrinker_update_lru_locked(shmem);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
 
@@ -238,6 +308,20 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 	if (refcount_dec_not_one(&shmem->pages_use_count))
 		return;
 
+	/*
+	 * Destroying the object is a special case because acquiring
+	 * the obj lock can cause a locking order inversion between
+	 * reservation_ww_class_mutex and fs_reclaim.
+	 *
+	 * This deadlock is not actually possible, because no one should
+	 * be already holding the lock when GEM is released. Unfortunately
+	 * lockdep is not aware of this detail. So when the refcount drops
+	 * to zero, we pretend it is already locked.
+	 */
+	if (!kref_read(&shmem->base.refcount) &&
+	    refcount_dec_and_test(&shmem->pages_use_count))
+		return drm_gem_shmem_free_pages(shmem);
+
 	dma_resv_lock(shmem->base.resv, NULL);
 	drm_gem_shmem_put_pages_locked(shmem);
 	dma_resv_unlock(shmem->base.resv);
@@ -250,6 +334,11 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages);
  *
  * This function Increases the use count and allocates the backing pages if
  * use-count equals to zero.
+ *
+ * Note that this function doesn't pin pages in memory. If your driver
+ * uses drm-shmem shrinker, then it's free to relocate pages to swap.
+ * Getting pages only guarantees that pages are allocated, and not that
+ * pages reside in memory.
+ * In order to pin pages use drm_gem_shmem_pin().
  */
 int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 {
@@ -275,6 +364,10 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 	if (refcount_inc_not_zero(&shmem->pages_pin_count))
 		return 0;
 
+	ret = drm_gem_shmem_swapin_locked(shmem);
+	if (ret)
+		return ret;
+
 	ret = drm_gem_shmem_get_pages_locked(shmem);
 	if (!ret)
 		refcount_set(&shmem->pages_pin_count, 1);
@@ -473,29 +566,50 @@ int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv)
 		madv = shmem->madv;
 
+	drm_gem_shmem_shrinker_update_lru_locked(shmem);
+
 	return (madv >= 0);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise_locked);
 
-void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
+int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	int ret;
+
+	ret = dma_resv_lock_interruptible(obj->resv, NULL);
+	if (ret)
+		return ret;
+
+	ret = drm_gem_shmem_madvise_locked(shmem, madv);
+	dma_resv_unlock(obj->resv);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise);
+
+static void
+drm_gem_shmem_shrinker_put_pages_locked(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct drm_device *dev = obj->dev;
 
 	dma_resv_assert_held(shmem->base.resv);
 
-	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
+	if (shmem->evicted)
+		return;
 
-	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
-	sg_free_table(shmem->sgt);
-	kfree(shmem->sgt);
-	shmem->sgt = NULL;
+	drm_gem_shmem_free_pages(shmem);
+	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+}
 
-	drm_gem_shmem_put_pages_locked(shmem);
+void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
 
-	shmem->madv = -1;
+	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
 
-	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+
	drm_gem_shmem_shrinker_put_pages_locked(shmem);
 	drm_gem_free_mmap_offset(obj);
 
 	/* Our goal here is to return as much of the memory as
@@ -506,9 +620,45 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
 	shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
 
 	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
+
+	shmem->madv = -1;
+	shmem->evicted = false;
+	drm_gem_shmem_shrinker_update_lru_locked(shmem);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_purge_locked);
 
+/**
+ * drm_gem_shmem_swapin_locked() - Moves shmem GEM back to memory and enables
+ *                                 hardware access to the memory.
+ * @shmem: shmem GEM object
+ *
+ * This function moves shmem GEM back to memory if it was previously evicted
+ * by the memory shrinker. The GEM is ready to use on success.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_swapin_locked(struct drm_gem_shmem_object *shmem)
+{
+	int err;
+
+	dma_resv_assert_held(shmem->base.resv);
+
+	if (!shmem->evicted)
+		return 0;
+
+	err = drm_gem_shmem_acquire_pages(shmem);
+	if (err)
+		return err;
+
+	shmem->evicted = false;
+
+	drm_gem_shmem_shrinker_update_lru_locked(shmem);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_swapin_locked);
+
 /**
  * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
  * @file: DRM file structure to create the dumb buffer for
@@ -555,22 +705,32 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	vm_fault_t ret;
 	struct page *page;
 	pgoff_t page_offset;
+	int err;
 
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
 	dma_resv_lock(shmem->base.resv, NULL);
 
-	if (page_offset >= num_pages ||
-	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
-	    shmem->madv < 0) {
+	err = drm_gem_shmem_swapin_locked(shmem);
+	if (err) {
+		ret = VM_FAULT_OOM;
+		goto unlock;
+	}
+
+	if (page_offset >= num_pages || !shmem->pages) {
 		ret = VM_FAULT_SIGBUS;
 	} else {
+		/*
+		 *
		 shmem->pages is guaranteed to be valid while reservation
+		 * lock is held and drm_gem_shmem_swapin_locked() succeeds.
+		 */
 		page = shmem->pages[page_offset];
 
 		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
 	}
 
+unlock:
 	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
@@ -593,6 +753,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	drm_WARN_ON_ONCE(obj->dev,
 			 !refcount_inc_not_zero(&shmem->pages_use_count));
+	drm_gem_shmem_shrinker_update_lru_locked(shmem);
 	dma_resv_unlock(shmem->base.resv);
 
 	drm_gem_vm_open(vma);
@@ -678,7 +839,9 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 	drm_printf_indent(p, indent, "pages_pin_count=%u\n", refcount_read(&shmem->pages_pin_count));
 	drm_printf_indent(p, indent, "pages_use_count=%u\n", refcount_read(&shmem->pages_use_count));
 	drm_printf_indent(p, indent, "vmap_use_count=%u\n", refcount_read(&shmem->vmap_use_count));
+	drm_printf_indent(p, indent, "evicted=%d\n", shmem->evicted);
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
+	drm_printf_indent(p, indent, "madv=%d\n", shmem->madv);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_print_info);
 
@@ -765,8 +928,12 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
  */
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 {
-	int ret;
+	struct drm_gem_object *obj = &shmem->base;
 	struct sg_table *sgt;
+	int ret;
+
+	drm_WARN_ON(obj->dev, drm_gem_shmem_is_evictable(shmem));
+	drm_WARN_ON(obj->dev, drm_gem_shmem_is_purgeable(shmem));
 
 	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
@@ -813,6 +980,191 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
 
+static struct drm_gem_shmem_shrinker *
+to_drm_gem_shmem_shrinker(struct shrinker *shrinker)
+{
+	return container_of(shrinker, struct drm_gem_shmem_shrinker, base);
+}
+
+static unsigned long
+drm_gem_shmem_shrinker_count_objects(struct shrinker
				     *shrinker,
+				     struct shrink_control *sc)
+{
+	struct drm_gem_shmem_shrinker *shmem_shrinker =
+		to_drm_gem_shmem_shrinker(shrinker);
+	unsigned long count = shmem_shrinker->lru_evictable.count;
+
+	if (count >= SHRINK_EMPTY)
+		return SHRINK_EMPTY - 1;
+
+	return count ?: SHRINK_EMPTY;
+}
+
+void drm_gem_shmem_evict_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_evictable(shmem));
+	drm_WARN_ON(obj->dev, shmem->evicted);
+
+	drm_gem_shmem_shrinker_put_pages_locked(shmem);
+
+	shmem->evicted = true;
+	drm_gem_shmem_shrinker_update_lru_locked(shmem);
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_evict_locked);
+
+static bool drm_gem_shmem_shrinker_evict_locked(struct drm_gem_object *obj)
+{
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+	int err;
+
+	if (!drm_gem_shmem_is_evictable(shmem) ||
+	    get_nr_swap_pages() < obj->size >> PAGE_SHIFT)
+		return false;
+
+	err = drm_gem_evict_locked(obj);
+	if (err)
+		return false;
+
+	return true;
+}
+
+static bool drm_gem_shmem_shrinker_purge_locked(struct drm_gem_object *obj)
+{
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+	int err;
+
+	if (!drm_gem_shmem_is_purgeable(shmem))
+		return false;
+
+	err = drm_gem_evict_locked(obj);
+	if (err)
+		return false;
+
+	return true;
+}
+
+static unsigned long
+drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker,
+				    struct shrink_control *sc)
+{
+	struct drm_gem_shmem_shrinker *shmem_shrinker;
+	unsigned long nr_to_scan = sc->nr_to_scan;
+	unsigned long remaining = 0;
+	unsigned long freed = 0;
+
+	shmem_shrinker = to_drm_gem_shmem_shrinker(shrinker);
+
+	/* purge as many objects as we can */
+	freed += drm_gem_lru_scan(&shmem_shrinker->lru_evictable,
+				  nr_to_scan, &remaining,
+				  drm_gem_shmem_shrinker_purge_locked);
+
+	/* evict as many objects as we can */
+	if (freed < nr_to_scan)
+		freed += drm_gem_lru_scan(&shmem_shrinker->lru_evictable,
+					  nr_to_scan - freed,
					  &remaining,
+					  drm_gem_shmem_shrinker_evict_locked);
+
+	return (freed > 0 && remaining > 0) ? freed : SHRINK_STOP;
+}
+
+static int drm_gem_shmem_shrinker_init(struct drm_gem_shmem *shmem_mm,
+				       const char *shrinker_name)
+{
+	struct drm_gem_shmem_shrinker *shmem_shrinker = &shmem_mm->shrinker;
+	int err;
+
+	shmem_shrinker->base.count_objects = drm_gem_shmem_shrinker_count_objects;
+	shmem_shrinker->base.scan_objects = drm_gem_shmem_shrinker_scan_objects;
+	shmem_shrinker->base.seeks = DEFAULT_SEEKS;
+
+	mutex_init(&shmem_shrinker->lock);
+	drm_gem_lru_init(&shmem_shrinker->lru_evictable, &shmem_shrinker->lock);
+	drm_gem_lru_init(&shmem_shrinker->lru_evicted, &shmem_shrinker->lock);
+	drm_gem_lru_init(&shmem_shrinker->lru_pinned, &shmem_shrinker->lock);
+
+	err = register_shrinker(&shmem_shrinker->base, shrinker_name);
+	if (err) {
+		mutex_destroy(&shmem_shrinker->lock);
+		return err;
+	}
+
+	return 0;
+}
+
+static void drm_gem_shmem_shrinker_release(struct drm_device *dev,
+					   struct drm_gem_shmem *shmem_mm)
+{
+	struct drm_gem_shmem_shrinker *shmem_shrinker = &shmem_mm->shrinker;
+
+	unregister_shrinker(&shmem_shrinker->base);
+	drm_WARN_ON(dev, !list_empty(&shmem_shrinker->lru_evictable.list));
+	drm_WARN_ON(dev, !list_empty(&shmem_shrinker->lru_evicted.list));
+	drm_WARN_ON(dev, !list_empty(&shmem_shrinker->lru_pinned.list));
+	mutex_destroy(&shmem_shrinker->lock);
+}
+
+static int drm_gem_shmem_init(struct drm_device *dev)
+{
+	int err;
+
+	if (drm_WARN_ON(dev, dev->shmem_mm))
+		return -EBUSY;
+
+	dev->shmem_mm = kzalloc(sizeof(*dev->shmem_mm), GFP_KERNEL);
+	if (!dev->shmem_mm)
+		return -ENOMEM;
+
+	err = drm_gem_shmem_shrinker_init(dev->shmem_mm, dev->unique);
+	if (err)
+		goto free_gem_shmem;
+
+	return 0;
+
+free_gem_shmem:
+	kfree(dev->shmem_mm);
+	dev->shmem_mm = NULL;
+
+	return err;
+}
+
+static void drm_gem_shmem_release(struct drm_device *dev, void *ptr)
+{
+	struct drm_gem_shmem *shmem_mm = dev->shmem_mm;
+
+	drm_gem_shmem_shrinker_release(dev,
+				       shmem_mm);
+	dev->shmem_mm = NULL;
+	kfree(shmem_mm);
+}
+
+/**
+ * drmm_gem_shmem_init() - Initialize drm-shmem internals
+ * @dev: DRM device
+ *
+ * Cleanup is automatically managed as part of DRM device releasing.
+ * Calling this function multiple times will result in an error.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drmm_gem_shmem_init(struct drm_device *dev)
+{
+	int err;
+
+	err = drm_gem_shmem_init(dev);
+	if (err)
+		return err;
+
+	err = drmm_add_action_or_reset(dev, drm_gem_shmem_release, NULL);
+	if (err)
+		return err;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(drmm_gem_shmem_init);
+
 MODULE_DESCRIPTION("DRM SHMEM memory-management helpers");
 MODULE_IMPORT_NS(DMA_BUF);
 MODULE_LICENSE("GPL v2");
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index 72193bd734e1..1aa94fff7072 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -15,6 +15,13 @@
 #include "panfrost_gem.h"
 #include "panfrost_mmu.h"
 
+static bool panfrost_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
+{
+	return (shmem->madv > 0) &&
+		!refcount_read(&shmem->pages_pin_count) && shmem->sgt &&
+		!shmem->base.dma_buf && !shmem->base.import_attach;
+}
+
 static unsigned long
 panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 {
@@ -27,7 +34,7 @@ panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc
 		return 0;
 
 	list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) {
-		if (drm_gem_shmem_is_purgeable(shmem))
+		if (panfrost_gem_shmem_is_purgeable(shmem))
 			count += shmem->base.size >> PAGE_SHIFT;
 	}
 
diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
index c490977ee250..9ef31573057c 100644
--- a/include/drm/drm_device.h
+++ b/include/drm/drm_device.h
@@ -16,6 +16,7 @@ struct drm_vblank_crtc;
 struct drm_vma_offset_manager;
 struct drm_vram_mm;
 struct
 drm_fb_helper;
+struct drm_gem_shmem_shrinker;
 
 struct inode;
 
@@ -290,8 +291,13 @@ struct drm_device {
 	/** @vma_offset_manager: GEM information */
 	struct drm_vma_offset_manager *vma_offset_manager;
 
-	/** @vram_mm: VRAM MM memory manager */
-	struct drm_vram_mm *vram_mm;
+	union {
+		/** @vram_mm: VRAM MM memory manager */
+		struct drm_vram_mm *vram_mm;
+
+		/** @shmem_mm: SHMEM GEM memory manager */
+		struct drm_gem_shmem *shmem_mm;
+	};
 
 	/**
 	 * @switch_power_state:
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 6aad3e27d7ee..3bb70616d095 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -13,6 +14,7 @@
 #include
 
 struct dma_buf_attachment;
+struct drm_device;
 struct drm_mode_create_dumb;
 struct drm_printer;
 struct sg_table;
@@ -54,8 +56,8 @@ struct drm_gem_shmem_object {
 	 * @madv: State for madvise
 	 *
 	 * 0 is active/inuse.
+	 * 1 is not-needed/can-be-purged
 	 * A negative value is the object is purged.
-	 * Positive values are driver specific and not used by the helpers.
	 */
	int madv;

@@ -102,6 +104,14 @@ struct drm_gem_shmem_object {
 	 * @map_wc: map object write-combined (instead of using shmem defaults).
 	 */
 	bool map_wc : 1;
+
+	/**
+	 * @evicted: True if shmem pages are evicted by the memory shrinker.
+	 * Used internally by memory shrinker. The evicted pages can be
+	 * moved back to memory using drm_gem_shmem_swapin_locked(), unlike
+	 * the purged pages (madv < 0) that are destroyed permanently.
+	 */
+	bool evicted : 1;
 };
 
 #define to_drm_gem_shmem_obj(obj) \
@@ -122,14 +132,19 @@ void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma);
 
 int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv);
+int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv);
 
 static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
 {
-	return (shmem->madv > 0) &&
-		!refcount_read(&shmem->pages_pin_count) && shmem->sgt &&
+	return (shmem->madv > 0) && shmem->base.funcs->evict &&
+		refcount_read(&shmem->pages_use_count) &&
+		!refcount_read(&shmem->pages_pin_count) &&
 		!shmem->base.dma_buf && !shmem->base.import_attach;
 }
 
+int drm_gem_shmem_swapin_locked(struct drm_gem_shmem_object *shmem);
+
+void drm_gem_shmem_evict_locked(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
 
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
@@ -273,6 +288,53 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v
 	return drm_gem_shmem_mmap(shmem, vma);
 }
 
+/**
+ * drm_gem_shmem_object_madvise - unlocked GEM object function for drm_gem_shmem_madvise_locked()
+ * @obj: GEM object
+ * @madv: Madvise value
+ *
+ * This function wraps drm_gem_shmem_madvise_locked(), providing an unlocked variant.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+static inline int drm_gem_shmem_object_madvise(struct drm_gem_object *obj, int madv)
+{
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	return drm_gem_shmem_madvise(shmem, madv);
+}
+
+/**
+ * struct drm_gem_shmem_shrinker - Memory shrinker of GEM shmem memory manager
+ */
+struct drm_gem_shmem_shrinker {
+	/** @base: Shrinker for purging shmem GEM objects */
+	struct shrinker base;
+
+	/** @lock: Protects @lru_* */
+	struct mutex lock;
+
+	/** @lru_pinned: List of pinned shmem GEM objects */
+	struct drm_gem_lru lru_pinned;
+
+	/** @lru_evictable: List of shmem GEM objects to be evicted */
+	struct drm_gem_lru lru_evictable;
+
+	/** @lru_evicted: List of evicted shmem GEM objects */
+	struct drm_gem_lru lru_evicted;
+};
+
+/**
+ * struct drm_gem_shmem - GEM shmem memory manager
+ */
+struct drm_gem_shmem {
+	/** @shrinker: GEM shmem shrinker */
+	struct drm_gem_shmem_shrinker shrinker;
+};
+
+int drmm_gem_shmem_init(struct drm_device *dev);
+
 /*
  * Driver ops
  */

From patchwork Sun Oct 29 23:01:59 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159441
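For readers unfamiliar with the kernel shrinker API, the sentinel handling in drm_gem_shmem_shrinker_count_objects() above can be modelled in plain userspace C. This is an illustrative sketch rather than kernel code: the SHRINK_STOP/SHRINK_EMPTY values mirror the definitions in include/linux/shrinker.h, and count_objects() is a hypothetical stand-in for the callback.

```c
#include <assert.h>

/*
 * Userspace model of the count_objects clamping rule: a count callback
 * reports SHRINK_EMPTY when the LRU holds nothing, and a real count must
 * never collide with the two reserved sentinel values.
 * Values mirror include/linux/shrinker.h.
 */
#define SHRINK_STOP  (~0UL)
#define SHRINK_EMPTY (~0UL - 1)

static unsigned long count_objects(unsigned long lru_count)
{
	/* huge counts are clamped just below the sentinels */
	if (lru_count >= SHRINK_EMPTY)
		return SHRINK_EMPTY - 1;

	/* zero means "LRU is empty", reported via the sentinel */
	return lru_count ? lru_count : SHRINK_EMPTY;
}
```

The same clamp appears almost verbatim in the patch: the callback never returns 0 (which would be ambiguous to the VM), and never accidentally returns SHRINK_STOP or SHRINK_EMPTY for a genuine count.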
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
 Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
 Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 20/26] drm/shmem-helper: Export drm_gem_shmem_get_pages_sgt_locked()
Date: Mon, 30 Oct 2023 02:01:59 +0300
Message-ID: <20231029230205.93277-21-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
Export drm_gem_shmem_get_pages_sgt_locked(), which will be used by the
virtio-gpu shrinker during the GEM swap-in operation performed under the
held reservation lock.

Reviewed-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 22 +++++++++++++++++++++-
 include/drm/drm_gem_shmem_helper.h     |  1 +
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 007521bea302..560ce565f376 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -872,12 +872,31 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
 
-static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object *shmem)
+/**
+ * drm_gem_shmem_get_pages_sgt_locked - Provide a scatter/gather table of pinned
+ *                                      pages for a shmem GEM object
+ * @shmem: shmem GEM object
+ *
+ * This is a locked version of @drm_gem_shmem_get_sg_table that exports a
+ * scatter/gather table suitable for PRIME usage by calling the standard
+ * DMA mapping API.
+ *
+ * Drivers must hold the GEM's reservation lock when using this function.
+ *
+ * Drivers that need to acquire a scatter/gather table for objects should call
+ * drm_gem_shmem_get_pages_sgt() instead.
+ *
+ * Returns:
+ * A pointer to the scatter/gather table of pinned pages or an error pointer on failure.
+ */
+struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	int ret;
 	struct sg_table *sgt;
 
+	dma_resv_assert_held(shmem->base.resv);
+
 	if (shmem->sgt)
 		return shmem->sgt;
 
@@ -901,6 +920,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 	kfree(sgt);
 	return ERR_PTR(ret);
 }
+EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt_locked);
 
 /**
  * drm_gem_shmem_get_pages_sgt - Pin pages, dma map them, and return a
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 3bb70616d095..6ac77c2082ed 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -149,6 +149,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
+struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object *shmem);
 
 void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 			      struct drm_printer *p, unsigned int indent);

From patchwork Sun Oct 29 23:02:00 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159442
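The locked/unlocked helper split formalized by the patch above (an exported _locked variant that asserts the caller already holds the lock, plus an unlocked wrapper that takes it) can be sketched in userspace. All names and types below are illustrative stand-ins: a pthread mutex plays the role of the dma-resv lock, and a plain flag emulates dma_resv_assert_held().

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Hypothetical object with a lazily created, cached table pointer. */
struct obj {
	pthread_mutex_t lock;
	int locked;	/* stand-in for a lockdep "is held" assertion */
	void *sgt;	/* cached table, created on first use */
};

/* _locked variant: demands that the caller holds the lock. */
static void *get_sgt_locked(struct obj *o)
{
	assert(o->locked);	/* mirrors dma_resv_assert_held() */

	if (!o->sgt)
		o->sgt = &o->sgt;	/* placeholder "allocation" */
	return o->sgt;
}

/* Unlocked wrapper: takes the lock, then defers to the locked variant. */
static void *get_sgt(struct obj *o)
{
	void *sgt;

	pthread_mutex_lock(&o->lock);
	o->locked = 1;
	sgt = get_sgt_locked(o);
	o->locked = 0;
	pthread_mutex_unlock(&o->lock);
	return sgt;
}
```

The design point is that exporting the _locked variant lets a caller that already holds the reservation lock (here, a shrinker in its swap-in path) reuse the helper without deadlocking on a recursive lock acquisition.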
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
 Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
 Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 21/26] drm/shmem-helper: Optimize unlocked get_pages_sgt()
Date: Mon, 30 Oct 2023 02:02:00 +0300
Message-ID: <20231029230205.93277-22-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
The SGT isn't refcounted. Once the SGT pointer has been obtained, it
remains the same for both the locked and unlocked get_pages_sgt().
Return the cached SGT directly, without taking a potentially expensive
lock.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 560ce565f376..6dd087f19ea3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -955,6 +955,9 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 	drm_WARN_ON(obj->dev, drm_gem_shmem_is_evictable(shmem));
 	drm_WARN_ON(obj->dev, drm_gem_shmem_is_purgeable(shmem));
 
+	if (shmem->sgt)
+		return shmem->sgt;
+
 	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ERR_PTR(ret);

From patchwork Sun Oct 29 23:02:01 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159443
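The optimization above is safe only because the cached pointer is immutable once published. A userspace sketch of this lock-free fast path over a slow path that allocates under the lock (illustrative names; a pthread mutex stands in for the dma-resv lock):

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical object whose table pointer, once set, never changes. */
struct obj {
	pthread_mutex_t lock;
	void *sgt;
};

static void *get_pages_sgt(struct obj *o)
{
	/* fast path: the pointer is immutable once set, so no lock needed */
	if (o->sgt)
		return o->sgt;

	/* slow path: allocate under the lock, re-checking the cache */
	pthread_mutex_lock(&o->lock);
	if (!o->sgt)
		o->sgt = &o->lock;	/* placeholder "allocation" */
	pthread_mutex_unlock(&o->lock);
	return o->sgt;
}
```

Note this is only a model of the control flow; the kernel code relies on the SGT living until the object is freed, so a reader that observes a non-NULL pointer can use it without further synchronization.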
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
 Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
 Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 22/26] drm/shmem-helper: Don't free refcounted GEM
Date: Mon, 30 Oct 2023 02:02:01 +0300
Message-ID: <20231029230205.93277-23-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
Don't free a refcounted shmem object, to prevent a use-after-free bug,
which is worse than a memory leak.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 6dd087f19ea3..4253c367dc07 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -203,9 +203,10 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 	if (obj->import_attach)
 		drm_prime_gem_destroy(obj, shmem->sgt);
 
-	drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count));
-	drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count));
-	drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
+	if (drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count)) ||
+	    drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count)) ||
+	    drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count)))
+		return;
 
 	drm_gem_object_release(obj);
 	kfree(shmem);

From patchwork Sun Oct 29 23:02:02 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159445
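The defensive pattern in the patch above — warn and bail out of the destructor instead of freeing a still-referenced object, trading a recoverable leak for an unrecoverable use-after-free — can be modelled in userspace like this (illustrative names, not the kernel code):

```c
/*
 * Userspace model of a guarded free: if any use count is still non-zero,
 * refuse to free and leak the object instead, since a leak is benign
 * compared to a use-after-free.
 */
struct obj {
	unsigned int vmap_use_count;
	unsigned int pages_use_count;
	unsigned int pages_pin_count;
	int freed;
};

/* returns 0 if freed, -1 if the object was still busy and was leaked */
static int obj_free(struct obj *o)
{
	/* mirrors the chained drm_WARN_ON() + early return in the patch */
	if (o->vmap_use_count || o->pages_use_count || o->pages_pin_count)
		return -1;

	o->freed = 1;	/* stand-in for drm_gem_object_release() + kfree() */
	return 0;
}
```

In the real helper each drm_WARN_ON() also emits a backtrace, so the leak is loud enough to be found and fixed rather than silently tolerated.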
(109-252-153-31.dynamic.spd-mgts.ru [109.252.153.31]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id C9B4D66073B5; Sun, 29 Oct 2023 23:02:59 +0000 (GMT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1698620581; bh=9VDunboaNrQ0+alNzN87bJVPZTdu5GVJOSm1S5JwyL8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=bhh/+dHtFNOZ+FbvkdBKZRTxqXNMGYB7HkZwXtTbvcOiloCDHK7FbKeKTYwRCJC5S DQfQ6yHaC9lXwa6bGT4FhqyI+LQWsa/O+PeOu/rbg1E3HdbDE8J8yafWKuXulU0kWV lX6AMEVMrKRxlXq9yfrIL3l7PN4T7kmFILpa8zRxz4Zq10QI1+v1jhWYOv75Vzq9Zd a0jrgWPsBfFpb8HeBjnl/JUhcssreG88iBdoAfi7WZalN1kVkdNi2HjdEAFEadTAXn J6TnaCgG+AR2jrUVVoF4lQkvcx/uRJ9M4ZH31cipraN5MwmtG+wwvXknrCeY6sZr7w YSqW/hqKf40kQ== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?utf-8?q?Christian_K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v18 23/26] drm/virtio: Pin display framebuffer BO Date: Mon, 30 Oct 2023 02:02:02 +0300 Message-ID: <20231029230205.93277-24-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com> References: <20231029230205.93277-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 X-Spam-Status: No, score=-0.8 required=5.0 tests=DKIM_SIGNED,DKIM_VALID, DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on pete.vger.email Precedence: bulk List-ID: 
Prepare for the addition of memory shrinker support by pinning the display
framebuffer BO pages in memory while they are in use by the display on the
host. The shrinker is free to relocate framebuffer BO pages if it doesn't
know that the pages are in use, so pin the pages to prevent the shrinker
from moving them.

Acked-by: Gerd Hoffmann
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   |  2 ++
 drivers/gpu/drm/virtio/virtgpu_gem.c   | 19 +++++++++++++++++++
 drivers/gpu/drm/virtio/virtgpu_plane.c | 17 +++++++++++++++--
 3 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 96365a772f77..56269814fb6d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -313,6 +313,8 @@ void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs);
 void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev,
                                        struct virtio_gpu_object_array *objs);
 void virtio_gpu_array_put_free_work(struct work_struct *work);
+int virtio_gpu_gem_pin(struct virtio_gpu_object *bo);
+void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo);
 
 /* virtgpu_vq.c */
 int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev);
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 7db48d17ee3a..625c05d625bf 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -294,3 +294,22 @@ void virtio_gpu_array_put_free_work(struct work_struct *work)
 	}
 	spin_unlock(&vgdev->obj_free_lock);
 }
+
+int virtio_gpu_gem_pin(struct virtio_gpu_object *bo)
+{
+	int err;
+
+	if (virtio_gpu_is_shmem(bo)) {
+		err = drm_gem_shmem_pin(&bo->base);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo)
+{
+	if (virtio_gpu_is_shmem(bo))
+		drm_gem_shmem_unpin(&bo->base);
+}
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index a2e045f3a000..def57b01a826 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -238,20 +238,28 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_framebuffer *vgfb;
 	struct virtio_gpu_object *bo;
+	int err;
 
 	if (!new_state->fb)
 		return 0;
 
 	vgfb = to_virtio_gpu_framebuffer(new_state->fb);
 	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
-	if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob))
+
+	err = virtio_gpu_gem_pin(bo);
+	if (err)
+		return err;
+
+	if (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob)
 		return 0;
 
 	if (bo->dumb && (plane->state->fb != new_state->fb)) {
 		vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
 						     0);
-		if (!vgfb->fence)
+		if (!vgfb->fence) {
+			virtio_gpu_gem_unpin(bo);
 			return -ENOMEM;
+		}
 	}
 
 	return 0;
@@ -261,15 +269,20 @@ static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
 					struct drm_plane_state *state)
 {
 	struct virtio_gpu_framebuffer *vgfb;
+	struct virtio_gpu_object *bo;
 
 	if (!state->fb)
 		return;
 
 	vgfb = to_virtio_gpu_framebuffer(state->fb);
+	bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
+
 	if (vgfb->fence) {
 		dma_fence_put(&vgfb->fence->f);
 		vgfb->fence = NULL;
 	}
+
+	virtio_gpu_gem_unpin(bo);
 }
 
 static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,

From patchwork Sun Oct 29 23:02:03 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159444
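The patch above pairs a pin in prepare_fb with an unpin in cleanup_fb, and unpins on the fence-allocation error path so the pin count stays balanced. A minimal userspace C sketch of that invariant follows; the `fake_bo` type and all helpers are hypothetical stand-ins, not driver code:

```c
#include <assert.h>

/* Hypothetical stand-in for a shmem BO with a pin refcount. */
struct fake_bo {
	int pin_count;   /* pages must not be moved while > 0 */
	int alloc_fails; /* force the fence-allocation failure path */
};

static int fake_pin(struct fake_bo *bo)
{
	bo->pin_count++;
	return 0;
}

static void fake_unpin(struct fake_bo *bo)
{
	bo->pin_count--;
}

/* Models prepare_fb: pin first, drop the pin again on error. */
static int prepare_fb(struct fake_bo *bo)
{
	int err = fake_pin(bo);

	if (err)
		return err;

	if (bo->alloc_fails) {
		fake_unpin(bo); /* error path must release the pin it took */
		return -1;
	}

	return 0;
}

/* Models cleanup_fb: unconditionally drop the prepare-time pin. */
static void cleanup_fb(struct fake_bo *bo)
{
	fake_unpin(bo);
}

int run_pin_demo(void)
{
	struct fake_bo bo = { 0, 0 };

	/* success path: pin held between prepare and cleanup */
	assert(prepare_fb(&bo) == 0);
	assert(bo.pin_count == 1);
	cleanup_fb(&bo);
	assert(bo.pin_count == 0);

	/* failure path: pin count balanced after the error */
	bo.alloc_fails = 1;
	assert(prepare_fb(&bo) != 0);
	assert(bo.pin_count == 0);

	return bo.pin_count;
}
```

The point of the sketch is only the pairing discipline: every successful prepare holds exactly one pin that cleanup releases, and a failed prepare holds none.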
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 24/26] drm/virtio: Attach shmem BOs dynamically
Date: Mon, 30 Oct 2023 02:02:03 +0300
Message-ID: <20231029230205.93277-25-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
Prepare for the addition of memory shrinker support by attaching shmem pages
to the host dynamically on first use. Previously the attachment vq command
wasn't fenced and no vq kick was made in the BO creation code path, hence the
attachment was already happening dynamically, but implicitly. Making the
attachment explicitly dynamic will allow us to simplify and reuse more code
once the shrinker is added. virtio_gpu_object_shmem_init() now runs under the
held reservation lock, which will be important for the shrinker to avoid
moving pages while they are in active use by the driver.

Acked-by: Gerd Hoffmann
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  7 +++
 drivers/gpu/drm/virtio/virtgpu_gem.c    | 26 +++++++++
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  | 32 +++++++----
 drivers/gpu/drm/virtio/virtgpu_object.c | 73 ++++++++++++++++++++-----
 drivers/gpu/drm/virtio/virtgpu_submit.c | 15 ++++-
 5 files changed, 125 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 56269814fb6d..421f524ae1de 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -89,6 +89,7 @@ struct virtio_gpu_object {
 	uint32_t hw_res_handle;
 	bool dumb;
 	bool created;
+	bool detached;
 	bool host3d_blob, guest_blob;
 	uint32_t blob_mem, blob_flags;
 
@@ -313,6 +314,8 @@ void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs);
 void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev,
                                        struct virtio_gpu_object_array *objs);
 void virtio_gpu_array_put_free_work(struct work_struct *work);
+int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev,
+                             struct virtio_gpu_object_array *objs);
 int virtio_gpu_gem_pin(struct virtio_gpu_object *bo);
 void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo);
 
@@ -453,6 +456,10 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 
 bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo);
 
+int virtio_gpu_reattach_shmem_object_locked(struct virtio_gpu_object *bo);
+
+int virtio_gpu_reattach_shmem_object(struct virtio_gpu_object *bo);
+
 int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
                                uint32_t *resid);
 
 /* virtgpu_prime.c */
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 625c05d625bf..97e67064c97e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -295,6 +295,26 @@ void virtio_gpu_array_put_free_work(struct work_struct *work)
 	spin_unlock(&vgdev->obj_free_lock);
 }
 
+int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev,
+                             struct virtio_gpu_object_array *objs)
+{
+	struct virtio_gpu_object *bo;
+	int ret = 0;
+	u32 i;
+
+	for (i = 0; i < objs->nents; i++) {
+		bo = gem_to_virtio_gpu_obj(objs->objs[i]);
+
+		if (virtio_gpu_is_shmem(bo) && bo->detached) {
+			ret = virtio_gpu_reattach_shmem_object_locked(bo);
+			if (ret)
+				break;
+		}
+	}
+
+	return ret;
+}
+
 int virtio_gpu_gem_pin(struct virtio_gpu_object *bo)
 {
 	int err;
@@ -303,6 +323,12 @@ int virtio_gpu_gem_pin(struct virtio_gpu_object *bo)
 		err = drm_gem_shmem_pin(&bo->base);
 		if (err)
 			return err;
+
+		err = virtio_gpu_reattach_shmem_object(bo);
+		if (err) {
+			drm_gem_shmem_unpin(&bo->base);
+			return err;
+		}
 	}
 
 	return 0;
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index b24b11f25197..070c29cea26a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -246,6 +246,10 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
 	if (ret != 0)
 		goto err_put_free;
 
+	ret = virtio_gpu_array_prepare(vgdev, objs);
+	if (ret)
+		goto err_unlock;
+
 	fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);
 	if (!fence) {
 		ret = -ENOMEM;
@@ -288,11 +292,25 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
 		goto err_put_free;
 	}
 
+	ret = virtio_gpu_array_lock_resv(objs);
+	if (ret != 0)
+		goto err_put_free;
+
+	ret = virtio_gpu_array_prepare(vgdev, objs);
+	if (ret)
+		goto err_unlock;
+
+	fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);
+	if (!fence) {
+		ret = -ENOMEM;
+		goto err_unlock;
+	}
+
 	if (!vgdev->has_virgl_3d) {
 		virtio_gpu_cmd_transfer_to_host_2d
 			(vgdev, offset,
 			 args->box.w, args->box.h, args->box.x, args->box.y,
-			 objs, NULL);
+			 objs, fence);
 	} else {
 		virtio_gpu_create_context(dev, file);
@@ -301,23 +319,13 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
 			goto err_put_free;
 		}
 
-		ret = virtio_gpu_array_lock_resv(objs);
-		if (ret != 0)
-			goto err_put_free;
-
-		ret = -ENOMEM;
-		fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
-					       0);
-		if (!fence)
-			goto err_unlock;
-
 		virtio_gpu_cmd_transfer_to_host_3d
 			(vgdev,
 			 vfpriv ? vfpriv->ctx_id : 0, offset, args->level,
 			 args->stride, args->layer_stride, &args->box, objs,
 			 fence);
-		dma_fence_put(&fence->f);
 	}
+	dma_fence_put(&fence->f);
 	virtio_gpu_notify(vgdev);
 	return 0;
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 998f8b05ceb1..000bb7955a57 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -143,7 +143,7 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 	struct sg_table *pages;
 	int si;
 
-	pages = drm_gem_shmem_get_pages_sgt(&bo->base);
+	pages = drm_gem_shmem_get_pages_sgt_locked(&bo->base);
 	if (IS_ERR(pages))
 		return PTR_ERR(pages);
 
@@ -177,6 +177,40 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 	return 0;
 }
 
+int virtio_gpu_reattach_shmem_object_locked(struct virtio_gpu_object *bo)
+{
+	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
+	struct virtio_gpu_mem_entry *ents;
+	unsigned int nents;
+	int err;
+
+	if (!bo->detached)
+		return 0;
+
+	err = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
+	if (err)
+		return err;
+
+	virtio_gpu_object_attach(vgdev, bo, ents, nents);
+
+	bo->detached = false;
+
+	return 0;
+}
+
+int virtio_gpu_reattach_shmem_object(struct virtio_gpu_object *bo)
+{
+	int ret;
+
+	ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL);
+	if (ret)
+		return ret;
+
+	ret = virtio_gpu_reattach_shmem_object_locked(bo);
+	dma_resv_unlock(bo->base.base.resv);
+
+	return ret;
+}
+
 int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 			     struct virtio_gpu_object_params *params,
 			     struct virtio_gpu_object **bo_ptr,
@@ -207,45 +241,56 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 
 	bo->dumb = params->dumb;
 
-	ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
-	if (ret != 0)
-		goto err_put_id;
+	if (bo->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
+		bo->guest_blob = true;
 
 	if (fence) {
 		ret = -ENOMEM;
 		objs = virtio_gpu_array_alloc(1);
 		if (!objs)
-			goto err_free_entry;
+			goto err_put_id;
 		virtio_gpu_array_add_obj(objs, &bo->base.base);
 
 		ret = virtio_gpu_array_lock_resv(objs);
 		if (ret != 0)
 			goto err_put_objs;
+	} else {
+		ret = dma_resv_lock(bo->base.base.resv, NULL);
+		if (ret)
+			goto err_put_id;
 	}
 
 	if (params->blob) {
-		if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
-			bo->guest_blob = true;
+		ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
+		if (ret)
+			goto err_unlock_objs;
+	} else {
+		bo->detached = true;
+	}
 
+	if (params->blob)
 		virtio_gpu_cmd_resource_create_blob(vgdev, bo, params,
 						    ents, nents);
-	} else if (params->virgl) {
+	else if (params->virgl)
 		virtio_gpu_cmd_resource_create_3d(vgdev, bo, params,
 						  objs, fence);
-		virtio_gpu_object_attach(vgdev, bo, ents, nents);
-	} else {
+	else
 		virtio_gpu_cmd_create_resource(vgdev, bo, params,
 					       objs, fence);
-		virtio_gpu_object_attach(vgdev, bo, ents, nents);
-	}
+
+	if (!fence)
+		dma_resv_unlock(bo->base.base.resv);
 
 	*bo_ptr = bo;
 	return 0;
 
+err_unlock_objs:
+	if (fence)
+		virtio_gpu_array_unlock_resv(objs);
+	else
+		dma_resv_unlock(bo->base.base.resv);
 err_put_objs:
 	virtio_gpu_array_put_free(objs);
-err_free_entry:
-	kvfree(ents);
 err_put_id:
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
 err_put_pages:
diff --git a/drivers/gpu/drm/virtio/virtgpu_submit.c b/drivers/gpu/drm/virtio/virtgpu_submit.c
index 5c514946bbad..6e4ef2593e8f 100644
--- a/drivers/gpu/drm/virtio/virtgpu_submit.c
+++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
@@ -464,8 +464,19 @@ static void virtio_gpu_install_out_fence_fd(struct virtio_gpu_submit *submit)
 
 static int virtio_gpu_lock_buflist(struct virtio_gpu_submit *submit)
 {
-	if (submit->buflist)
-		return virtio_gpu_array_lock_resv(submit->buflist);
+	int err;
+
+	if (submit->buflist) {
+		err = virtio_gpu_array_lock_resv(submit->buflist);
+		if (err)
+			return err;
+
+		err = virtio_gpu_array_prepare(submit->vgdev, submit->buflist);
+		if (err) {
+			virtio_gpu_array_unlock_resv(submit->buflist);
+			return err;
+		}
+	}
 
 	return 0;
 }

From patchwork Sun Oct 29 23:02:04 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159451
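The dynamic-attach scheme above boils down to a "detached" flag per BO: every code path that hands a BO to the host first walks the object array and reattaches any detached BO, and a reattach of an already-attached BO is a no-op. A minimal userspace C sketch of that behavior (all names here are hypothetical models, not driver code):

```c
#include <assert.h>

/* Hypothetical model of a BO that is attached to the host lazily. */
struct fake_bo {
	int detached;     /* 1 = host has no pages for this BO yet */
	int attach_calls; /* how many attach commands were sent */
};

static void fake_object_attach(struct fake_bo *bo)
{
	bo->attach_calls++;
	bo->detached = 0;
}

/* Models reattach_shmem_object_locked(): no-op when already attached. */
static int reattach(struct fake_bo *bo)
{
	if (!bo->detached)
		return 0;

	fake_object_attach(bo);
	return 0;
}

/* Models array_prepare(): reattach every detached BO before use. */
static int array_prepare(struct fake_bo **bos, int nents)
{
	int i, ret;

	for (i = 0; i < nents; i++) {
		ret = reattach(bos[i]);
		if (ret)
			return ret;
	}

	return 0;
}

int run_attach_demo(void)
{
	struct fake_bo a = { 1, 0 }, b = { 1, 0 };
	struct fake_bo *list[2] = { &a, &b };

	/* first use attaches; repeated use does not resend the command */
	assert(array_prepare(list, 2) == 0);
	assert(array_prepare(list, 2) == 0);
	assert(a.attach_calls == 1 && b.attach_calls == 1);

	return a.attach_calls + b.attach_calls;
}
```

The sketch shows why the explicit flag simplifies the shrinker work: once eviction sets `detached` again, the very same prepare path transparently restores the host-side attachment on the next use.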
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 25/26] drm/virtio: Support shmem shrinking
Date: Mon, 30 Oct 2023 02:02:04 +0300
Message-ID: <20231029230205.93277-26-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
Support the generic drm-shmem memory shrinker and add a new madvise IOCTL to
the VirtIO-GPU driver. The BO cache manager of the Mesa driver will mark BOs
as "don't need" using the new IOCTL, letting the shrinker purge the marked
BOs on OOM; the shrinker will also evict unpurgeable shmem BOs from memory if
the guest supports a swap file or partition.

Acked-by: Gerd Hoffmann
Signed-off-by: Daniel Almeida
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_drv.h    | 13 +++++-
 drivers/gpu/drm/virtio/virtgpu_gem.c    | 35 ++++++++++++++
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  | 25 ++++++++++
 drivers/gpu/drm/virtio/virtgpu_kms.c    |  8 ++++
 drivers/gpu/drm/virtio/virtgpu_object.c | 61 +++++++++++++++++++++++++
 drivers/gpu/drm/virtio/virtgpu_vq.c     | 40 ++++++++++++++++
 include/uapi/drm/virtgpu_drm.h          | 14 ++++++
 7 files changed, 195 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 421f524ae1de..33a78b24c272 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -278,7 +278,7 @@ struct virtio_gpu_fpriv {
 };
 
 /* virtgpu_ioctl.c */
-#define DRM_VIRTIO_NUM_IOCTLS 12
+#define DRM_VIRTIO_NUM_IOCTLS 13
 extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
 
 void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file);
@@ -316,6 +316,8 @@ void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev,
 void virtio_gpu_array_put_free_work(struct work_struct *work);
 int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev,
                              struct virtio_gpu_object_array *objs);
+int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo);
+int virtio_gpu_gem_madvise(struct virtio_gpu_object *obj, int madv);
 int virtio_gpu_gem_pin(struct virtio_gpu_object *bo);
 void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo);
 
@@ -329,6 +331,8 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
 				    struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
 				   struct virtio_gpu_object *bo);
+int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev,
+				    struct virtio_gpu_object *bo);
 void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 					uint64_t offset,
 					uint32_t width, uint32_t height,
@@ -349,6 +353,9 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
 			      struct virtio_gpu_object *obj,
 			      struct virtio_gpu_mem_entry *ents,
 			      unsigned int nents);
+void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
+			      struct virtio_gpu_object *obj,
+			      struct virtio_gpu_fence *fence);
 void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
 			    struct virtio_gpu_output *output);
 int virtio_gpu_cmd_get_display_info(struct virtio_gpu_device *vgdev);
@@ -492,4 +499,8 @@ void virtio_gpu_vram_unmap_dma_buf(struct device *dev,
 int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 				struct drm_file *file);
 
+/* virtgpu_gem_shrinker.c */
+int virtio_gpu_gem_shrinker_init(struct virtio_gpu_device *vgdev);
+void virtio_gpu_gem_shrinker_fini(struct virtio_gpu_device *vgdev);
+
 #endif
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 97e67064c97e..748f7bbb0e6d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -147,10 +147,20 @@ void virtio_gpu_gem_object_close(struct drm_gem_object *obj,
 	struct virtio_gpu_device *vgdev = obj->dev->dev_private;
 	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
 	struct virtio_gpu_object_array *objs;
+	struct virtio_gpu_object *bo;
 
 	if (!vgdev->has_virgl_3d)
 		return;
 
+	bo = gem_to_virtio_gpu_obj(obj);
+
+	/*
+	 * Purged BO was already detached and released, the resource ID
+	 * is invalid by now.
+	 */
+	if (!virtio_gpu_gem_madvise(bo, VIRTGPU_MADV_WILLNEED))
+		return;
+
 	objs = virtio_gpu_array_alloc(1);
 	if (!objs)
 		return;
@@ -315,6 +325,31 @@ int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev,
 	return ret;
 }
 
+int virtio_gpu_gem_madvise(struct virtio_gpu_object *bo, int madv)
+{
+	if (virtio_gpu_is_shmem(bo))
+		return drm_gem_shmem_object_madvise(&bo->base.base, madv);
+
+	return 1;
+}
+
+int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo)
+{
+	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
+	int err;
+
+	if (bo->created) {
+		err = virtio_gpu_cmd_release_resource(vgdev, bo);
+		if (err)
+			return err;
+
+		virtio_gpu_notify(vgdev);
+		bo->created = false;
+	}
+
+	return 0;
+}
+
 int virtio_gpu_gem_pin(struct virtio_gpu_object *bo)
 {
 	int err;
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 070c29cea26a..44a99166efdc 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -676,6 +676,28 @@ static int virtio_gpu_context_init_ioctl(struct drm_device *dev,
 	return ret;
 }
 
+static int virtio_gpu_madvise_ioctl(struct drm_device *dev,
+				    void *data,
+				    struct drm_file *file)
+{
+	struct drm_virtgpu_madvise *args = data;
+	struct virtio_gpu_object *bo;
+	struct drm_gem_object *obj;
+
+	if (args->madv > VIRTGPU_MADV_DONTNEED)
+		return -EOPNOTSUPP;
+
+	obj = drm_gem_object_lookup(file, args->bo_handle);
+	if (!obj)
+		return -ENOENT;
+
+	bo = gem_to_virtio_gpu_obj(obj);
+
+	args->retained = virtio_gpu_gem_madvise(bo, args->madv);
+
+	drm_gem_object_put(obj);
+
+	return 0;
+}
+
 struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = {
 	DRM_IOCTL_DEF_DRV(VIRTGPU_MAP, virtio_gpu_map_ioctl,
 			  DRM_RENDER_ALLOW),
@@ -715,4 +737,7 @@ struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = {
 
 	DRM_IOCTL_DEF_DRV(VIRTGPU_CONTEXT_INIT, virtio_gpu_context_init_ioctl,
 			  DRM_RENDER_ALLOW),
+
+	DRM_IOCTL_DEF_DRV(VIRTGPU_MADVISE, virtio_gpu_madvise_ioctl,
+			  DRM_RENDER_ALLOW),
 };
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 5a3b5aaed1f3..43e237082cec 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -245,6 +245,12 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
 		goto err_scanouts;
 	}
 
+	ret = drmm_gem_shmem_init(dev);
+	if (ret) {
+		DRM_ERROR("shmem init failed\n");
+		goto err_modeset;
+	}
+
 	virtio_device_ready(vgdev->vdev);
 
 	if (num_capsets)
@@ -259,6 +265,8 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
 	}
 	return 0;
 
+err_modeset:
+	virtio_gpu_modeset_fini(vgdev);
 err_scanouts:
 	virtio_gpu_free_vbufs(vgdev);
 err_vbufs:
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 000bb7955a57..8fa5f912ae51 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -98,6 +98,60 @@ static void virtio_gpu_free_object(struct drm_gem_object *obj)
 	virtio_gpu_cleanup_object(bo);
 }
 
+static int virtio_gpu_detach_object_fenced(struct virtio_gpu_object *bo)
+{
+	struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
+	struct virtio_gpu_fence *fence;
+
+	if (bo->detached)
+		return 0;
+
+	fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);
+	if (!fence)
+		return -ENOMEM;
+
+	virtio_gpu_object_detach(vgdev, bo, fence);
+	virtio_gpu_notify(vgdev);
+
+	dma_fence_wait(&fence->f, false);
+	dma_fence_put(&fence->f);
+
+	bo->detached = true;
+
+	return 0;
+}
+
+static int virtio_gpu_shmem_evict(struct drm_gem_object *obj)
+{
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	int err;
+
+	/* blob is not movable, it's impossible to detach it from host */
+	if (bo->blob_mem)
+		return -EBUSY;
+
+	/*
+	 * At first tell host to stop using guest's memory to ensure that
+	 * host won't touch the released guest's memory once it's gone.
+	 */
+	err = virtio_gpu_detach_object_fenced(bo);
+	if (err)
+		return err;
+
+	if (drm_gem_shmem_is_purgeable(&bo->base)) {
+		err = virtio_gpu_gem_host_mem_release(bo);
+		if (err)
+			return err;
+
+		drm_gem_shmem_purge_locked(&bo->base);
+	} else {
+		bo->base.pages_mark_dirty_on_put = 1;
+		drm_gem_shmem_evict_locked(&bo->base);
+	}
+
+	return 0;
+}
+
 static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = {
 	.free = virtio_gpu_free_object,
 	.open = virtio_gpu_gem_object_open,
@@ -111,6 +165,7 @@ static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = {
 	.vunmap = drm_gem_shmem_object_vunmap_locked,
 	.mmap = drm_gem_shmem_object_mmap,
 	.vm_ops = &drm_gem_shmem_vm_ops,
+	.evict = virtio_gpu_shmem_evict,
 };
 
 bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo)
@@ -187,6 +242,10 @@ int virtio_gpu_reattach_shmem_object_locked(struct virtio_gpu_object *bo)
 	if (!bo->detached)
 		return 0;
 
+	err = drm_gem_shmem_swapin_locked(&bo->base);
+	if (err)
+		return err;
+
 	err = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
 	if (err)
 		return err;
@@ -240,6 +299,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 		goto err_put_pages;
 
 	bo->dumb = params->dumb;
+	bo->blob_mem = params->blob_mem;
+	bo->blob_flags = params->blob_flags;
 
 	if (bo->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
 		bo->guest_blob = true;
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index b1a00c0c25a7..14ab470f413a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -545,6 +545,21 @@ void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
 	virtio_gpu_cleanup_object(bo);
 }
 
+int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev,
+				    struct virtio_gpu_object *bo)
+{
+	struct virtio_gpu_resource_unref *cmd_p;
+	struct virtio_gpu_vbuffer *vbuf;
+
+	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
+	memset(cmd_p, 0, sizeof(*cmd_p));
+
cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_UNREF); + cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); + + return virtio_gpu_queue_ctrl_buffer(vgdev, vbuf); +} + void virtio_gpu_cmd_set_scanout(struct virtio_gpu_device *vgdev, uint32_t scanout_id, uint32_t resource_id, uint32_t width, uint32_t height, @@ -645,6 +660,23 @@ virtio_gpu_cmd_resource_attach_backing(struct virtio_gpu_device *vgdev, virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence); } +static void +virtio_gpu_cmd_resource_detach_backing(struct virtio_gpu_device *vgdev, + u32 resource_id, + struct virtio_gpu_fence *fence) +{ + struct virtio_gpu_resource_attach_backing *cmd_p; + struct virtio_gpu_vbuffer *vbuf; + + cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); + memset(cmd_p, 0, sizeof(*cmd_p)); + + cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING); + cmd_p->resource_id = cpu_to_le32(resource_id); + + virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence); +} + static void virtio_gpu_cmd_get_display_info_cb(struct virtio_gpu_device *vgdev, struct virtio_gpu_vbuffer *vbuf) { @@ -1107,6 +1139,14 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev, ents, nents, NULL); } +void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *obj, + struct virtio_gpu_fence *fence) +{ + virtio_gpu_cmd_resource_detach_backing(vgdev, obj->hw_res_handle, + fence); +} + void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev, struct virtio_gpu_output *output) { diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h index b1d0e56565bc..4caba71b2740 100644 --- a/include/uapi/drm/virtgpu_drm.h +++ b/include/uapi/drm/virtgpu_drm.h @@ -48,6 +48,7 @@ extern "C" { #define DRM_VIRTGPU_GET_CAPS 0x09 #define DRM_VIRTGPU_RESOURCE_CREATE_BLOB 0x0a #define DRM_VIRTGPU_CONTEXT_INIT 0x0b +#define DRM_VIRTGPU_MADVISE 0x0c #define VIRTGPU_EXECBUF_FENCE_FD_IN 0x01 #define VIRTGPU_EXECBUF_FENCE_FD_OUT 0x02 @@ -211,6 +212,15 
 	__u64 ctx_set_params;
 };
 
+#define VIRTGPU_MADV_WILLNEED 0
+#define VIRTGPU_MADV_DONTNEED 1
+struct drm_virtgpu_madvise {
+	__u32 bo_handle;
+	__u32 retained; /* out, non-zero if BO can be used */
+	__u32 madv;
+	__u32 pad;
+};
+
 /*
  * Event code that's given when VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK is in
  * effect.  The event size is sizeof(drm_event), since there is no additional
@@ -261,6 +271,10 @@ struct drm_virtgpu_context_init {
 	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_CONTEXT_INIT,		\
 		struct drm_virtgpu_context_init)
 
+#define DRM_IOCTL_VIRTGPU_MADVISE \
+	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MADVISE, \
+		struct drm_virtgpu_madvise)
+
 #if defined(__cplusplus)
 }
 #endif

From patchwork Sun Oct 29 23:02:05 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159450
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 26/26] drm/panfrost: Switch to generic memory shrinker
Date: Mon, 30 Oct 2023 02:02:05 +0300
Message-ID: <20231029230205.93277-27-dmitry.osipenko@collabora.com>
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>

Replace Panfrost's custom memory shrinker with a common drm-shmem
memory shrinker.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/panfrost/Makefile             |   1 -
 drivers/gpu/drm/panfrost/panfrost_device.h    |   4 -
 drivers/gpu/drm/panfrost/panfrost_drv.c       |  27 ++--
 drivers/gpu/drm/panfrost/panfrost_gem.c       |  34 +++--
 drivers/gpu/drm/panfrost/panfrost_gem.h       |   9 --
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  | 129 ------------------
 drivers/gpu/drm/panfrost/panfrost_job.c       |  18 ++-
 drivers/gpu/drm/panfrost/panfrost_mmu.c       |  18 ++-
 include/drm/drm_gem_shmem_helper.h            |   7 -
 9 files changed, 66 insertions(+), 181 deletions(-)
 delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c

diff --git a/drivers/gpu/drm/panfrost/Makefile b/drivers/gpu/drm/panfrost/Makefile
index 2c01c1e7523e..f2cb1ab0a32d 100644
--- a/drivers/gpu/drm/panfrost/Makefile
+++ b/drivers/gpu/drm/panfrost/Makefile
@@ -5,7 +5,6 @@ panfrost-y := \
 	panfrost_device.o \
 	panfrost_devfreq.o \
 	panfrost_gem.o \
-	panfrost_gem_shrinker.o \
 	panfrost_gpu.o \
 	panfrost_job.o \
 	panfrost_mmu.o \
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
index 1e85656dc2f7..2b24a0d4f85e 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.h
+++ b/drivers/gpu/drm/panfrost/panfrost_device.h
@@ -117,10 +117,6 @@ struct panfrost_device {
 		atomic_t pending;
 	} reset;
 
-	struct mutex shrinker_lock;
-	struct list_head shrinker_list;
-	struct shrinker shrinker;
-
 	struct panfrost_devfreq pfdevfreq;
 
 	struct {
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 7f2aba96d5b9..ef520d2cc1d2 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -171,7 +171,6 @@ panfrost_lookup_bos(struct drm_device *dev,
 			break;
 		}
 
-		atomic_inc(&bo->gpu_usecount);
 		job->mappings[i] = mapping;
 	}
@@ -397,7 +396,6 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 {
 	struct panfrost_file_priv *priv = file_priv->driver_priv;
 	struct drm_panfrost_madvise *args = data;
-	struct panfrost_device *pfdev = dev->dev_private;
 	struct drm_gem_object *gem_obj;
 	struct panfrost_gem_object *bo;
 	int ret = 0;
@@ -410,11 +408,15 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 
 	bo = to_panfrost_bo(gem_obj);
 
+	if (bo->is_heap) {
+		args->retained = 1;
+		goto out_put_object;
+	}
+
 	ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL);
 	if (ret)
 		goto out_put_object;
 
-	mutex_lock(&pfdev->shrinker_lock);
 	mutex_lock(&bo->mappings.lock);
 	if (args->madv == PANFROST_MADV_DONTNEED) {
 		struct panfrost_gem_mapping *first;
@@ -440,17 +442,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 
 	args->retained = drm_gem_shmem_madvise_locked(&bo->base, args->madv);
 
-	if (args->retained) {
-		if (args->madv == PANFROST_MADV_DONTNEED)
-			list_move_tail(&bo->base.madv_list,
-				       &pfdev->shrinker_list);
-		else if (args->madv == PANFROST_MADV_WILLNEED)
-			list_del_init(&bo->base.madv_list);
-	}
-
 out_unlock_mappings:
 	mutex_unlock(&bo->mappings.lock);
-	mutex_unlock(&pfdev->shrinker_lock);
 	dma_resv_unlock(bo->base.base.resv);
 out_put_object:
 	drm_gem_object_put(gem_obj);
@@ -635,9 +628,6 @@ static int panfrost_probe(struct platform_device *pdev)
 	ddev->dev_private = pfdev;
 	pfdev->ddev = ddev;
 
-	mutex_init(&pfdev->shrinker_lock);
-	INIT_LIST_HEAD(&pfdev->shrinker_list);
-
 	err = panfrost_device_init(pfdev);
 	if (err) {
 		if (err != -EPROBE_DEFER)
@@ -659,10 +649,14 @@ static int panfrost_probe(struct platform_device *pdev)
 	if (err < 0)
 		goto err_out1;
 
-	panfrost_gem_shrinker_init(ddev);
+	err = drmm_gem_shmem_init(ddev);
+	if (err < 0)
+		goto err_out2;
 
 	return 0;
 
+err_out2:
+	drm_dev_unregister(ddev);
 err_out1:
 	pm_runtime_disable(pfdev->dev);
 	panfrost_device_fini(pfdev);
@@ -678,7 +672,6 @@ static void panfrost_remove(struct platform_device *pdev)
 	struct drm_device *ddev = pfdev->ddev;
 
 	drm_dev_unregister(ddev);
-	panfrost_gem_shrinker_cleanup(ddev);
 
 	pm_runtime_get_sync(pfdev->dev);
 	pm_runtime_disable(pfdev->dev);
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index bb9d43cf7c3c..a6128e32f303 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -19,16 +19,6 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
 	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
 	struct panfrost_device *pfdev = obj->dev->dev_private;
 
-	/*
-	 * Make sure the BO is no longer inserted in the shrinker list before
-	 * taking care of the destruction itself. If we don't do that we have a
-	 * race condition between this function and what's done in
-	 * panfrost_gem_shrinker_scan().
-	 */
-	mutex_lock(&pfdev->shrinker_lock);
-	list_del_init(&bo->base.madv_list);
-	mutex_unlock(&pfdev->shrinker_lock);
-
 	/*
 	 * If we still have mappings attached to the BO, there's a problem in
 	 * our refcounting.
@@ -94,7 +84,11 @@ static void panfrost_gem_mapping_release(struct kref *kref)
 
 	mapping = container_of(kref, struct panfrost_gem_mapping, refcount);
 
+	/* shrinker that may purge mapping at the same time */
+	dma_resv_lock(mapping->obj->base.base.resv, NULL);
 	panfrost_gem_teardown_mapping(mapping);
+	dma_resv_unlock(mapping->obj->base.base.resv);
+
 	drm_gem_object_put(&mapping->obj->base.base);
 	panfrost_mmu_ctx_put(mapping->mmu);
 	kfree(mapping);
@@ -228,6 +222,25 @@ static size_t panfrost_gem_rss(struct drm_gem_object *obj)
 	return 0;
 }
 
+static int panfrost_shmem_evict(struct drm_gem_object *obj)
+{
+	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
+
+	if (!drm_gem_shmem_is_purgeable(&bo->base))
+		return -EBUSY;
+
+	if (!mutex_trylock(&bo->mappings.lock))
+		return -EBUSY;
+
+	panfrost_gem_teardown_mappings_locked(bo);
+
+	drm_gem_shmem_purge_locked(&bo->base);
+
+	mutex_unlock(&bo->mappings.lock);
+
+	return 0;
+}
+
 static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.free = panfrost_gem_free_object,
 	.open = panfrost_gem_open,
@@ -242,6 +255,7 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.status = panfrost_gem_status,
 	.rss = panfrost_gem_rss,
 	.vm_ops = &drm_gem_shmem_vm_ops,
+	.evict = panfrost_shmem_evict,
 };
 
 /**
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
index 13c0a8149c3a..8ddc2d310d29 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
@@ -30,12 +30,6 @@ struct panfrost_gem_object {
 		struct mutex lock;
 	} mappings;
 
-	/*
-	 * Count the number of jobs referencing this BO so we don't let the
-	 * shrinker reclaim this object prematurely.
-	 */
-	atomic_t gpu_usecount;
-
 	/*
 	 * Object chunk size currently mapped onto physical memory
 	 */
@@ -86,7 +80,4 @@ panfrost_gem_mapping_get(struct panfrost_gem_object *bo,
 void panfrost_gem_mapping_put(struct panfrost_gem_mapping *mapping);
 void panfrost_gem_teardown_mappings_locked(struct panfrost_gem_object *bo);
 
-void panfrost_gem_shrinker_init(struct drm_device *dev);
-void panfrost_gem_shrinker_cleanup(struct drm_device *dev);
-
 #endif /* __PANFROST_GEM_H__ */
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
deleted file mode 100644
index 1aa94fff7072..000000000000
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ /dev/null
@@ -1,129 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright (C) 2019 Arm Ltd.
- *
- * Based on msm_gem_freedreno.c:
- * Copyright (C) 2016 Red Hat
- * Author: Rob Clark
- */
-
-#include
-
-#include
-#include
-
-#include "panfrost_device.h"
-#include "panfrost_gem.h"
-#include "panfrost_mmu.h"
-
-static bool panfrost_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
-{
-	return (shmem->madv > 0) &&
-		!refcount_read(&shmem->pages_pin_count) && shmem->sgt &&
-		!shmem->base.dma_buf && !shmem->base.import_attach;
-}
-
-static unsigned long
-panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
-{
-	struct panfrost_device *pfdev =
-		container_of(shrinker, struct panfrost_device, shrinker);
-	struct drm_gem_shmem_object *shmem;
-	unsigned long count = 0;
-
-	if (!mutex_trylock(&pfdev->shrinker_lock))
-		return 0;
-
-	list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) {
-		if (panfrost_gem_shmem_is_purgeable(shmem))
-			count += shmem->base.size >> PAGE_SHIFT;
-	}
-
-	mutex_unlock(&pfdev->shrinker_lock);
-
-	return count;
-}
-
-static bool panfrost_gem_purge(struct drm_gem_object *obj)
-{
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
-	bool ret = false;
-
-	if (atomic_read(&bo->gpu_usecount))
-		return false;
-
-	if (!mutex_trylock(&bo->mappings.lock))
-		return false;
-
-	if (!dma_resv_trylock(shmem->base.resv))
-		goto unlock_mappings;
-
-	panfrost_gem_teardown_mappings_locked(bo);
-	drm_gem_shmem_purge_locked(&bo->base);
-	ret = true;
-
-	dma_resv_unlock(shmem->base.resv);
-
-unlock_mappings:
-	mutex_unlock(&bo->mappings.lock);
-	return ret;
-}
-
-static unsigned long
-panfrost_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
-{
-	struct panfrost_device *pfdev =
-		container_of(shrinker, struct panfrost_device, shrinker);
-	struct drm_gem_shmem_object *shmem, *tmp;
-	unsigned long freed = 0;
-
-	if (!mutex_trylock(&pfdev->shrinker_lock))
-		return SHRINK_STOP;
-
-	list_for_each_entry_safe(shmem, tmp, &pfdev->shrinker_list, madv_list) {
-		if (freed >= sc->nr_to_scan)
-			break;
-		if (drm_gem_shmem_is_purgeable(shmem) &&
-		    panfrost_gem_purge(&shmem->base)) {
-			freed += shmem->base.size >> PAGE_SHIFT;
-			list_del_init(&shmem->madv_list);
-		}
-	}
-
-	mutex_unlock(&pfdev->shrinker_lock);
-
-	if (freed > 0)
-		pr_info_ratelimited("Purging %lu bytes\n", freed << PAGE_SHIFT);
-
-	return freed;
-}
-
-/**
- * panfrost_gem_shrinker_init - Initialize panfrost shrinker
- * @dev: DRM device
- *
- * This function registers and sets up the panfrost shrinker.
- */
-void panfrost_gem_shrinker_init(struct drm_device *dev)
-{
-	struct panfrost_device *pfdev = dev->dev_private;
-
-	pfdev->shrinker.count_objects = panfrost_gem_shrinker_count;
-	pfdev->shrinker.scan_objects = panfrost_gem_shrinker_scan;
-	pfdev->shrinker.seeks = DEFAULT_SEEKS;
-	WARN_ON(register_shrinker(&pfdev->shrinker, "drm-panfrost"));
-}
-
-/**
- * panfrost_gem_shrinker_cleanup - Clean up panfrost shrinker
- * @dev: DRM device
- *
- * This function unregisters the panfrost shrinker.
- */
-void panfrost_gem_shrinker_cleanup(struct drm_device *dev)
-{
-	struct panfrost_device *pfdev = dev->dev_private;
-
-	if (pfdev->shrinker.nr_deferred) {
-		unregister_shrinker(&pfdev->shrinker);
-	}
-}
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index fb16de2d0420..da6be590557f 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -289,6 +289,19 @@ static void panfrost_attach_object_fences(struct drm_gem_object **bos,
 		dma_resv_add_fence(bos[i]->resv, fence, DMA_RESV_USAGE_WRITE);
 }
 
+static int panfrost_objects_prepare(struct drm_gem_object **bos, int bo_count)
+{
+	struct panfrost_gem_object *bo;
+	int ret = 0;
+
+	while (!ret && bo_count--) {
+		bo = to_panfrost_bo(bos[bo_count]);
+		ret = bo->base.madv ? -ENOMEM : 0;
+	}
+
+	return ret;
+}
+
 int panfrost_job_push(struct panfrost_job *job)
 {
 	struct panfrost_device *pfdev = job->pfdev;
@@ -300,6 +313,10 @@ int panfrost_job_push(struct panfrost_job *job)
 	if (ret)
 		return ret;
 
+	ret = panfrost_objects_prepare(job->bos, job->bo_count);
+	if (ret)
+		goto unlock;
+
 	mutex_lock(&pfdev->sched_lock);
 	drm_sched_job_arm(&job->base);
 
@@ -341,7 +358,6 @@ static void panfrost_job_cleanup(struct kref *ref)
 			if (!job->mappings[i])
 				break;
 
-			atomic_dec(&job->mappings[i]->obj->gpu_usecount);
 			panfrost_gem_mapping_put(job->mappings[i]);
 		}
 		kvfree(job->mappings);
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index ac145a98377b..01cd97011ea5 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -326,6 +326,7 @@ int panfrost_mmu_map(struct panfrost_gem_mapping *mapping)
 	struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
 	struct sg_table *sgt;
 	int prot = IOMMU_READ | IOMMU_WRITE;
+	int ret = 0;
 
 	if (WARN_ON(mapping->active))
 		return 0;
@@ -333,15 +334,26 @@ int panfrost_mmu_map(struct panfrost_gem_mapping *mapping)
 	if (bo->noexec)
 		prot |= IOMMU_NOEXEC;
 
+	if (!obj->import_attach) {
+		ret = drm_gem_shmem_pin(shmem);
+		if (ret)
+			return ret;
+	}
+
 	sgt = drm_gem_shmem_get_pages_sgt(shmem);
-	if (WARN_ON(IS_ERR(sgt)))
-		return PTR_ERR(sgt);
+	if (WARN_ON(IS_ERR(sgt))) {
+		ret = PTR_ERR(sgt);
+		goto unpin;
+	}
 
 	mmu_map_sg(pfdev, mapping->mmu, mapping->mmnode.start << PAGE_SHIFT,
 		   prot, sgt);
 	mapping->active = true;
 
+unpin:
+	if (!obj->import_attach)
+		drm_gem_shmem_unpin(shmem);
-	return 0;
+
+	return ret;
 }
 
 void panfrost_mmu_unmap(struct panfrost_gem_mapping *mapping)
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 6ac77c2082ed..2a506074da46 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -61,13 +61,6 @@ struct drm_gem_shmem_object {
 	 */
 	int madv;
 
-	/**
-	 * @madv_list: List entry for madvise tracking
-	 *
-	 * Typically used by drivers to track purgeable objects
-	 */
-	struct list_head madv_list;
-
 	/**
 	 * @sgt: Scatter/gather table for imported PRIME buffers
 	 */