From patchwork Fri Jan 5 18:45:55 2024
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 185484
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 01/30] drm/gem: Change locked/unlocked postfix of drm_gem_v/unmap() function names
Date: Fri, 5 Jan 2024 21:45:55 +0300
Message-ID: <20240105184624.508603-2-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
References: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

Make the drm/gem API function names consistent: locked functions use the
_locked postfix in their names, while the unlocked variants carry no
_unlocked postfix. Rename the drm_gem_v/unmap() functions so that they
follow the same convention as the rest of the API.

Acked-by: Maxime Ripard
Reviewed-by: Boris Brezillon
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
---
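As an illustration of the resulting convention (this sketch is not part of
the patch and the example_*() helpers are made-up names): after the rename,
drm_gem_vmap() takes the object's reservation lock internally, while
drm_gem_vmap_locked() expects the caller to already hold it.

#include <drm/drm_gem.h>
#include <linux/dma-resv.h>
#include <linux/iosys-map.h>

/* Unlocked variant: drm_gem_vmap() acquires obj->resv by itself. */
static int example_vmap_simple(struct drm_gem_object *obj, struct iosys_map *map)
{
	return drm_gem_vmap(obj, map);
}

/* Locked variant: the caller holds the reservation lock across the call.
 * This open-coded sequence is equivalent to calling drm_gem_vmap().
 */
static int example_vmap_under_lock(struct drm_gem_object *obj, struct iosys_map *map)
{
	int ret;

	dma_resv_lock(obj->resv, NULL);
	ret = drm_gem_vmap_locked(obj, map);
	dma_resv_unlock(obj->resv);

	return ret;
}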
 drivers/gpu/drm/drm_client.c                 |  6 +++---
 drivers/gpu/drm/drm_gem.c                    | 20 ++++++++++----------
 drivers/gpu/drm/drm_gem_framebuffer_helper.c |  6 +++---
 drivers/gpu/drm/drm_internal.h               |  4 ++--
 drivers/gpu/drm/drm_prime.c                  |  4 ++--
 drivers/gpu/drm/lima/lima_sched.c            |  4 ++--
 drivers/gpu/drm/panfrost/panfrost_dump.c     |  4 ++--
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c  |  6 +++---
 include/drm/drm_gem.h                        |  4 ++--
 9 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index 9403b3f576f7..7ee9baf46eaa 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -255,7 +255,7 @@ void drm_client_dev_restore(struct drm_device *dev)
 static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
 {
 	if (buffer->gem) {
-		drm_gem_vunmap_unlocked(buffer->gem, &buffer->map);
+		drm_gem_vunmap(buffer->gem, &buffer->map);
 		drm_gem_object_put(buffer->gem);
 	}
 
@@ -339,7 +339,7 @@ drm_client_buffer_vmap(struct drm_client_buffer *buffer,
 	 * fd_install step out of the driver backend hooks, to make that
 	 * final step optional for internal users.
 	 */
-	ret = drm_gem_vmap_unlocked(buffer->gem, map);
+	ret = drm_gem_vmap(buffer->gem, map);
 	if (ret)
 		return ret;
 
@@ -361,7 +361,7 @@ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
 {
 	struct iosys_map *map = &buffer->map;
 
-	drm_gem_vunmap_unlocked(buffer->gem, map);
+	drm_gem_vunmap(buffer->gem, map);
 }
 EXPORT_SYMBOL(drm_client_buffer_vunmap);
 
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 44a948b80ee1..95327b003692 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1175,7 +1175,7 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 	obj->funcs->unpin(obj);
 }
 
-int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
+int drm_gem_vmap_locked(struct drm_gem_object *obj, struct iosys_map *map)
 {
 	int ret;
 
@@ -1192,9 +1192,9 @@ int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
 
 	return 0;
 }
-EXPORT_SYMBOL(drm_gem_vmap);
+EXPORT_SYMBOL(drm_gem_vmap_locked);
 
-void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
+void drm_gem_vunmap_locked(struct drm_gem_object *obj, struct iosys_map *map)
 {
 	dma_resv_assert_held(obj->resv);
 
@@ -1207,27 +1207,27 @@ void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
 	/* Always set the mapping to NULL. Callers may rely on this. */
 	iosys_map_clear(map);
 }
-EXPORT_SYMBOL(drm_gem_vunmap);
+EXPORT_SYMBOL(drm_gem_vunmap_locked);
 
-int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map)
+int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
 {
 	int ret;
 
 	dma_resv_lock(obj->resv, NULL);
-	ret = drm_gem_vmap(obj, map);
+	ret = drm_gem_vmap_locked(obj, map);
 	dma_resv_unlock(obj->resv);
 
 	return ret;
 }
-EXPORT_SYMBOL(drm_gem_vmap_unlocked);
+EXPORT_SYMBOL(drm_gem_vmap);
 
-void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map)
+void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
 {
 	dma_resv_lock(obj->resv, NULL);
-	drm_gem_vunmap(obj, map);
+	drm_gem_vunmap_locked(obj, map);
 	dma_resv_unlock(obj->resv);
 }
-EXPORT_SYMBOL(drm_gem_vunmap_unlocked);
+EXPORT_SYMBOL(drm_gem_vunmap);
 
 /**
  * drm_gem_lock_reservations - Sets up the ww context and acquires
diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
index 3bdb6ba37ff4..3808f47310bf 100644
--- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c
+++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
@@ -362,7 +362,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, struct iosys_map *map,
 			ret = -EINVAL;
 			goto err_drm_gem_vunmap;
 		}
-		ret = drm_gem_vmap_unlocked(obj, &map[i]);
+		ret = drm_gem_vmap(obj, &map[i]);
 		if (ret)
 			goto err_drm_gem_vunmap;
 	}
@@ -384,7 +384,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, struct iosys_map *map,
 		obj = drm_gem_fb_get_obj(fb, i);
 		if (!obj)
 			continue;
-		drm_gem_vunmap_unlocked(obj, &map[i]);
+		drm_gem_vunmap(obj, &map[i]);
 	}
 	return ret;
 }
@@ -411,7 +411,7 @@ void drm_gem_fb_vunmap(struct drm_framebuffer *fb, struct iosys_map *map)
 			continue;
 		if (iosys_map_is_null(&map[i]))
 			continue;
-		drm_gem_vunmap_unlocked(obj, &map[i]);
+		drm_gem_vunmap(obj, &map[i]);
 	}
 }
 EXPORT_SYMBOL(drm_gem_fb_vunmap);
diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index 8e4faf0a28e6..227f58e5b232 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -172,8 +172,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
 int drm_gem_pin(struct drm_gem_object *obj);
 void drm_gem_unpin(struct drm_gem_object *obj);
-int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map);
-void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
+int drm_gem_vmap_locked(struct drm_gem_object *obj, struct iosys_map *map);
+void drm_gem_vunmap_locked(struct drm_gem_object *obj, struct iosys_map *map);
 
 /* drm_debugfs.c drm_debugfs_crc.c */
 #if defined(CONFIG_DEBUG_FS)
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 834a5e28abbe..4a5935a400ec 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -684,7 +684,7 @@ int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct iosys_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
 
-	return drm_gem_vmap(obj, map);
+	return drm_gem_vmap_locked(obj, map);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
 
@@ -700,7 +700,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
 
-	drm_gem_vunmap(obj, map);
+	drm_gem_vunmap_locked(obj, map);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
 
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index c3bf8cda8498..3813f30480ba 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -371,7 +371,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 		} else {
 			buffer_chunk->size = lima_bo_size(bo);
 
-			ret = drm_gem_vmap_unlocked(&bo->base.base, &map);
+			ret = drm_gem_vmap(&bo->base.base, &map);
 			if (ret) {
 				kvfree(et);
 				goto out;
@@ -379,7 +379,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 
 			memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
 
-			drm_gem_vunmap_unlocked(&bo->base.base, &map);
+			drm_gem_vunmap(&bo->base.base, &map);
 		}
 
 		buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
diff --git a/drivers/gpu/drm/panfrost/panfrost_dump.c b/drivers/gpu/drm/panfrost/panfrost_dump.c
index 47751302f1bc..4042afe2fbf4 100644
--- a/drivers/gpu/drm/panfrost/panfrost_dump.c
+++ b/drivers/gpu/drm/panfrost/panfrost_dump.c
@@ -209,7 +209,7 @@ void panfrost_core_dump(struct panfrost_job *job)
 			goto dump_header;
 		}
 
-		ret = drm_gem_vmap_unlocked(&bo->base.base, &map);
+		ret = drm_gem_vmap(&bo->base.base, &map);
 		if (ret) {
 			dev_err(pfdev->dev, "Panfrost Dump: couldn't map Buffer Object\n");
 			iter.hdr->bomap.valid = 0;
@@ -228,7 +228,7 @@ void panfrost_core_dump(struct panfrost_job *job)
 		vaddr = map.vaddr;
 		memcpy(iter.data, vaddr, bo->base.base.size);
 
-		drm_gem_vunmap_unlocked(&bo->base.base, &map);
+		drm_gem_vunmap(&bo->base.base, &map);
 
 		iter.hdr->bomap.valid = 1;
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index ba9b6e2b2636..52befead08c6 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -106,7 +106,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 		goto err_close_bo;
 	}
 
-	ret = drm_gem_vmap_unlocked(&bo->base, &map);
+	ret = drm_gem_vmap(&bo->base, &map);
 	if (ret)
 		goto err_put_mapping;
 	perfcnt->buf = map.vaddr;
@@ -165,7 +165,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 	return 0;
 
 err_vunmap:
-	drm_gem_vunmap_unlocked(&bo->base, &map);
+	drm_gem_vunmap(&bo->base, &map);
 err_put_mapping:
 	panfrost_gem_mapping_put(perfcnt->mapping);
 err_close_bo:
@@ -195,7 +195,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
 			  GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
 
 	perfcnt->user = NULL;
-	drm_gem_vunmap_unlocked(&perfcnt->mapping->obj->base.base, &map);
+	drm_gem_vunmap(&perfcnt->mapping->obj->base.base, &map);
 	perfcnt->buf = NULL;
 	panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
 	panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 369505447acd..decb19ffb2c8 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -527,8 +527,8 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj);
 void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 		bool dirty, bool accessed);
 
-int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map);
-void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map);
+int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map);
+void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
 
 int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
 			   int count, struct drm_gem_object ***objs_out);

From patchwork Fri Jan 5 18:45:56 2024
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 185486
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 02/30] drm/gem: Add _locked postfix to functions that have unlocked counterpart
Date: Fri, 5 Jan 2024 21:45:56 +0300
Message-ID: <20240105184624.508603-3-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
References: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

Add the _locked postfix to the drm_gem functions that have an unlocked
counterpart, making the GEM function naming more consistent and intuitive
with regard to the locking requirements.

Acked-by: Maxime Ripard
Reviewed-by: Boris Brezillon
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
---
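For illustration only (not part of the patch; example_try_evict() is a
made-up name): with the rename, callers are expected to hold the GEM's
reservation lock around drm_gem_evict_locked(), for example:

#include <drm/drm_gem.h>
#include <linux/dma-resv.h>

static bool example_try_evict(struct drm_gem_object *obj)
{
	bool evicted = false;

	/* The _locked suffix: take obj->resv before calling the helper. */
	if (!dma_resv_trylock(obj->resv))
		return false;

	if (drm_gem_evict_locked(obj) == 0)
		evicted = true;

	dma_resv_unlock(obj->resv);

	return evicted;
}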
 drivers/gpu/drm/drm_gem.c | 6 +++---
 include/drm/drm_gem.h     | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 95327b003692..4523cd40fb2f 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1490,10 +1490,10 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,
 EXPORT_SYMBOL(drm_gem_lru_scan);
 
 /**
- * drm_gem_evict - helper to evict backing pages for a GEM object
+ * drm_gem_evict_locked - helper to evict backing pages for a GEM object
  * @obj: obj in question
  */
-int drm_gem_evict(struct drm_gem_object *obj)
+int drm_gem_evict_locked(struct drm_gem_object *obj)
 {
 	dma_resv_assert_held(obj->resv);
 
@@ -1505,4 +1505,4 @@ int drm_gem_evict(struct drm_gem_object *obj)
 
 	return 0;
 }
-EXPORT_SYMBOL(drm_gem_evict);
+EXPORT_SYMBOL(drm_gem_evict_locked);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index decb19ffb2c8..f835fdee6a5e 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -551,7 +551,7 @@ unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
 			       unsigned long *remaining,
 			       bool (*shrink)(struct drm_gem_object *obj));
 
-int drm_gem_evict(struct drm_gem_object *obj);
+int drm_gem_evict_locked(struct drm_gem_object *obj);
 
 #ifdef CONFIG_LOCKDEP
 /**

From patchwork Fri Jan 5 18:45:57 2024
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 185485
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 03/30] drm/gem: Document locking rule of vmap and evict callbacks
Date: Fri, 5 Jan 2024 21:45:57 +0300
Message-ID: <20240105184624.508603-4-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
References: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

The vmap/vunmap/evict GEM callbacks are always invoked with the GEM's
reservation lock held. Document this locking rule for clarity.

Signed-off-by: Dmitry Osipenko
Reviewed-by: Boris Brezillon
---
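For illustration only (not part of the patch; the example_* names are made
up): because the core calls these callbacks with the reservation lock
already held, a driver callback may assert the lock rather than take it:

#include <drm/drm_gem.h>
#include <linux/dma-resv.h>
#include <linux/iosys-map.h>

static int example_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
{
	/* The documented rule: the caller already holds obj->resv. */
	dma_resv_assert_held(obj->resv);

	/* ... set up the kernel mapping and fill *map here ... */
	return 0;
}

static const struct drm_gem_object_funcs example_gem_funcs = {
	.vmap = example_gem_vmap,
};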
 include/drm/drm_gem.h | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index f835fdee6a5e..021f64371056 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -156,7 +156,8 @@ struct drm_gem_object_funcs {
 	 * @vmap:
 	 *
 	 * Returns a virtual address for the buffer. Used by the
-	 * drm_gem_dmabuf_vmap() helper.
+	 * drm_gem_dmabuf_vmap() helper. Called with a held GEM reservation
+	 * lock.
 	 *
 	 * This callback is optional.
 	 */
@@ -166,7 +167,8 @@ struct drm_gem_object_funcs {
 	 * @vunmap:
 	 *
 	 * Releases the address previously returned by @vmap. Used by the
-	 * drm_gem_dmabuf_vunmap() helper.
+	 * drm_gem_dmabuf_vunmap() helper. Called with a held GEM reservation
+	 * lock.
 	 *
 	 * This callback is optional.
 	 */
@@ -189,7 +191,8 @@ struct drm_gem_object_funcs {
 	 * @evict:
 	 *
 	 * Evicts gem object out from memory. Used by the drm_gem_object_evict()
-	 * helper. Returns 0 on success, -errno otherwise.
+	 * helper. Returns 0 on success, -errno otherwise. Called with a held
+	 * GEM reservation lock.
 	 *
 	 * This callback is optional.
 	 */

From patchwork Fri Jan 5 18:45:58 2024
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 185487
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 04/30] drm/shmem-helper: Make all exported symbols GPL
Date: Fri, 5 Jan 2024 21:45:58 +0300
Message-ID: <20240105184624.508603-5-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
References: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

Make all exported drm-shmem symbols GPL-only to make them consistent with
the rest of the drm-shmem symbols.

Acked-by: Maxime Ripard
Reviewed-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
---
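For illustration only (not part of the patch; the example_* names are made
up): with EXPORT_SYMBOL_GPL(), these helpers resolve only for modules that
declare a GPL-compatible license, so a user of the helpers looks like:

#include <linux/module.h>
#include <drm/drm_gem_shmem_helper.h>

/* A module calling a GPL-only drm-shmem helper must itself be GPL. */
static int example_pin(struct drm_gem_shmem_object *shmem)
{
	return drm_gem_shmem_pin(shmem);
}

MODULE_LICENSE("GPL");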
 drivers/gpu/drm/drm_gem_shmem_helper.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index e435f986cd13..0d61f2b3e213 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -226,7 +226,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 			  shmem->pages_mark_accessed_on_put);
 	shmem->pages = NULL;
 }
-EXPORT_SYMBOL(drm_gem_shmem_put_pages);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages);
 
 static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 {
@@ -271,7 +271,7 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
 
 	return ret;
 }
-EXPORT_SYMBOL(drm_gem_shmem_pin);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_pin);
 
 /**
  * drm_gem_shmem_unpin - Unpin backing pages for a shmem GEM object
@@ -290,7 +290,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 	drm_gem_shmem_unpin_locked(shmem);
 	dma_resv_unlock(shmem->base.resv);
 }
-EXPORT_SYMBOL(drm_gem_shmem_unpin);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
 
 /*
  * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
@@ -360,7 +360,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 
 	return ret;
 }
-EXPORT_SYMBOL(drm_gem_shmem_vmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap);
 
 /*
  * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
@@ -396,7 +396,7 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
 
 	shmem->vaddr = NULL;
 }
-EXPORT_SYMBOL(drm_gem_shmem_vunmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap);
 
 static int
 drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
@@ -435,7 +435,7 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
 
 	return (madv >= 0);
 }
-EXPORT_SYMBOL(drm_gem_shmem_madvise);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise);
 
 void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 {
@@ -467,7 +467,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
 }
-EXPORT_SYMBOL(drm_gem_shmem_purge);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_purge);
 
 /**
  * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
@@ -642,7 +642,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
 }
-EXPORT_SYMBOL(drm_gem_shmem_print_info);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_print_info);
 
 /**
  * drm_gem_shmem_get_sg_table - Provide a scatter/gather table of pinned

From patchwork Fri Jan 5 18:45:59 2024
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 185489
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 05/30] drm/shmem-helper: Refactor locked/unlocked functions
Date: Fri, 5 Jan 2024 21:45:59 +0300
Message-ID: <20240105184624.508603-6-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
References: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

Add the _locked postfix to, and remove the _unlocked postfix from, the
drm-shmem function names, making them consistent with the drm/gem core
code.

Reviewed-by: Boris Brezillon
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
---
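For illustration only (not part of the patch; example_shmem_vmap() is a
made-up name): after the rename, the _locked suffix signals that the caller
holds the shmem GEM's reservation lock, e.g.:

#include <drm/drm_gem_shmem_helper.h>
#include <linux/dma-resv.h>
#include <linux/iosys-map.h>

static int example_shmem_vmap(struct drm_gem_shmem_object *shmem,
			      struct iosys_map *map)
{
	int ret;

	/* Take the reservation lock, then call the _locked helper. */
	dma_resv_lock(shmem->base.resv, NULL);
	ret = drm_gem_shmem_vmap_locked(shmem, map);
	dma_resv_unlock(shmem->base.resv);

	return ret;
}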
*/ -void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem) +void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; @@ -226,7 +226,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem) shmem->pages_mark_accessed_on_put); shmem->pages = NULL; } -EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages); +EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked); static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem) { @@ -234,7 +234,7 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem) dma_resv_assert_held(shmem->base.resv); - ret = drm_gem_shmem_get_pages(shmem); + ret = drm_gem_shmem_get_pages_locked(shmem); return ret; } @@ -243,7 +243,7 @@ static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem) { dma_resv_assert_held(shmem->base.resv); - drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_put_pages_locked(shmem); } /** @@ -293,7 +293,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem) EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin); /* - * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object + * drm_gem_shmem_vmap_locked - Create a virtual mapping for a shmem GEM object * @shmem: shmem GEM object * @map: Returns the kernel virtual address of the SHMEM GEM object's backing * store. @@ -302,13 +302,13 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin); * exists for the buffer backing the shmem GEM object. It hides the differences * between dma-buf imported and natively allocated objects. * - * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap(). + * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap_locked(). * * Returns: * 0 on success or a negative error code on failure. */ -int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, - struct iosys_map *map) +int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, + struct iosys_map *map) { struct drm_gem_object *obj = &shmem->base; int ret = 0; @@ -331,7 +331,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, return 0; } - ret = drm_gem_shmem_get_pages(shmem); + ret = drm_gem_shmem_get_pages_locked(shmem); if (ret) goto err_zero_use; @@ -354,28 +354,28 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, err_put_pages: if (!obj->import_attach) - drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_put_pages_locked(shmem); err_zero_use: shmem->vmap_use_count = 0; return ret; } -EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap); +EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap_locked); /* - * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object + * drm_gem_shmem_vunmap_locked - Unmap a virtual mapping for a shmem GEM object * @shmem: shmem GEM object * @map: Kernel virtual address where the SHMEM GEM object was mapped * * This function cleans up a kernel virtual address mapping acquired by - * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to - * zero. + * drm_gem_shmem_vmap_locked(). The mapping is only removed when the use count + * drops to zero. * * This function hides the differences between dma-buf imported and natively * allocated objects. 
*/ -void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, - struct iosys_map *map) +void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, + struct iosys_map *map) { struct drm_gem_object *obj = &shmem->base; @@ -391,12 +391,12 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, return; vunmap(shmem->vaddr); - drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_put_pages_locked(shmem); } shmem->vaddr = NULL; } -EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap); +EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap_locked); static int drm_gem_shmem_create_with_handle(struct drm_file *file_priv, @@ -424,7 +424,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv, /* Update madvise status, returns true if not purged, else * false or -errno. */ -int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv) +int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv) { dma_resv_assert_held(shmem->base.resv); @@ -435,9 +435,9 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv) return (madv >= 0); } -EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise); +EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise_locked); -void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem) +void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; struct drm_device *dev = obj->dev; @@ -451,7 +451,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem) kfree(shmem->sgt); shmem->sgt = NULL; - drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_put_pages_locked(shmem); shmem->madv = -1; @@ -467,7 +467,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem) invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1); } -EXPORT_SYMBOL_GPL(drm_gem_shmem_purge); +EXPORT_SYMBOL_GPL(drm_gem_shmem_purge_locked); /** * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object @@ -564,7 +564,7 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma) struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); dma_resv_lock(shmem->base.resv, NULL); - drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_put_pages_locked(shmem); dma_resv_unlock(shmem->base.resv); drm_gem_vm_close(vma); @@ -611,7 +611,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct } dma_resv_lock(shmem->base.resv, NULL); - ret = drm_gem_shmem_get_pages(shmem); + ret = drm_gem_shmem_get_pages_locked(shmem); dma_resv_unlock(shmem->base.resv); if (ret) @@ -679,7 +679,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_ drm_WARN_ON(obj->dev, obj->import_attach); - ret = drm_gem_shmem_get_pages(shmem); + ret = drm_gem_shmem_get_pages_locked(shmem); if (ret) return ERR_PTR(ret); @@ -701,7 +701,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_ sg_free_table(sgt); kfree(sgt); err_put_pages: - drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_put_pages_locked(shmem); return ERR_PTR(ret); } diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index 4f9736e5f929..433bda72e59b 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -180,7 +180,7 @@ static int lima_gem_pin(struct drm_gem_object *obj) if (bo->heap_size) return -EINVAL; - return drm_gem_shmem_pin(&bo->base); + return drm_gem_shmem_object_pin(obj); } static int lima_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map) @@ -190,7 +190,7 @@ static int lima_gem_vmap(struct drm_gem_object *obj, struct 
iosys_map *map) if (bo->heap_size) return -EINVAL; - return drm_gem_shmem_vmap(&bo->base, map); + return drm_gem_shmem_object_vmap(obj, map); } static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) @@ -200,7 +200,7 @@ static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) if (bo->heap_size) return -EINVAL; - return drm_gem_shmem_mmap(&bo->base, vma); + return drm_gem_shmem_object_mmap(obj, vma); } static const struct drm_gem_object_funcs lima_gem_funcs = { diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c index a926d71e8131..a15d62f19afb 100644 --- a/drivers/gpu/drm/panfrost/panfrost_drv.c +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c @@ -438,7 +438,7 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data, } } - args->retained = drm_gem_shmem_madvise(&bo->base, args->madv); + args->retained = drm_gem_shmem_madvise_locked(&bo->base, args->madv); if (args->retained) { if (args->madv == PANFROST_MADV_DONTNEED) diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c index d47b40b82b0b..f268bd5c2884 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c @@ -192,7 +192,7 @@ static int panfrost_gem_pin(struct drm_gem_object *obj) if (bo->is_heap) return -EINVAL; - return drm_gem_shmem_pin(&bo->base); + return drm_gem_shmem_object_pin(obj); } static enum drm_gem_object_status panfrost_gem_status(struct drm_gem_object *obj) diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c index 3d9f51bd48b6..02b60ea1433a 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c @@ -51,7 +51,7 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj) goto unlock_mappings; panfrost_gem_teardown_mappings_locked(bo); - drm_gem_shmem_purge(&bo->base); + drm_gem_shmem_purge_locked(&bo->base); ret = true; dma_resv_unlock(shmem->base.resv); diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c index f38385fe76bb..1ab081bd81a8 100644 --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c @@ -538,7 +538,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, err_map: sg_free_table(sgt); err_pages: - drm_gem_shmem_put_pages(&bo->base); + drm_gem_shmem_put_pages_locked(&bo->base); err_unlock: dma_resv_unlock(obj->resv); err_bo: diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index bf0c31aa8fbe..9e83212becbb 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -99,16 +99,16 @@ struct drm_gem_shmem_object { struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size); void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem); -void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem); +void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem); int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem); void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem); -int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, - struct iosys_map *map); -void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, - struct iosys_map *map); +int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, + struct iosys_map *map); +void drm_gem_shmem_vunmap_locked(struct 
drm_gem_shmem_object *shmem, + struct iosys_map *map); int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma); -int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv); +int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv); static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem) { @@ -117,7 +117,7 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem !shmem->base.dma_buf && !shmem->base.import_attach; } -void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem); +void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem); struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem); struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem); @@ -208,12 +208,12 @@ static inline struct sg_table *drm_gem_shmem_object_get_sg_table(struct drm_gem_ } /* - * drm_gem_shmem_object_vmap - GEM object function for drm_gem_shmem_vmap() + * drm_gem_shmem_object_vmap - GEM object function for drm_gem_shmem_vmap_locked() * @obj: GEM object * @map: Returns the kernel virtual address of the SHMEM GEM object's backing store. * - * This function wraps drm_gem_shmem_vmap(). Drivers that employ the shmem helpers should - * use it as their &drm_gem_object_funcs.vmap handler. + * This function wraps drm_gem_shmem_vmap_locked(). Drivers that employ the shmem + * helpers should use it as their &drm_gem_object_funcs.vmap handler. * * Returns: * 0 on success or a negative error code on failure. @@ -223,7 +223,7 @@ static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj, { struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); - return drm_gem_shmem_vmap(shmem, map); + return drm_gem_shmem_vmap_locked(shmem, map); } /* @@ -231,15 +231,15 @@ static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj, * @obj: GEM object * @map: Kernel virtual address where the SHMEM GEM object was mapped * - * This function wraps drm_gem_shmem_vunmap(). Drivers that employ the shmem helpers should - * use it as their &drm_gem_object_funcs.vunmap handler. + * This function wraps drm_gem_shmem_vunmap_locked(). Drivers that employ the shmem + * helpers should use it as their &drm_gem_object_funcs.vunmap handler. 
 */
 static inline void drm_gem_shmem_object_vunmap(struct drm_gem_object *obj,
 					       struct iosys_map *map)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
-	drm_gem_shmem_vunmap(shmem, map);
+	drm_gem_shmem_vunmap_locked(shmem, map);
 }
 
 /**
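The rename above settles the naming convention: helpers with a *_locked suffix expect the caller to already hold the GEM's dma_resv lock, while the un-suffixed helpers take the lock themselves. A minimal hypothetical driver snippet (illustration only, not part of the series; the function name is invented) showing how that convention is meant to be used:

/* Hypothetical example: call drm_gem_shmem_vmap_locked() with the
 * reservation lock held, as the *_locked naming now requires.
 */
static int example_vmap_bo(struct drm_gem_shmem_object *shmem,
			   struct iosys_map *map)
{
	int ret;

	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
	if (ret)
		return ret;

	ret = drm_gem_shmem_vmap_locked(shmem, map);	/* caller holds resv */

	dma_resv_unlock(shmem->base.resv);

	return ret;
}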
From patchwork Fri Jan 5 18:46:00 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 185488
Received: from workpc..
(cola.collaboradmins.com [195.201.22.229]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madrid.collaboradmins.com (Postfix) with ESMTPSA id E692E3782041; Fri, 5 Jan 2024 18:46:52 +0000 (UTC) From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?utf-8?q?Christian_K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v19 06/30] drm/shmem-helper: Remove obsoleted is_iomem test Date: Fri, 5 Jan 2024 21:46:00 +0300 Message-ID: <20240105184624.508603-7-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com> References: <20240105184624.508603-1-dmitry.osipenko@collabora.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787277374219029005 X-GMAIL-MSGID: 1787277374219029005 Everything that uses the mapped buffer should be agnostic to is_iomem. The only reason for the is_iomem test is that we're setting shmem->vaddr to the returned map->vaddr. Now that the shmem->vaddr code is gone, remove the obsoleted is_iomem test to clean up the code. Acked-by: Maxime Ripard Suggested-by: Thomas Zimmermann Reviewed-by: Boris Brezillon Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 6 ------ 1 file changed, 6 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 043e8e3b129c..1f0a66386415 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -315,12 +315,6 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, if (obj->import_attach) { ret = dma_buf_vmap(obj->import_attach->dmabuf, map); - if (!ret) { - if (drm_WARN_ON(obj->dev, map->is_iomem)) { - dma_buf_vunmap(obj->import_attach->dmabuf, map); - return -EIO; - } - } } else { pgprot_t prot = PAGE_KERNEL; From patchwork Fri Jan 5 18:46:01 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 185491 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7301:6f82:b0:100:9c79:88ff with SMTP id tb2csp6403359dyb; Fri, 5 Jan 2024 10:49:49 -0800 (PST) X-Google-Smtp-Source: AGHT+IH1Uif0/xuxVvI5ZeA27T8D36kS50I//6s4ioKj1P6bQv2VkBjOABkHdCHDCVv+kG4ehKsA X-Received: by 2002:a17:903:2291:b0:1d0:6ffd:835b with SMTP id b17-20020a170903229100b001d06ffd835bmr2397351plh.102.1704480589003; Fri, 05 Jan 2024 10:49:49 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1704480588; cv=none; d=google.com; s=arc-20160816; b=WCA0OA0jCz3fm21psvvw63Dpt0w+Y6+0uj4Yo1wxE3n0t/oOLzMkY74THz1fsQOJuj iVYnH1P1Xdtegg3fgsgOsh/3+yZyMGwtIXKa3AnDJRPRqoXrjYC+9an3SLDaDpR8KvTH hPU9XTN+S+E+xKbsnxWTXn5BzRYV3JfQMcVvypqESwKIQlQELpiXIa+vwB5uZsmzWIVm rvg2qL/OgBc5V4NBxGMqnD0wHTrBsDpJ7QzSAQ8plvqHTcEnmOJJ6/SE8Rhr2LHxv0gc hxLRU6lg+usLaGQLmTmaZJCfAxOBZer615aS0yp3f0PkJkbU0KgNEX7bgYEzj3tjJEIH B5UQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; 
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 07/30] drm/shmem-helper: Add and use pages_pin_count
Date: Fri, 5 Jan 2024 21:46:01 +0300
Message-ID: <20240105184624.508603-8-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
References: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

Add a separate pages_pin_count for tracking whether drm-shmem pages are
movable or not. With the addition of memory shrinker support to drm-shmem,
pages_use_count will no longer determine whether pages are hard-pinned in
memory, but only whether pages exist and are soft-pinned (and could be
swapped out). A pages_pin_count > 0 will hard-pin pages in memory.
Acked-by: Maxime Ripard Reviewed-by: Boris Brezillon Suggested-by: Boris Brezillon Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 25 +++++++++++++++++-------- include/drm/drm_gem_shmem_helper.h | 11 +++++++++++ 2 files changed, 28 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 1f0a66386415..55b9dd3d4b18 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -156,6 +156,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) drm_gem_shmem_put_pages_locked(shmem); drm_WARN_ON(obj->dev, shmem->pages_use_count); + drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count)); dma_resv_unlock(shmem->base.resv); } @@ -234,18 +235,16 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem) dma_resv_assert_held(shmem->base.resv); + if (refcount_inc_not_zero(&shmem->pages_pin_count)) + return 0; + ret = drm_gem_shmem_get_pages_locked(shmem); + if (!ret) + refcount_set(&shmem->pages_pin_count, 1); return ret; } -static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem) -{ - dma_resv_assert_held(shmem->base.resv); - - drm_gem_shmem_put_pages_locked(shmem); -} - /** * drm_gem_shmem_pin - Pin backing pages for a shmem GEM object * @shmem: shmem GEM object @@ -263,6 +262,9 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem) drm_WARN_ON(obj->dev, obj->import_attach); + if (refcount_inc_not_zero(&shmem->pages_pin_count)) + return 0; + ret = dma_resv_lock_interruptible(shmem->base.resv, NULL); if (ret) return ret; @@ -286,8 +288,14 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem) drm_WARN_ON(obj->dev, obj->import_attach); + if (refcount_dec_not_one(&shmem->pages_pin_count)) + return; + dma_resv_lock(shmem->base.resv, NULL); - drm_gem_shmem_unpin_locked(shmem); + + if (refcount_dec_and_test(&shmem->pages_pin_count)) + drm_gem_shmem_put_pages_locked(shmem); + dma_resv_unlock(shmem->base.resv); } EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin); @@ -632,6 +640,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem, if (shmem->base.import_attach) return; + drm_printf_indent(p, indent, "pages_pin_count=%u\n", refcount_read(&shmem->pages_pin_count)); drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count); drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count); drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr); diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index 9e83212becbb..c708a9f45cbd 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -39,6 +39,17 @@ struct drm_gem_shmem_object { */ unsigned int pages_use_count; + /** + * @pages_pin_count: + * + * Reference count on the pinned pages table. + * + * Pages are hard-pinned and reside in memory if count + * greater than zero. Otherwise, when count is zero, the pages are + * allowed to be evicted and purged by memory shrinker. 
+	 */
+	refcount_t pages_pin_count;
+
 	/**
 	 * @madv: State for madvise
 	 *
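The new counter separates "pages exist" (pages_use_count) from "pages must stay resident" (pages_pin_count). A short hypothetical driver-side sketch of the intended semantics once the shrinker lands (illustration only, not from the patch; the function name is invented):

/* Illustration: hard-pin the backing pages around work that must not race
 * with eviction, then drop the pin so the shrinker may reclaim them again.
 */
static int example_submit(struct drm_gem_shmem_object *shmem)
{
	int ret;

	/* pages_pin_count 0 -> 1: pages are now hard-pinned in memory */
	ret = drm_gem_shmem_pin(shmem);
	if (ret)
		return ret;

	/* ... hardware accesses shmem->pages / shmem->sgt here ... */

	/* pages_pin_count 1 -> 0: pages become evictable/purgeable again */
	drm_gem_shmem_unpin(shmem);

	return 0;
}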
From patchwork Fri Jan 5 18:46:02 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 185490
Received: from workpc..
(cola.collaboradmins.com [195.201.22.229]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madrid.collaboradmins.com (Postfix) with ESMTPSA id F20EB3782046; Fri, 5 Jan 2024 18:46:55 +0000 (UTC) From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?utf-8?q?Christian_K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v19 08/30] drm/shmem-helper: Use refcount_t for pages_use_count Date: Fri, 5 Jan 2024 21:46:02 +0300 Message-ID: <20240105184624.508603-9-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com> References: <20240105184624.508603-1-dmitry.osipenko@collabora.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787277408151601169 X-GMAIL-MSGID: 1787277408151601169 Use atomic refcount_t helper for pages_use_count to optimize pin/unpin functions by skipping reservation locking while GEM's pin refcount > 1. Acked-by: Maxime Ripard Reviewed-by: Boris Brezillon Suggested-by: Boris Brezillon Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 33 +++++++++++-------------- drivers/gpu/drm/lima/lima_gem.c | 2 +- drivers/gpu/drm/panfrost/panfrost_mmu.c | 2 +- include/drm/drm_gem_shmem_helper.h | 2 +- 4 files changed, 18 insertions(+), 21 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 55b9dd3d4b18..cacf0f8c42e2 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -155,7 +155,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) if (shmem->pages) drm_gem_shmem_put_pages_locked(shmem); - drm_WARN_ON(obj->dev, shmem->pages_use_count); + drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count)); drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count)); dma_resv_unlock(shmem->base.resv); @@ -173,14 +173,13 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) dma_resv_assert_held(shmem->base.resv); - if (shmem->pages_use_count++ > 0) + if (refcount_inc_not_zero(&shmem->pages_use_count)) return 0; pages = drm_gem_get_pages(obj); if (IS_ERR(pages)) { drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n", PTR_ERR(pages)); - shmem->pages_use_count = 0; return PTR_ERR(pages); } @@ -196,6 +195,8 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) shmem->pages = pages; + refcount_set(&shmem->pages_use_count, 1); + return 0; } @@ -211,21 +212,17 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) dma_resv_assert_held(shmem->base.resv); - if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count)) - return; - - if (--shmem->pages_use_count > 0) - return; - + if (refcount_dec_and_test(&shmem->pages_use_count)) { #ifdef CONFIG_X86 - if (shmem->map_wc) - set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT); + if (shmem->map_wc) + set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT); #endif - drm_gem_put_pages(obj, 
shmem->pages, - shmem->pages_mark_dirty_on_put, - shmem->pages_mark_accessed_on_put); - shmem->pages = NULL; + drm_gem_put_pages(obj, shmem->pages, + shmem->pages_mark_dirty_on_put, + shmem->pages_mark_accessed_on_put); + shmem->pages = NULL; + } } EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked); @@ -552,8 +549,8 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma) * mmap'd, vm_open() just grabs an additional reference for the new * mm the vma is getting copied into (ie. on fork()). */ - if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count)) - shmem->pages_use_count++; + drm_WARN_ON_ONCE(obj->dev, + !refcount_inc_not_zero(&shmem->pages_use_count)); dma_resv_unlock(shmem->base.resv); @@ -641,7 +638,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem, return; drm_printf_indent(p, indent, "pages_pin_count=%u\n", refcount_read(&shmem->pages_pin_count)); - drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count); + drm_printf_indent(p, indent, "pages_use_count=%u\n", refcount_read(&shmem->pages_use_count)); drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count); drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr); } diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index 433bda72e59b..2a97aa85416b 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -47,7 +47,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm) } bo->base.pages = pages; - bo->base.pages_use_count = 1; + refcount_set(&bo->base.pages_use_count, 1); mapping_set_unevictable(mapping); } diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c index 1ab081bd81a8..bd5a0073009d 100644 --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c @@ -489,7 +489,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, goto err_unlock; } bo->base.pages = pages; - bo->base.pages_use_count = 1; + refcount_set(&bo->base.pages_use_count, 1); } else { pages = bo->base.pages; if (pages[page_offset]) { diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index c708a9f45cbd..2c5dc62df20c 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -37,7 +37,7 @@ struct drm_gem_shmem_object { * Reference count on the pages table. * The pages are put when the count reaches zero. 
 	 */
-	unsigned int pages_use_count;
+	refcount_t pages_use_count;
 
 	/**
 	 * @pages_pin_count:
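With pages_use_count converted to refcount_t, the next patch in the series adds a lockless drm_gem_shmem_get_pages() built on the classic refcount fast-path pattern: bump the count without the lock when it is already non-zero, and only take the reservation lock to serialize the 0 -> 1 transition. In essence (restated from the helper added below, with comments added):

static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
{
	int ret;

	/* Fast path: pages already allocated, no reservation lock needed. */
	if (refcount_inc_not_zero(&shmem->pages_use_count))
		return 0;

	/* Slow path: serialize the 0 -> 1 transition against other users. */
	dma_resv_lock(shmem->base.resv, NULL);
	ret = drm_gem_shmem_get_pages_locked(shmem);
	dma_resv_unlock(shmem->base.resv);

	return ret;
}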
From patchwork Fri Jan 5 18:46:03 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 185497
Received: from workpc..
(cola.collaboradmins.com [195.201.22.229]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madrid.collaboradmins.com (Postfix) with ESMTPSA id 934B33782047; Fri, 5 Jan 2024 18:46:57 +0000 (UTC) From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?utf-8?q?Christian_K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v19 09/30] drm/shmem-helper: Add and use lockless drm_gem_shmem_get_pages() Date: Fri, 5 Jan 2024 21:46:03 +0300 Message-ID: <20240105184624.508603-10-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com> References: <20240105184624.508603-1-dmitry.osipenko@collabora.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787277520874044664 X-GMAIL-MSGID: 1787277520874044664 Add lockless drm_gem_shmem_get_pages() helper that skips taking reservation lock if pages_use_count is non-zero, leveraging from atomicity of the refcount_t. Make drm_gem_shmem_mmap() to utilize the new helper. Acked-by: Maxime Ripard Reviewed-by: Boris Brezillon Suggested-by: Boris Brezillon Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 19 +++++++++++++++---- 1 file changed, 15 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index cacf0f8c42e2..1c032513abf1 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -226,6 +226,20 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) } EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked); +static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) +{ + int ret; + + if (refcount_inc_not_zero(&shmem->pages_use_count)) + return 0; + + dma_resv_lock(shmem->base.resv, NULL); + ret = drm_gem_shmem_get_pages_locked(shmem); + dma_resv_unlock(shmem->base.resv); + + return ret; +} + static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem) { int ret; @@ -609,10 +623,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct return ret; } - dma_resv_lock(shmem->base.resv, NULL); - ret = drm_gem_shmem_get_pages_locked(shmem); - dma_resv_unlock(shmem->base.resv); - + ret = drm_gem_shmem_get_pages(shmem); if (ret) return ret; From patchwork Fri Jan 5 18:46:04 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 185492 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7301:6f82:b0:100:9c79:88ff with SMTP id tb2csp6403431dyb; Fri, 5 Jan 2024 10:49:56 -0800 (PST) X-Google-Smtp-Source: AGHT+IHM5olw/tglaL8sWy6n8wyS6qGx4uCYPHU3s0FKv0xiLaUHnT2KPh63j4mEBgA5gbJoJx80 X-Received: by 2002:a2e:7008:0:b0:2c9:f288:cb15 with SMTP id l8-20020a2e7008000000b002c9f288cb15mr1489671ljc.63.1704480596556; Fri, 05 Jan 2024 10:49:56 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1704480596; cv=none; d=google.com; 
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 10/30] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
Date: Fri, 5 Jan 2024 21:46:04 +0300
Message-ID: <20240105184624.508603-11-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
References: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

Vmapped pages must be pinned in memory, and previously get/put_pages()
implicitly hard-pinned/unpinned them. This will no longer be the case once
the memory shrinker is added, because pages_use_count > 0 will no longer
determine whether pages are hard-pinned (they will only be soft-pinned),
while the new pages_pin_count will do the hard-pinning. Switch vmap/vunmap()
to the pin/unpin() functions in preparation for adding memory shrinker
support to drm-shmem.
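Conceptually, the change makes the kernel mapping take a hard pin instead of a bare pages reference, so a vmapped buffer can never be evicted or purged by the shrinker. A simplified restatement of the new vmap path (condensed from the diff below; the function name is invented, write-combine handling and the vmap_use_count bookkeeping are omitted):

static int shmem_vmap_pins_pages(struct drm_gem_shmem_object *shmem,
				 struct iosys_map *map)
{
	int ret;

	dma_resv_assert_held(shmem->base.resv);

	ret = drm_gem_shmem_pin_locked(shmem);	/* was get_pages_locked() */
	if (ret)
		return ret;

	shmem->vaddr = vmap(shmem->pages, shmem->base.size >> PAGE_SHIFT,
			    VM_MAP, PAGE_KERNEL);
	if (!shmem->vaddr) {
		drm_gem_shmem_unpin_locked(shmem);	/* was put_pages_locked() */
		return -ENOMEM;
	}

	iosys_map_set_vaddr(map, shmem->vaddr);
	return 0;
}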
Acked-by: Maxime Ripard Reviewed-by: Boris Brezillon Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 19 ++++++++++++------- include/drm/drm_gem_shmem_helper.h | 2 +- 2 files changed, 13 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 1c032513abf1..9c89183f81b7 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -256,6 +256,14 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem) return ret; } +static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem) +{ + dma_resv_assert_held(shmem->base.resv); + + if (refcount_dec_and_test(&shmem->pages_pin_count)) + drm_gem_shmem_put_pages_locked(shmem); +} + /** * drm_gem_shmem_pin - Pin backing pages for a shmem GEM object * @shmem: shmem GEM object @@ -303,10 +311,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem) return; dma_resv_lock(shmem->base.resv, NULL); - - if (refcount_dec_and_test(&shmem->pages_pin_count)) - drm_gem_shmem_put_pages_locked(shmem); - + drm_gem_shmem_unpin_locked(shmem); dma_resv_unlock(shmem->base.resv); } EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin); @@ -344,7 +349,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, return 0; } - ret = drm_gem_shmem_get_pages_locked(shmem); + ret = drm_gem_shmem_pin_locked(shmem); if (ret) goto err_zero_use; @@ -367,7 +372,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, err_put_pages: if (!obj->import_attach) - drm_gem_shmem_put_pages_locked(shmem); + drm_gem_shmem_unpin_locked(shmem); err_zero_use: shmem->vmap_use_count = 0; @@ -404,7 +409,7 @@ void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, return; vunmap(shmem->vaddr); - drm_gem_shmem_put_pages_locked(shmem); + drm_gem_shmem_unpin_locked(shmem); } shmem->vaddr = NULL; diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index 2c5dc62df20c..80623b897803 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -124,7 +124,7 @@ int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv); static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem) { return (shmem->madv > 0) && - !shmem->vmap_use_count && shmem->sgt && + !refcount_read(&shmem->pages_pin_count) && shmem->sgt && !shmem->base.dma_buf && !shmem->base.import_attach; } From patchwork Fri Jan 5 18:46:05 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 185493 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7301:6f82:b0:100:9c79:88ff with SMTP id tb2csp6403605dyb; Fri, 5 Jan 2024 10:50:17 -0800 (PST) X-Google-Smtp-Source: AGHT+IE2AseaMTFHPU7USWvWGJtsoag+Wcs3qdezNH0mtG7W24oPizC/RB34VhDuzVr6khBwY4yd X-Received: by 2002:a17:90b:3846:b0:28b:ddff:7127 with SMTP id nl6-20020a17090b384600b0028bddff7127mr2134315pjb.36.1704480616892; Fri, 05 Jan 2024 10:50:16 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1704480616; cv=none; d=google.com; s=arc-20160816; b=Bfz4n4cPF22koIkcyhj6yJnZ59a2Zit+o1Hrwa9ZjKlMG6b7P5eMhSS9qZA4f63PYi pW9xaT/WX+VdDolCKKfY+4KEFdlVoYi04OaG+v9kWcx68liutol3k8vcMS/MzGKSxZfz AOMH6hJnXPKBa6iM4mMGOHxIrw3zxwQwrzQiRg0SF/yDr4gR75fXp++W5A8KAmrxgJnq EY1W8EDsSwt47B5V7D9DcZFkZVLKpvsJpLArwS8zQE6tXqBMYmkqQuEtGJbDTgk5EnN4 trH55sjvFr7OetsgkzRUPTfI0iIEny0cDPPiff4sg6VmYCDSYOTZnUuBnflK9HMwGUbW Ksug== 
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 11/30] drm/shmem-helper: Use refcount_t for vmap_use_count
Date: Fri, 5 Jan 2024 21:46:05 +0300
Message-ID: <20240105184624.508603-12-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
References: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

Use the refcount_t helper for vmap_use_count to make the refcounting
consistent with pages_use_count and pages_pin_count, which already use
refcount_t. This also lets vmapping benefit from refcount_t's overflow
checks.
Acked-by: Maxime Ripard Reviewed-by: Boris Brezillon Suggested-by: Boris Brezillon Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 28 +++++++++++--------------- include/drm/drm_gem_shmem_helper.h | 2 +- 2 files changed, 13 insertions(+), 17 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 9c89183f81b7..3403700780c3 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -144,7 +144,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) } else { dma_resv_lock(shmem->base.resv, NULL); - drm_WARN_ON(obj->dev, shmem->vmap_use_count); + drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count)); if (shmem->sgt) { dma_unmap_sgtable(obj->dev->dev, shmem->sgt, @@ -344,23 +344,25 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, dma_resv_assert_held(shmem->base.resv); - if (shmem->vmap_use_count++ > 0) { + if (refcount_inc_not_zero(&shmem->vmap_use_count)) { iosys_map_set_vaddr(map, shmem->vaddr); return 0; } ret = drm_gem_shmem_pin_locked(shmem); if (ret) - goto err_zero_use; + return ret; if (shmem->map_wc) prot = pgprot_writecombine(prot); shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT, VM_MAP, prot); - if (!shmem->vaddr) + if (!shmem->vaddr) { ret = -ENOMEM; - else + } else { iosys_map_set_vaddr(map, shmem->vaddr); + refcount_set(&shmem->vmap_use_count, 1); + } } if (ret) { @@ -373,8 +375,6 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, err_put_pages: if (!obj->import_attach) drm_gem_shmem_unpin_locked(shmem); -err_zero_use: - shmem->vmap_use_count = 0; return ret; } @@ -402,14 +402,10 @@ void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, } else { dma_resv_assert_held(shmem->base.resv); - if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count)) - return; - - if (--shmem->vmap_use_count > 0) - return; - - vunmap(shmem->vaddr); - drm_gem_shmem_unpin_locked(shmem); + if (refcount_dec_and_test(&shmem->vmap_use_count)) { + vunmap(shmem->vaddr); + drm_gem_shmem_unpin_locked(shmem); + } } shmem->vaddr = NULL; @@ -655,7 +651,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem, drm_printf_indent(p, indent, "pages_pin_count=%u\n", refcount_read(&shmem->pages_pin_count)); drm_printf_indent(p, indent, "pages_use_count=%u\n", refcount_read(&shmem->pages_use_count)); - drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count); + drm_printf_indent(p, indent, "vmap_use_count=%u\n", refcount_read(&shmem->vmap_use_count)); drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr); } EXPORT_SYMBOL_GPL(drm_gem_shmem_print_info); diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index 80623b897803..18020f653d7e 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -82,7 +82,7 @@ struct drm_gem_shmem_object { * Reference count on the virtual address. * The address are un-mapped when the count reaches zero. 
 	 */
-	unsigned int vmap_use_count;
+	refcount_t vmap_use_count;
 
 	/**
 	 * @pages_mark_dirty_on_put:
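With vmap_use_count as a refcount_t, the teardown side of the patch above collapses to a single refcount_dec_and_test(): the last vunmap reference unmaps the kernel mapping and drops the hard pin taken at vmap time. A simplified restatement (the function name is invented; the dma-buf import path and other details are omitted):

static void shmem_vunmap_drops_pin(struct drm_gem_shmem_object *shmem)
{
	dma_resv_assert_held(shmem->base.resv);

	if (refcount_dec_and_test(&shmem->vmap_use_count)) {
		vunmap(shmem->vaddr);			/* tear down the kernel mapping */
		drm_gem_shmem_unpin_locked(shmem);	/* drop the vmap hard pin */
		shmem->vaddr = NULL;
	}
}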
(cola.collaboradmins.com [195.201.22.229]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madrid.collaboradmins.com (Postfix) with ESMTPSA id 6B0083782042; Fri, 5 Jan 2024 18:47:02 +0000 (UTC) From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?utf-8?q?Christian_K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v19 12/30] drm/shmem-helper: Prepare drm_gem_shmem_free() to shrinker addition Date: Fri, 5 Jan 2024 21:46:06 +0300 Message-ID: <20240105184624.508603-13-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com> References: <20240105184624.508603-1-dmitry.osipenko@collabora.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787277478999070334 X-GMAIL-MSGID: 1787277478999070334 Prepare drm_gem_shmem_free() to addition of memory shrinker support to drm-shmem by adding and using variant of put_pages() that doesn't touch reservation lock. Reservation shouldn't be touched because lockdep will trigger a bogus warning about locking contention with fs_reclaim code paths that can't happen during the time when GEM is freed and lockdep doesn't know about that. Signed-off-by: Dmitry Osipenko Reviewed-by: Boris Brezillon --- drivers/gpu/drm/drm_gem_shmem_helper.c | 40 ++++++++++++++------------ 1 file changed, 21 insertions(+), 19 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 3403700780c3..799a3c5015ad 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -128,6 +128,22 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t } EXPORT_SYMBOL_GPL(drm_gem_shmem_create); +static void +drm_gem_shmem_free_pages(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + +#ifdef CONFIG_X86 + if (shmem->map_wc) + set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT); +#endif + + drm_gem_put_pages(obj, shmem->pages, + shmem->pages_mark_dirty_on_put, + shmem->pages_mark_accessed_on_put); + shmem->pages = NULL; +} + /** * drm_gem_shmem_free - Free resources associated with a shmem GEM object * @shmem: shmem GEM object to free @@ -142,8 +158,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) if (obj->import_attach) { drm_prime_gem_destroy(obj, shmem->sgt); } else { - dma_resv_lock(shmem->base.resv, NULL); - drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count)); if (shmem->sgt) { @@ -152,13 +166,12 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) sg_free_table(shmem->sgt); kfree(shmem->sgt); } - if (shmem->pages) - drm_gem_shmem_put_pages_locked(shmem); + if (shmem->pages && + refcount_dec_and_test(&shmem->pages_use_count)) + drm_gem_shmem_free_pages(shmem); drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count)); drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count)); - - dma_resv_unlock(shmem->base.resv); } 
drm_gem_object_release(obj); @@ -208,21 +221,10 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) */ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) { - struct drm_gem_object *obj = &shmem->base; - dma_resv_assert_held(shmem->base.resv); - if (refcount_dec_and_test(&shmem->pages_use_count)) { -#ifdef CONFIG_X86 - if (shmem->map_wc) - set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT); -#endif - - drm_gem_put_pages(obj, shmem->pages, - shmem->pages_mark_dirty_on_put, - shmem->pages_mark_accessed_on_put); - shmem->pages = NULL; - } + if (refcount_dec_and_test(&shmem->pages_use_count)) + drm_gem_shmem_free_pages(shmem); } EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked); From patchwork Fri Jan 5 18:46:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 185495 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7301:6f82:b0:100:9c79:88ff with SMTP id tb2csp6403766dyb; Fri, 5 Jan 2024 10:50:40 -0800 (PST) X-Google-Smtp-Source: AGHT+IGciJOsMR64P8m4JqtxeQ1F1EvMz4bVMei5FF7G0CPjcuUtndCLpr7XlBMC/N3Tu+kmBbb4 X-Received: by 2002:a17:902:e743:b0:1d3:485a:833d with SMTP id p3-20020a170902e74300b001d3485a833dmr3560293plf.39.1704480639966; Fri, 05 Jan 2024 10:50:39 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1704480639; cv=none; d=google.com; s=arc-20160816; b=esqpe8/X6XTIiajQLdtHcNY0BNZ2DmgQtsZV9ow+GasiqG7I4ONM6oYAqHf33/E0nK jcfS79oOqDzvJPi2oCaDiWVTTmBU6HJu2TjKdt3nHK5jIZwPoCem4U7PFYwVFR9MgQHg RyjgkWbESCuaJkrcL2GHNW2IwqN0/UNw2KvJT4m3YjVdn6PfhY9Cn6KXg/L2DuC1lZ2Y pRU7Et0MF7KZY+7QiehyHRwN68CSCb7aTLE94uwSLRTG4IJWbeCyjkmMJcgfGrtNvFVi SCyPfkDgQuqikSO+FJnvQYui5wF0ah/Z+JFwdCl0p6Bdi0jbtwIKU4t14zIE7C/t5ewC 0F5g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=L3/d1THGEL44Ziw2nkhvhXG7wbk+5hEUSLEiMzWWVUA=; fh=5cel2jD5h+yPMXVwxbomyVhwojHUqATy6nFjd4aOh4o=; b=Iu5aS+ajO24D3tuRU53HTu8f2gJwVZmQwcDsr5pNHMr22B6V6ZY5NGNsYVtM2t0rTh iLuPiVoUabAZTUfLqoSRih5EYe7IJzGQHod4Y/k2VN21c4q4aK+pfr67t1XgtO9CjefM zxLRwqn0x66XVb8sghxWOSPJ0kWm/cvDfVDsM/zC+Qkb1ahQVBGVHWiowOoEjwridTwB xeF8BZ8v5svca7vdG15pqSof1ZYahi1aoyx27ZvsUztaVhiiPTP7mjEmyeWtnWKYq51k 2Kp5fYVVeXvtkmaC015L/25GE+5769HdAL8YiAL504aikKzJAqV2VlkBKWQrKxZvd/k7 H58g== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@collabora.com header.s=mail header.b=PLJ35Kr9; spf=pass (google.com: domain of linux-kernel+bounces-18251-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:45e3:2400::1 as permitted sender) smtp.mailfrom="linux-kernel+bounces-18251-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=QUARANTINE sp=QUARANTINE dis=NONE) header.from=collabora.com Received: from sv.mirrors.kernel.org (sv.mirrors.kernel.org. 
(cola.collaboradmins.com [195.201.22.229]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madrid.collaboradmins.com (Postfix) with ESMTPSA id 0A6B73782056; Fri, 5 Jan 2024 18:47:03 +0000 (UTC) From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?utf-8?q?Christian_K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v19 13/30] drm/shmem-helper: Make drm_gem_shmem_get_pages() public Date: Fri, 5 Jan 2024 21:46:07 +0300 Message-ID: <20240105184624.508603-14-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com> References: <20240105184624.508603-1-dmitry.osipenko@collabora.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787277491434510989 X-GMAIL-MSGID: 1787277491434510989 We're going to move away from having implicit get_pages() done by get_pages_sgt() to simplify refcnt handling. Drivers will manage get/put_pages() by themselves. Expose the drm_gem_shmem_get_pages() in a public drm-shmem API. Signed-off-by: Dmitry Osipenko Reviewed-by: Boris Brezillon --- drivers/gpu/drm/drm_gem_shmem_helper.c | 10 +++++++++- include/drm/drm_gem_shmem_helper.h | 1 + 2 files changed, 10 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 799a3c5015ad..dc416a4bce1b 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -228,7 +228,14 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) } EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked); -static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) +/* + * drm_gem_shmem_get_pages - Increase use count on the backing pages for a shmem GEM object + * @shmem: shmem GEM object + * + * This function Increases the use count and allocates the backing pages if + * use-count equals to zero. 
+ */ +int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) { int ret; @@ -241,6 +248,7 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) return ret; } +EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages); static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem) { diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index 18020f653d7e..6dedc0739fbc 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -110,6 +110,7 @@ struct drm_gem_shmem_object { struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size); void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem); +int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem); void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem); int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem); void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem); From patchwork Fri Jan 5 18:46:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 185501 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7301:6f82:b0:100:9c79:88ff with SMTP id tb2csp6404456dyb; Fri, 5 Jan 2024 10:52:26 -0800 (PST) X-Google-Smtp-Source: AGHT+IHxHTaaFhqkUx/FZGPjpbDjXAMXoKyGWPTf/F8vlssvIf63Tk1Y+Fj3UKCqlhMDdaDGB/Ve X-Received: by 2002:a17:903:a87:b0:1d4:ed47:5c1f with SMTP id mo7-20020a1709030a8700b001d4ed475c1fmr944138plb.16.1704480745855; Fri, 05 Jan 2024 10:52:25 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1704480745; cv=none; d=google.com; s=arc-20160816; b=ds8lr1eVKgrLcnGMF3bllXEvCDyi0F5f4axh1MJqnViWExszlHK5ZvFNOyRlklC2Lr E2ajQNACkZGXragaStWcJqRgiOEj7VC3fV+vHv1DYAuEdabAGRylhEUhVE2NN602kfFl AY6ImMCMPqZ8tXIP4RCueiI0NrCEO8QitXRF32NvrPlPnIRqAX+8FNVTuikxmLq0rVM3 vXdjSLoEtttbVx/30uQK+HResmtKmKjThjrK+B8E4UivoidOr4OCICynM6c2mBcGE9wC NpzZfrRov8lb/SDupt2elGyR1Lv0vd7F5/p1pHaaAYJyu3dAf2nPo/Ebhu9Y7UQUB6q+ MWmQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=RHs0jcKcyA1A80ogPHsrtOPkZ33OKW8HyQsVPvJCA24=; fh=5cel2jD5h+yPMXVwxbomyVhwojHUqATy6nFjd4aOh4o=; b=JTDO9Zmh7GFFspqo2BgDBsQXiWLiz7+VRb3uA7tXz6sNzwwYOBZJjpMBc/Z5GnnbQR tMuUGIr+gueaTmQ4hdFvFb48dDVclrYZJAWJqDZlhcAhzcezdastm24iQqNhdLJD8vVY UOhpzUc1cfxj5kePGNnbrJEaXLatHPqFa72OwSOP0HnAgN5mnVj+SWn8AeN+pvwQsZVa KzKJmYgpuFf8lCt7VcMYNAwfOTLM3lir6X7FuPQUcblUI1RSRI6Xfz0dtTe+jiTHStC5 LBSZ5axJi8vDKddzKkwEmHbYwtwGAHIW6xf6hlO+A7jZVwlOwdBXQXM1hvjzAQxrzp58 2O7Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@collabora.com header.s=mail header.b=kg85oLbd; spf=pass (google.com: domain of linux-kernel+bounces-18252-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.48.161 as permitted sender) smtp.mailfrom="linux-kernel+bounces-18252-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=QUARANTINE sp=QUARANTINE dis=NONE) header.from=collabora.com Received: from sy.mirrors.kernel.org (sy.mirrors.kernel.org. 
(cola.collaboradmins.com [195.201.22.229]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madrid.collaboradmins.com (Postfix) with ESMTPSA id 9FD0F378205C; Fri, 5 Jan 2024 18:47:05 +0000 (UTC) From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?utf-8?q?Christian_K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v19 14/30] drm/shmem-helper: Add drm_gem_shmem_put_pages() Date: Fri, 5 Jan 2024 21:46:08 +0300 Message-ID: <20240105184624.508603-15-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com> References: <20240105184624.508603-1-dmitry.osipenko@collabora.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787277602641506491 X-GMAIL-MSGID: 1787277602641506491 We're going to move away from having implicit get_pages() done by get_pages_sgt() to ease simplify refcnt handling. Drivers will manage get/put_pages() by themselves. Add drm_gem_shmem_put_pages(). Signed-off-by: Dmitry Osipenko Reviewed-by: Boris Brezillon --- drivers/gpu/drm/drm_gem_shmem_helper.c | 20 ++++++++++++++++++++ include/drm/drm_gem_shmem_helper.h | 1 + 2 files changed, 21 insertions(+) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index dc416a4bce1b..f5ed64f78648 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -218,6 +218,7 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) * @shmem: shmem GEM object * * This function decreases the use count and puts the backing pages when use drops to zero. + * Caller must hold GEM's reservation lock. */ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) { @@ -228,6 +229,25 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) } EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked); +/* + * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object + * @shmem: shmem GEM object + * + * This function decreases the use count and puts the backing pages when use drops to zero. + * It's unlocked version of drm_gem_shmem_put_pages_locked(), caller must not hold + * GEM's reservation lock. 
+ */ +void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem) +{ + if (refcount_dec_not_one(&shmem->pages_use_count)) + return; + + dma_resv_lock(shmem->base.resv, NULL); + drm_gem_shmem_put_pages_locked(shmem); + dma_resv_unlock(shmem->base.resv); +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages); + /* * drm_gem_shmem_get_pages - Increase use count on the backing pages for a shmem GEM object * @shmem: shmem GEM object diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index 6dedc0739fbc..525480488451 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -111,6 +111,7 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem); int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem); +void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem); void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem); int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem); void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);

From patchwork Fri Jan 5 18:46:09 2024
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 185496
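With drm_gem_shmem_get_pages() and drm_gem_shmem_put_pages() both exported by the two patches above, a driver that builds an sgt is expected to hold its own pages reference for as long as the sgt is in use. A minimal sketch of the intended calling pattern (illustrative only; the lima and panfrost patches later in the series are the real users):

	struct sg_table *sgt;
	int err;

	/* Take an explicit pages reference before creating the sgt. */
	err = drm_gem_shmem_get_pages(shmem);
	if (err)
		return err;

	sgt = drm_gem_shmem_get_pages_sgt(shmem);
	if (IS_ERR(sgt)) {
		drm_gem_shmem_put_pages(shmem);
		return PTR_ERR(sgt);
	}

	/* ... use the sgt; the pages stay around while the reference is held ... */

	/* At teardown time, drop the reference taken above. */
	drm_gem_shmem_put_pages(shmem);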
(cola.collaboradmins.com [195.201.22.229]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madrid.collaboradmins.com (Postfix) with ESMTPSA id 37097378203D; Fri, 5 Jan 2024 18:47:07 +0000 (UTC) From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?utf-8?q?Christian_K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v19 15/30] drm/shmem-helper: Avoid lockdep warning when pages are released Date: Fri, 5 Jan 2024 21:46:09 +0300 Message-ID: <20240105184624.508603-16-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com> References: <20240105184624.508603-1-dmitry.osipenko@collabora.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787277520150007462 X-GMAIL-MSGID: 1787277520150007462 All drivers will be moved to get/put pages explicitly and then the last put_pages() will be invoked during gem_free() time by some drivers. We can't touch reservation lock when GEM is freed because that will cause a spurious warning from lockdep when shrinker support will be added. Lockdep doesn't know that fs_reclaim isn't functioning for a freed object, and thus, can't deadlock. Release pages directly without taking reservation lock if GEM is freed and its refcount is zero. Signed-off-by: Dmitry Osipenko Reviewed-by: Boris Brezillon --- drivers/gpu/drm/drm_gem_shmem_helper.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index f5ed64f78648..c7357110ca76 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -242,6 +242,22 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem) if (refcount_dec_not_one(&shmem->pages_use_count)) return; + /* + * Destroying the object is a special case because acquiring + * the obj lock can cause a locking order inversion between + * reservation_ww_class_mutex and fs_reclaim. + * + * This deadlock is not actually possible, because no one should + * be already holding the lock when GEM is released. Unfortunately + * lockdep is not aware of this detail. So when the refcount drops + * to zero, we pretend it is already locked. 
+ */ + if (!kref_read(&shmem->base.refcount)) { + if (refcount_dec_and_test(&shmem->pages_use_count)) + drm_gem_shmem_free_pages(shmem); + return; + } + dma_resv_lock(shmem->base.resv, NULL); drm_gem_shmem_put_pages_locked(shmem); dma_resv_unlock(shmem->base.resv);

From patchwork Fri Jan 5 18:46:10 2024
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 185498
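From a driver's point of view, the check above keeps drm_gem_shmem_put_pages() safe to call from the GEM free path, where the object's refcount has already dropped to zero and taking the reservation lock would only trigger the bogus lockdep report described in the commit message. A sketch of such a free callback (my_gem_free() is a hypothetical stand-in; the lima patch below does essentially this):

	static void my_gem_free(struct drm_gem_object *obj)
	{
		struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

		/* Only drop a pages reference if the driver actually took one
		 * for this BO. obj->refcount is already zero here, so
		 * put_pages() releases the pages directly instead of taking
		 * the reservation lock. */
		if (refcount_read(&shmem->pages_use_count))
			drm_gem_shmem_put_pages(shmem);

		drm_gem_shmem_free(shmem);
	}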
(cola.collaboradmins.com [195.201.22.229]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madrid.collaboradmins.com (Postfix) with ESMTPSA id C4A6F3782046; Fri, 5 Jan 2024 18:47:08 +0000 (UTC) From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?utf-8?q?Christian_K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v19 16/30] drm/lima: Explicitly get and put drm-shmem pages Date: Fri, 5 Jan 2024 21:46:10 +0300 Message-ID: <20240105184624.508603-17-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com> References: <20240105184624.508603-1-dmitry.osipenko@collabora.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787277535061759396 X-GMAIL-MSGID: 1787277535061759396 To simplify the drm-shmem refcnt handling, we're moving away from the implicit get_pages() that is used by get_pages_sgt(). From now on drivers will have to pin pages while they use sgt. Lima driver doesn't have shrinker, hence pages are pinned and sgt is valid as long as pages' use-count > 0. Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/lima/lima_gem.c | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index 2a97aa85416b..9c3e34a7fbed 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -115,6 +115,7 @@ int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file, return PTR_ERR(shmem); obj = &shmem->base; + bo = to_lima_bo(obj); /* Mali Utgard GPU can only support 32bit address space */ mask = mapping_gfp_mask(obj->filp->f_mapping); @@ -123,13 +124,17 @@ int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file, mapping_set_gfp_mask(obj->filp->f_mapping, mask); if (is_heap) { - bo = to_lima_bo(obj); err = lima_heap_alloc(bo, NULL); if (err) goto out; } else { - struct sg_table *sgt = drm_gem_shmem_get_pages_sgt(shmem); + struct sg_table *sgt; + err = drm_gem_shmem_get_pages(shmem); + if (err) + goto out; + + sgt = drm_gem_shmem_get_pages_sgt(shmem); if (IS_ERR(sgt)) { err = PTR_ERR(sgt); goto out; @@ -139,6 +144,9 @@ int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file, err = drm_gem_handle_create(file, obj, handle); out: + if (err && refcount_read(&bo->base.pages_use_count)) + drm_gem_shmem_put_pages(shmem); + /* drop reference from allocate - handle holds it now */ drm_gem_object_put(obj); @@ -152,6 +160,9 @@ static void lima_gem_free_object(struct drm_gem_object *obj) if (!list_empty(&bo->va)) dev_err(obj->dev->dev, "lima gem free bo still has va\n"); + if (refcount_read(&bo->base.pages_use_count)) + drm_gem_shmem_put_pages(&bo->base); + drm_gem_shmem_free(&bo->base); } From patchwork Fri Jan 5 18:46:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 185499 Return-Path: 
(cola.collaboradmins.com [195.201.22.229]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madrid.collaboradmins.com (Postfix) with ESMTPSA id 674583782054; Fri, 5 Jan 2024 18:47:10 +0000 (UTC) From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?utf-8?q?Christian_K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v19 17/30] drm/panfrost: Fix the error path in panfrost_mmu_map_fault_addr() Date: Fri, 5 Jan 2024 21:46:11 +0300 Message-ID: <20240105184624.508603-18-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com> References: <20240105184624.508603-1-dmitry.osipenko@collabora.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787277551093707101 X-GMAIL-MSGID: 1787277551093707101 From: Boris Brezillon If some the pages or sgt allocation failed, we shouldn't release the pages ref we got earlier, otherwise we will end up with unbalanced get/put_pages() calls. We should instead leave everything in place and let the BO release function deal with extra cleanup when the object is destroyed, or let the fault handler try again next time it's called. Fixes: 187d2929206e ("drm/panfrost: Add support for GPU heap allocations") Cc: Signed-off-by: Boris Brezillon Co-developed-by: Dmitry Osipenko Signed-off-by: Dmitry Osipenko Reviewed-by: AngeloGioacchino Del Regno --- drivers/gpu/drm/panfrost/panfrost_mmu.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c index bd5a0073009d..4a0b4bf03f1a 100644 --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c @@ -502,11 +502,18 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, mapping_set_unevictable(mapping); for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) { + /* Can happen if the last fault only partially filled this + * section of the pages array before failing. In that case + * we skip already filled pages. 
+ */ + if (pages[i]) + continue; + pages[i] = shmem_read_mapping_page(mapping, i); if (IS_ERR(pages[i])) { ret = PTR_ERR(pages[i]); pages[i] = NULL; - goto err_pages; + goto err_unlock; } } @@ -514,7 +521,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, ret = sg_alloc_table_from_pages(sgt, pages + page_offset, NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL); if (ret) - goto err_pages; + goto err_unlock; ret = dma_map_sgtable(pfdev->dev, sgt, DMA_BIDIRECTIONAL, 0); if (ret) @@ -537,8 +544,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, err_map: sg_free_table(sgt); -err_pages: - drm_gem_shmem_put_pages_locked(&bo->base); err_unlock: dma_resv_unlock(obj->resv); err_bo: From patchwork Fri Jan 5 18:46:12 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 185503 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7301:6f82:b0:100:9c79:88ff with SMTP id tb2csp6404775dyb; Fri, 5 Jan 2024 10:53:15 -0800 (PST) X-Google-Smtp-Source: AGHT+IGSrsM2C7HveDaAvO86kKeNBGMuuVHCzI5bYZsZ5G7VrDcFAiY5Y9fq0gDA5K1hGy3vRG/+ X-Received: by 2002:a17:902:780e:b0:1d4:19c6:dfff with SMTP id p14-20020a170902780e00b001d419c6dfffmr2253790pll.20.1704480795382; Fri, 05 Jan 2024 10:53:15 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1704480795; cv=none; d=google.com; s=arc-20160816; b=pyuWbT43Jfn/WH8B4uLeCsGD6/rUE86AykoTGzxbBSs8psRjUaj6VYeo/xMEjOMRm0 BKUNjHMgOTNPekypz/1WFtvnH6D+Wn1Qv3E2jAAd7eA2j5BEgAnTLLygStSMr4df9Zqb 3u/Rf26UL0TH9X/RVpGkqONyA7zG5bQ2N18PzXm9u7UxsWMd28QmRFRoVi4cW8CPPc2Q w7nifSDUhs1cZ/NHOvixfOpFB7c+t3ijVaR1O7ZYaYSN7qIK1k9ixNHQ/b6kfWUlAsfq zml4uYF5FLTkxK8BMkqb3bH40AjceylO2hkwmXzr7CCxhdFBG2s0uE5oFdzmt+3h8heg 1ZGg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=WGepiqYKsfgCh/MRMXAkiatkEzgEkVPz/Pk3Kp3B3dg=; fh=5cel2jD5h+yPMXVwxbomyVhwojHUqATy6nFjd4aOh4o=; b=vXz/z6tgyAprN8HVWJpYwnoCKsDRJIDmN8Xzba5OwBRv16wJSca1Vd1MzamiCGcoZ7 J3mLbvMx4+3U8qcBSTy01axYlHI4t/lNU/N7uj5bALrSMVpwcsqXZNGH/uUnneyFb382 H4BbJYtNpmpu5r6GsZbCGeuB/wkzh0twNEODnc58QEJbtQjg34/xvE+M8DDgoASoeCrK nvaX2EFowlxWark6IxsjdeNsVQde75g3TeXslH3/ZyhDOt9bXpTJZFmOXCXtqyNrA0fM CxR9tmZnFUg4UkodoW06592TCqtaQ5c/xVtWTDiwdrGxLa4ycO+RJDNvaPBDQSuBV+aD 9siA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@collabora.com header.s=mail header.b=eL0GBCOi; spf=pass (google.com: domain of linux-kernel+bounces-18256-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.48.161 as permitted sender) smtp.mailfrom="linux-kernel+bounces-18256-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=QUARANTINE sp=QUARANTINE dis=NONE) header.from=collabora.com Received: from sy.mirrors.kernel.org (sy.mirrors.kernel.org. 
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 18/30] drm/panfrost: Explicitly get and put drm-shmem pages
Date: Fri, 5 Jan 2024 21:46:12 +0300
Message-ID: <20240105184624.508603-19-dmitry.osipenko@collabora.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
References: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

To simplify the drm-shmem refcnt handling, we're moving away from the
implicit get_pages() done by get_pages_sgt(). From now on, drivers will
have to pin the pages while they use the sgt. Panfrost's shrinker doesn't
support swapping out BOs, hence pages are pinned and the sgt stays valid
as long as the pages' use-count > 0.

In Panfrost, panfrost_gem_mapping, the object representing a GPU mapping
of a BO, owns a pages ref. This guarantees that any BO mapped on the GPU
side has its pages retained until the mapping is destroyed.

Since pages are no longer guaranteed to stay pinned for the BO's lifetime,
and the MADVISE(DONT_NEED) flagging remains after the GEM handle has been
destroyed, we need an extra 'is_purgeable' check in panfrost_gem_purge()
to make sure we're not trying to purge a BO that has already had its
pages released.
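A condensed sketch of the shrinker-side consequence described above (my_purge() is a hypothetical stand-in; the real change to panfrost_gem_purge() is in the hunk below):

	static bool my_purge(struct drm_gem_shmem_object *shmem)
	{
		bool ret = false;

		if (!dma_resv_trylock(shmem->base.resv))
			return false;

		/* The pages reference owned by the GPU mapping may already be
		 * gone even though the BO is still flagged as don't-need by
		 * userspace, so re-check purgeability under the lock. */
		if (drm_gem_shmem_is_purgeable(shmem)) {
			drm_gem_shmem_purge_locked(shmem);
			ret = true;
		}

		dma_resv_unlock(shmem->base.resv);
		return ret;
	}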
Signed-off-by: Dmitry Osipenko Reviewed-by: Boris Brezillon Reviewed-by: Steven Price --- drivers/gpu/drm/panfrost/panfrost_gem.c | 63 ++++++++++++++----- .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 6 ++ 2 files changed, 52 insertions(+), 17 deletions(-) diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c index f268bd5c2884..7edfc12f7c1f 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c @@ -35,20 +35,6 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj) */ WARN_ON_ONCE(!list_empty(&bo->mappings.list)); - if (bo->sgts) { - int i; - int n_sgt = bo->base.base.size / SZ_2M; - - for (i = 0; i < n_sgt; i++) { - if (bo->sgts[i].sgl) { - dma_unmap_sgtable(pfdev->dev, &bo->sgts[i], - DMA_BIDIRECTIONAL, 0); - sg_free_table(&bo->sgts[i]); - } - } - kvfree(bo->sgts); - } - drm_gem_shmem_free(&bo->base); } @@ -85,11 +71,40 @@ panfrost_gem_teardown_mapping(struct panfrost_gem_mapping *mapping) static void panfrost_gem_mapping_release(struct kref *kref) { - struct panfrost_gem_mapping *mapping; - - mapping = container_of(kref, struct panfrost_gem_mapping, refcount); + struct panfrost_gem_mapping *mapping = + container_of(kref, struct panfrost_gem_mapping, refcount); + struct panfrost_gem_object *bo = mapping->obj; + struct panfrost_device *pfdev = bo->base.base.dev->dev_private; panfrost_gem_teardown_mapping(mapping); + + /* On heap BOs, release the sgts created in the fault handler path. */ + if (bo->sgts) { + int i, n_sgt = bo->base.base.size / SZ_2M; + + for (i = 0; i < n_sgt; i++) { + if (bo->sgts[i].sgl) { + dma_unmap_sgtable(pfdev->dev, &bo->sgts[i], + DMA_BIDIRECTIONAL, 0); + sg_free_table(&bo->sgts[i]); + } + } + kvfree(bo->sgts); + } + + /* Pages ref is owned by the panfrost_gem_mapping object. We must + * release our pages ref (if any), before releasing the object + * ref. + * Non-heap BOs acquired the pages at panfrost_gem_mapping creation + * time, and heap BOs may have acquired pages if the fault handler + * was called, in which case bo->sgts should be non-NULL. + */ + if (!bo->base.base.import_attach && (!bo->is_heap || bo->sgts) && + bo->base.madv >= 0) { + drm_gem_shmem_put_pages(&bo->base); + bo->sgts = NULL; + } + drm_gem_object_put(&mapping->obj->base.base); panfrost_mmu_ctx_put(mapping->mmu); kfree(mapping); @@ -125,6 +140,20 @@ int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv) if (!mapping) return -ENOMEM; + if (!bo->is_heap && !bo->base.base.import_attach) { + /* Pages ref is owned by the panfrost_gem_mapping object. + * For non-heap BOs, we request pages at mapping creation + * time, such that the panfrost_mmu_map() call, further down in + * this function, is guaranteed to have pages_use_count > 0 + * when drm_gem_shmem_get_pages_sgt() is called. 
+	 */
+		ret = drm_gem_shmem_get_pages(&bo->base);
+		if (ret) {
+			kfree(mapping);
+			return ret;
+		}
+	}
+
 	INIT_LIST_HEAD(&mapping->node);
 	kref_init(&mapping->refcount);
 	drm_gem_object_get(obj);
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index 02b60ea1433a..d4fb0854cf2f 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -50,6 +50,12 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
 	if (!dma_resv_trylock(shmem->base.resv))
 		goto unlock_mappings;
 
+	/* BO might have become unpurgeable if the last pages_use_count ref
+	 * was dropped, but the BO hasn't been destroyed yet.
+	 */
+	if (!drm_gem_shmem_is_purgeable(shmem))
+		goto unlock_mappings;
+
 	panfrost_gem_teardown_mappings_locked(bo);
 	drm_gem_shmem_purge_locked(&bo->base);
 	ret = true;
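The shrinker-side hunk above re-validates purgeability after taking the
reservation lock, because the BO state can change between the unlocked list
walk and the locked purge. A condensed sketch of that pattern, using a
hypothetical my_gem_try_purge() wrapper around the real helpers:

/* Sketch only: my_gem_try_purge() is hypothetical, the helpers are real. */
static bool my_gem_try_purge(struct drm_gem_shmem_object *shmem)
{
	bool purged = false;

	if (!dma_resv_trylock(shmem->base.resv))
		return false;

	/* Re-check under the lock: the last pages ref may have been dropped,
	 * or the madvise state may have changed, since the object was picked
	 * from the shrinker's list. */
	if (drm_gem_shmem_is_purgeable(shmem)) {
		drm_gem_shmem_purge_locked(shmem);
		purged = true;
	}

	dma_resv_unlock(shmem->base.resv);
	return purged;
}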
From patchwork Fri Jan 5 18:46:13 2024

From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 19/30] drm/virtio: Explicitly get and put drm-shmem pages
Date: Fri, 5 Jan 2024 21:46:13 +0300
Message-ID: <20240105184624.508603-20-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
References: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

We're moving away from the implicit get_pages() that is done by
get_pages_sgt(), to simplify the refcnt handling. Drivers will have to
get and hold a pages reference while they use the sgt. VirtIO-GPU doesn't
support the shrinker, hence pages are pinned and the sgt is valid as long
as the pages' use-count > 0.

Reviewed-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index c7e74cf13022..e58528c562ef 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -67,6 +67,7 @@ void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
 
 	if (virtio_gpu_is_shmem(bo)) {
+		drm_gem_shmem_put_pages(&bo->base);
 		drm_gem_shmem_free(&bo->base);
 	} else if (virtio_gpu_is_vram(bo)) {
 		struct virtio_gpu_object_vram *vram = to_virtio_gpu_vram(bo);
@@ -196,9 +197,13 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 		return PTR_ERR(shmem_obj);
 	bo = gem_to_virtio_gpu_obj(&shmem_obj->base);
+	ret = drm_gem_shmem_get_pages(shmem_obj);
+	if (ret)
+		goto err_free_gem;
+
 	ret = virtio_gpu_resource_id_get(vgdev, &bo->hw_res_handle);
 	if (ret < 0)
-		goto err_free_gem;
+		goto err_put_pages;
 
 	bo->dumb = params->dumb;
 
@@ -243,6 +248,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 	kvfree(ents);
 err_put_id:
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
+err_put_pages:
+	drm_gem_shmem_put_pages(shmem_obj);
 err_free_gem:
 	drm_gem_shmem_free(shmem_obj);
 	return ret;
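Because the pages reference is now taken at object-creation time, the
error-unwind order matters: any failure after drm_gem_shmem_get_pages() has
to put the pages before freeing the GEM object, mirroring the
err_put_pages/err_free_gem labels above. A minimal, hypothetical
creation-path sketch (my_bo_create() and my_hw_setup() are made-up names;
only the drm_gem_shmem_*() helpers are real):

/* Sketch only: my_bo_create()/my_hw_setup() are hypothetical. */
static int my_hw_setup(struct drm_gem_shmem_object *shmem); /* hypothetical */

static struct drm_gem_shmem_object *
my_bo_create(struct drm_device *dev, size_t size)
{
	struct drm_gem_shmem_object *shmem;
	int ret;

	shmem = drm_gem_shmem_create(dev, size);
	if (IS_ERR(shmem))
		return shmem;

	/* Explicit pages ref, taken for the lifetime of the object. */
	ret = drm_gem_shmem_get_pages(shmem);
	if (ret)
		goto err_free;

	ret = my_hw_setup(shmem);
	if (ret)
		goto err_put_pages;

	return shmem;

err_put_pages:
	drm_gem_shmem_put_pages(shmem);
err_free:
	drm_gem_shmem_free(shmem);
	return ERR_PTR(ret);
}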
From patchwork Fri Jan 5 18:46:14 2024

From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 20/30] drm/v3d: Explicitly get and put drm-shmem pages
Date: Fri, 5 Jan 2024 21:46:14 +0300
Message-ID: <20240105184624.508603-21-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
References: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

To simplify the drm-shmem refcnt handling, we're moving away from the
implicit get_pages() that is done by get_pages_sgt(). From now on, drivers
will have to get and hold a pages reference while they use the sgt. The
V3D driver doesn't support the shrinker, hence pages are pinned and the
sgt is valid as long as the pages' use-count > 0.

Reviewed-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/v3d/v3d_bo.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/v3d/v3d_bo.c b/drivers/gpu/drm/v3d/v3d_bo.c
index 1bdfac8beafd..ccf04ce93e8c 100644
--- a/drivers/gpu/drm/v3d/v3d_bo.c
+++ b/drivers/gpu/drm/v3d/v3d_bo.c
@@ -50,6 +50,9 @@ void v3d_free_object(struct drm_gem_object *obj)
 	/* GPU execution may have dirtied any pages in the BO.
 	 */
 	bo->base.pages_mark_dirty_on_put = true;
 
+	if (!obj->import_attach)
+		drm_gem_shmem_put_pages(&bo->base);
+
 	drm_gem_shmem_free(&bo->base);
 }
 
@@ -139,12 +142,18 @@ struct v3d_bo *v3d_bo_create(struct drm_device *dev, struct drm_file *file_priv,
 	bo = to_v3d_bo(&shmem_obj->base);
 	bo->vaddr = NULL;
 
-	ret = v3d_bo_create_finish(&shmem_obj->base);
+	ret = drm_gem_shmem_get_pages(shmem_obj);
 	if (ret)
 		goto free_obj;
 
+	ret = v3d_bo_create_finish(&shmem_obj->base);
+	if (ret)
+		goto put_pages;
+
 	return bo;
 
+put_pages:
+	drm_gem_shmem_put_pages(shmem_obj);
 free_obj:
 	drm_gem_shmem_free(shmem_obj);
 	return ERR_PTR(ret);
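One detail worth noting in the v3d free path above: dma-buf imports never
took a local pages reference, so the put must be skipped for them. A hedged
sketch of that guard in a hypothetical free callback (my_free_object() is
made up; the helpers and fields are the real ones used in the diff):

/* Sketch only: only put pages the driver itself got. */
static void my_free_object(struct drm_gem_object *obj)
{
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

	/* Imported (dma-buf) objects never had drm_gem_shmem_get_pages()
	 * called on them, so there is no pages ref to drop. */
	if (!obj->import_attach)
		drm_gem_shmem_put_pages(shmem);

	drm_gem_shmem_free(shmem);
}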
From patchwork Fri Jan 5 18:46:15 2024

From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 21/30] drm/shmem-helper: Change sgt allocation policy
Date: Fri, 5 Jan 2024 21:46:15 +0300
Message-ID: <20240105184624.508603-22-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
References: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

In preparation for adding drm-shmem memory shrinker support, change the
SGT allocation policy in this way:

1. An SGT can be allocated only if the shmem pages are pinned at the
   time of allocation, otherwise the allocation fails.

2. Drivers must ensure that the pages stay pinned for as long as the SGT
   is in use, and must get a new SGT if the pages were unpinned.

This new policy is required by the shrinker, which will move pages to/from
swap unless the pages are pinned, invalidating the SGT pointer once pages
are relocated.

Previous patches prepared the drivers for the new policy.
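In practice, the new policy means a driver first takes a pages reference
(or pins the pages, when a shrinker may be active), then asks for the sgt,
and keeps that reference for as long as the hardware uses the table. A
rough usage sketch with a hypothetical my_map_bo_for_dma() helper; only
the drm_gem_shmem_*() calls are real:

/* Sketch only: my_map_bo_for_dma() is hypothetical. */
static int my_map_bo_for_dma(struct drm_gem_shmem_object *shmem)
{
	struct sg_table *sgt;
	int ret;

	/* Hold a pages ref first: get_pages_sgt() no longer takes one
	 * implicitly and fails if the pages aren't allocated. A driver
	 * that also uses the shrinker would pin the pages here instead. */
	ret = drm_gem_shmem_get_pages(shmem);
	if (ret)
		return ret;

	sgt = drm_gem_shmem_get_pages_sgt(shmem);
	if (IS_ERR(sgt)) {
		drm_gem_shmem_put_pages(shmem);
		return PTR_ERR(sgt);
	}

	/* ... program the hardware with sgt ...
	 * The pages ref must be held for as long as the hardware uses this
	 * sgt; drm_gem_shmem_put_pages() is called only after that. */
	return 0;
}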
Reviewed-by: Boris Brezillon Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 55 ++++++++++++++------------ 1 file changed, 29 insertions(+), 26 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index c7357110ca76..ff5437ab2c95 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -133,6 +133,14 @@ drm_gem_shmem_free_pages(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; + if (shmem->sgt) { + dma_unmap_sgtable(obj->dev->dev, shmem->sgt, + DMA_BIDIRECTIONAL, 0); + sg_free_table(shmem->sgt); + kfree(shmem->sgt); + shmem->sgt = NULL; + } + #ifdef CONFIG_X86 if (shmem->map_wc) set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT); @@ -155,24 +163,12 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; - if (obj->import_attach) { + if (obj->import_attach) drm_prime_gem_destroy(obj, shmem->sgt); - } else { - drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count)); - if (shmem->sgt) { - dma_unmap_sgtable(obj->dev->dev, shmem->sgt, - DMA_BIDIRECTIONAL, 0); - sg_free_table(shmem->sgt); - kfree(shmem->sgt); - } - if (shmem->pages && - refcount_dec_and_test(&shmem->pages_use_count)) - drm_gem_shmem_free_pages(shmem); - - drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count)); - drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count)); - } + drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count)); + drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count)); + drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count)); drm_gem_object_release(obj); kfree(shmem); @@ -722,6 +718,9 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem) drm_WARN_ON(obj->dev, obj->import_attach); + if (drm_WARN_ON(obj->dev, !shmem->pages)) + return ERR_PTR(-ENOMEM); + return drm_prime_pages_to_sg(obj->dev, shmem->pages, obj->size >> PAGE_SHIFT); } EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table); @@ -737,15 +736,10 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_ drm_WARN_ON(obj->dev, obj->import_attach); - ret = drm_gem_shmem_get_pages_locked(shmem); - if (ret) - return ERR_PTR(ret); - sgt = drm_gem_shmem_get_sg_table(shmem); - if (IS_ERR(sgt)) { - ret = PTR_ERR(sgt); - goto err_put_pages; - } + if (IS_ERR(sgt)) + return sgt; + /* Map the pages for use by the h/w. */ ret = dma_map_sgtable(obj->dev->dev, sgt, DMA_BIDIRECTIONAL, 0); if (ret) @@ -758,8 +752,6 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_ err_free_sgt: sg_free_table(sgt); kfree(sgt); -err_put_pages: - drm_gem_shmem_put_pages_locked(shmem); return ERR_PTR(ret); } @@ -776,6 +768,17 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_ * and difference between dma-buf imported and natively allocated objects. * drm_gem_shmem_get_sg_table() should not be directly called by drivers. * + * Drivers should adhere to these SGT usage rules: + * + * 1. SGT should be allocated only if shmem pages are pinned at the + * time of allocation, otherwise allocation will fail. + * + * 2. Drivers should ensure that pages are pinned during the time of + * SGT usage and should get new SGT if pages were unpinned. + * + * Drivers don't own returned SGT and must take care of the SGT pointer + * lifetime. SGT is valid as long as GEM pages that backing SGT are pinned. 
+ *
  * Returns:
  * A pointer to the scatter/gather table of pinned pages or errno on failure.
  */
From patchwork Fri Jan 5 18:46:16 2024

From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 22/30] drm/shmem-helper: Add common memory shrinker
Date: Fri, 5 Jan 2024 21:46:16 +0300
Message-ID: <20240105184624.508603-23-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
References: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

Introduce a common drm-shmem shrinker for DRM drivers.

To start using the drm-shmem shrinker, a driver should do the following:

1. Implement the evict() callback of the GEM object, where the driver
   checks whether the object is purgeable or evictable using the
   drm-shmem helpers and performs the shrinking action.

2. Initialize the drm-shmem internals using drmm_gem_shmem_init(drm_device),
   which registers the drm-shmem shrinker.

3. Implement a madvise IOCTL that uses drm_gem_shmem_madvise().

Signed-off-by: Daniel Almeida
Signed-off-by: Dmitry Osipenko
Reviewed-by: Boris Brezillon
---
 drivers/gpu/drm/drm_gem_shmem_helper.c        | 365 +++++++++++++++++-
 drivers/gpu/drm/panfrost/panfrost_gem.c       |   3 +-
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  |  13 +-
 include/drm/drm_device.h                      |  10 +-
 include/drm/drm_gem_shmem_helper.h            |  68 +++-
 5 files changed, 433 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index ff5437ab2c95..59cebd1e35af 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -128,11 +129,49 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
+static bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem)
+{
+	return (shmem->madv >= 0) && shmem->base.funcs->evict &&
+		refcount_read(&shmem->pages_use_count) &&
+		!refcount_read(&shmem->pages_pin_count) &&
+		!shmem->base.dma_buf && !shmem->base.import_attach &&
+		!shmem->evicted;
+}
+
+static void
+drm_gem_shmem_shrinker_update_lru_locked(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem *shmem_mm = obj->dev->shmem_mm;
+	struct drm_gem_shmem_shrinker *shmem_shrinker = &shmem_mm->shrinker;
+
+	dma_resv_assert_held(shmem->base.resv);
+
+	if (!shmem_shrinker || obj->import_attach)
+		return;
+
+	if (shmem->madv < 0)
+		drm_gem_lru_remove(&shmem->base);
+	else if (drm_gem_shmem_is_evictable(shmem) || drm_gem_shmem_is_purgeable(shmem))
+		drm_gem_lru_move_tail(&shmem_shrinker->lru_evictable, &shmem->base);
+	else if (shmem->evicted)
+
drm_gem_lru_move_tail(&shmem_shrinker->lru_evicted, &shmem->base); + else if (!shmem->pages) + drm_gem_lru_remove(&shmem->base); + else + drm_gem_lru_move_tail(&shmem_shrinker->lru_pinned, &shmem->base); +} + static void drm_gem_shmem_free_pages(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; + if (!shmem->pages) { + drm_WARN_ON(obj->dev, !shmem->evicted && shmem->madv >= 0); + return; + } + if (shmem->sgt) { dma_unmap_sgtable(obj->dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0); @@ -175,15 +214,26 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) } EXPORT_SYMBOL_GPL(drm_gem_shmem_free); -static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) +static int +drm_gem_shmem_acquire_pages(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; struct page **pages; + if (drm_WARN_ON(obj->dev, obj->import_attach)) + return -EINVAL; + dma_resv_assert_held(shmem->base.resv); - if (refcount_inc_not_zero(&shmem->pages_use_count)) + if (shmem->madv < 0) { + drm_WARN_ON(obj->dev, shmem->pages); + return -ENOMEM; + } + + if (shmem->pages) { + drm_WARN_ON(obj->dev, !shmem->evicted); return 0; + } pages = drm_gem_get_pages(obj); if (IS_ERR(pages)) { @@ -204,8 +254,29 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) shmem->pages = pages; + return 0; +} + +static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) +{ + int err; + + dma_resv_assert_held(shmem->base.resv); + + if (shmem->madv < 0) + return -ENOMEM; + + if (refcount_inc_not_zero(&shmem->pages_use_count)) + return 0; + + err = drm_gem_shmem_acquire_pages(shmem); + if (err) + return err; + refcount_set(&shmem->pages_use_count, 1); + drm_gem_shmem_shrinker_update_lru_locked(shmem); + return 0; } @@ -222,6 +293,8 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) if (refcount_dec_and_test(&shmem->pages_use_count)) drm_gem_shmem_free_pages(shmem); + + drm_gem_shmem_shrinker_update_lru_locked(shmem); } EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked); @@ -266,6 +339,11 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages); * * This function Increases the use count and allocates the backing pages if * use-count equals to zero. + * + * Note that this function doesn't pin pages in memory. If your driver + * uses drm-shmem shrinker, then it's free to relocate pages to swap. + * Getting pages only guarantees that pages are allocated, and not that + * pages reside in memory. In order to pin pages use drm_gem_shmem_pin(). 
*/ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) { @@ -291,6 +369,10 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem) if (refcount_inc_not_zero(&shmem->pages_pin_count)) return 0; + ret = drm_gem_shmem_swapin_locked(shmem); + if (ret) + return ret; + ret = drm_gem_shmem_get_pages_locked(shmem); if (!ret) refcount_set(&shmem->pages_pin_count, 1); @@ -489,29 +571,48 @@ int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv) madv = shmem->madv; + drm_gem_shmem_shrinker_update_lru_locked(shmem); + return (madv >= 0); } EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise_locked); -void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem) +int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv) { struct drm_gem_object *obj = &shmem->base; - struct drm_device *dev = obj->dev; + int ret; - dma_resv_assert_held(shmem->base.resv); + ret = dma_resv_lock_interruptible(obj->resv, NULL); + if (ret) + return ret; - drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem)); + ret = drm_gem_shmem_madvise_locked(shmem, madv); + dma_resv_unlock(obj->resv); - dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0); - sg_free_table(shmem->sgt); - kfree(shmem->sgt); - shmem->sgt = NULL; + return ret; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise); - drm_gem_shmem_put_pages_locked(shmem); +static void +drm_gem_shmem_shrinker_put_pages_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + struct drm_device *dev = obj->dev; - shmem->madv = -1; + dma_resv_assert_held(shmem->base.resv); + if (shmem->evicted) + return; + + drm_gem_shmem_free_pages(shmem); drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); +} + +void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + + drm_gem_shmem_shrinker_put_pages_locked(shmem); drm_gem_free_mmap_offset(obj); /* Our goal here is to return as much of the memory as @@ -522,9 +623,45 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem) shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1); invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1); + + shmem->madv = -1; + shmem->evicted = false; + drm_gem_shmem_shrinker_update_lru_locked(shmem); } EXPORT_SYMBOL_GPL(drm_gem_shmem_purge_locked); +/** + * drm_gem_shmem_swapin_locked() - Moves shmem GEM back to memory and enables + * hardware access to the memory. + * @shmem: shmem GEM object + * + * This function moves shmem GEM back to memory if it was previously evicted + * by the memory shrinker. The GEM is ready to use on success. + * + * Returns: + * 0 on success or a negative error code on failure. 
+ */ +int drm_gem_shmem_swapin_locked(struct drm_gem_shmem_object *shmem) +{ + int err; + + dma_resv_assert_held(shmem->base.resv); + + if (!shmem->evicted) + return 0; + + err = drm_gem_shmem_acquire_pages(shmem); + if (err) + return err; + + shmem->evicted = false; + + drm_gem_shmem_shrinker_update_lru_locked(shmem); + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_swapin_locked); + /** * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object * @file: DRM file structure to create the dumb buffer for @@ -571,22 +708,32 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf) vm_fault_t ret; struct page *page; pgoff_t page_offset; + int err; /* We don't use vmf->pgoff since that has the fake offset */ page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT; dma_resv_lock(shmem->base.resv, NULL); - if (page_offset >= num_pages || - drm_WARN_ON_ONCE(obj->dev, !shmem->pages) || - shmem->madv < 0) { + err = drm_gem_shmem_swapin_locked(shmem); + if (err) { + ret = VM_FAULT_OOM; + goto unlock; + } + + if (page_offset >= num_pages || !shmem->pages) { ret = VM_FAULT_SIGBUS; } else { + /* + * shmem->pages is guaranteed to be valid while reservation + * lock is held and drm_gem_shmem_swapin_locked() succeeds. + */ page = shmem->pages[page_offset]; ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page)); } +unlock: dma_resv_unlock(shmem->base.resv); return ret; @@ -609,6 +756,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma) drm_WARN_ON_ONCE(obj->dev, !refcount_inc_not_zero(&shmem->pages_use_count)); + drm_gem_shmem_shrinker_update_lru_locked(shmem); dma_resv_unlock(shmem->base.resv); drm_gem_vm_open(vma); @@ -694,7 +842,9 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem, drm_printf_indent(p, indent, "pages_pin_count=%u\n", refcount_read(&shmem->pages_pin_count)); drm_printf_indent(p, indent, "pages_use_count=%u\n", refcount_read(&shmem->pages_use_count)); drm_printf_indent(p, indent, "vmap_use_count=%u\n", refcount_read(&shmem->vmap_use_count)); + drm_printf_indent(p, indent, "evicted=%d\n", shmem->evicted); drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr); + drm_printf_indent(p, indent, "madv=%d\n", shmem->madv); } EXPORT_SYMBOL_GPL(drm_gem_shmem_print_info); @@ -784,8 +934,13 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_ */ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem) { - int ret; + struct drm_gem_object *obj = &shmem->base; struct sg_table *sgt; + int ret; + + if (drm_WARN_ON(obj->dev, drm_gem_shmem_is_evictable(shmem)) || + drm_WARN_ON(obj->dev, drm_gem_shmem_is_purgeable(shmem))) + return ERR_PTR(-EBUSY); ret = dma_resv_lock_interruptible(shmem->base.resv, NULL); if (ret) @@ -832,6 +987,184 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev, } EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table); +static unsigned long +drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker, + struct shrink_control *sc) +{ + struct drm_gem_shmem_shrinker *shmem_shrinker = shrinker->private_data; + unsigned long count = shmem_shrinker->lru_evictable.count; + + if (count >= SHRINK_EMPTY) + return SHRINK_EMPTY - 1; + + return count ?: SHRINK_EMPTY; +} + +void drm_gem_shmem_evict_locked(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + + drm_WARN_ON(obj->dev, !drm_gem_shmem_is_evictable(shmem)); + drm_WARN_ON(obj->dev, shmem->evicted); + + drm_gem_shmem_shrinker_put_pages_locked(shmem); + + shmem->evicted = true; + 
drm_gem_shmem_shrinker_update_lru_locked(shmem); +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_evict_locked); + +static bool drm_gem_shmem_shrinker_evict_locked(struct drm_gem_object *obj) +{ + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + int err; + + if (!drm_gem_shmem_is_evictable(shmem) || + get_nr_swap_pages() < obj->size >> PAGE_SHIFT) + return false; + + err = drm_gem_evict_locked(obj); + if (err) + return false; + + return true; +} + +static bool drm_gem_shmem_shrinker_purge_locked(struct drm_gem_object *obj) +{ + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + int err; + + if (!drm_gem_shmem_is_purgeable(shmem)) + return false; + + err = drm_gem_evict_locked(obj); + if (err) + return false; + + return true; +} + +static unsigned long +drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker, + struct shrink_control *sc) +{ + struct drm_gem_shmem_shrinker *shmem_shrinker = shrinker->private_data; + unsigned long nr_to_scan = sc->nr_to_scan; + unsigned long remaining = 0; + unsigned long freed = 0; + + /* purge as many objects as we can */ + freed += drm_gem_lru_scan(&shmem_shrinker->lru_evictable, + nr_to_scan, &remaining, + drm_gem_shmem_shrinker_purge_locked); + + /* evict as many objects as we can */ + if (freed < nr_to_scan) + freed += drm_gem_lru_scan(&shmem_shrinker->lru_evictable, + nr_to_scan - freed, &remaining, + drm_gem_shmem_shrinker_evict_locked); + + return (freed > 0 && remaining > 0) ? freed : SHRINK_STOP; +} + +static int drm_gem_shmem_shrinker_init(struct drm_gem_shmem *shmem_mm, + const char *shrinker_name) +{ + struct drm_gem_shmem_shrinker *shmem_shrinker = &shmem_mm->shrinker; + struct shrinker *shrinker; + + shrinker = shrinker_alloc(0, shrinker_name); + if (!shrinker) + return -ENOMEM; + + shrinker->count_objects = drm_gem_shmem_shrinker_count_objects; + shrinker->scan_objects = drm_gem_shmem_shrinker_scan_objects; + shrinker->private_data = shmem_shrinker; + shrinker->seeks = DEFAULT_SEEKS; + + mutex_init(&shmem_shrinker->lock); + shmem_shrinker->shrinker = shrinker; + drm_gem_lru_init(&shmem_shrinker->lru_evictable, &shmem_shrinker->lock); + drm_gem_lru_init(&shmem_shrinker->lru_evicted, &shmem_shrinker->lock); + drm_gem_lru_init(&shmem_shrinker->lru_pinned, &shmem_shrinker->lock); + + shrinker_register(shrinker); + + return 0; +} + +static void drm_gem_shmem_shrinker_release(struct drm_device *dev, + struct drm_gem_shmem *shmem_mm) +{ + struct drm_gem_shmem_shrinker *shmem_shrinker = &shmem_mm->shrinker; + + shrinker_free(shmem_shrinker->shrinker); + drm_WARN_ON(dev, !list_empty(&shmem_shrinker->lru_evictable.list)); + drm_WARN_ON(dev, !list_empty(&shmem_shrinker->lru_evicted.list)); + drm_WARN_ON(dev, !list_empty(&shmem_shrinker->lru_pinned.list)); + mutex_destroy(&shmem_shrinker->lock); +} + +static int drm_gem_shmem_init(struct drm_device *dev) +{ + int err; + + if (drm_WARN_ON(dev, dev->shmem_mm)) + return -EBUSY; + + dev->shmem_mm = kzalloc(sizeof(*dev->shmem_mm), GFP_KERNEL); + if (!dev->shmem_mm) + return -ENOMEM; + + err = drm_gem_shmem_shrinker_init(dev->shmem_mm, dev->unique); + if (err) + goto free_gem_shmem; + + return 0; + +free_gem_shmem: + kfree(dev->shmem_mm); + dev->shmem_mm = NULL; + + return err; +} + +static void drm_gem_shmem_release(struct drm_device *dev, void *ptr) +{ + struct drm_gem_shmem *shmem_mm = dev->shmem_mm; + + drm_gem_shmem_shrinker_release(dev, shmem_mm); + dev->shmem_mm = NULL; + kfree(shmem_mm); +} + +/** + * drmm_gem_shmem_init() - Initialize drm-shmem internals + * @dev: DRM device + 
* + * Cleanup is automatically managed as part of DRM device releasing. + * Calling this function multiple times will result in a error. + * + * Returns: + * 0 on success or a negative error code on failure. + */ +int drmm_gem_shmem_init(struct drm_device *dev) +{ + int err; + + err = drm_gem_shmem_init(dev); + if (err) + return err; + + err = drmm_add_action_or_reset(dev, drm_gem_shmem_release, NULL); + if (err) + return err; + + return 0; +} +EXPORT_SYMBOL_GPL(drmm_gem_shmem_init); + MODULE_DESCRIPTION("DRM SHMEM memory-management helpers"); MODULE_IMPORT_NS(DMA_BUF); MODULE_LICENSE("GPL v2"); diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c index 7edfc12f7c1f..8c26b7e41b95 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c @@ -99,8 +99,7 @@ static void panfrost_gem_mapping_release(struct kref *kref) * time, and heap BOs may have acquired pages if the fault handler * was called, in which case bo->sgts should be non-NULL. */ - if (!bo->base.base.import_attach && (!bo->is_heap || bo->sgts) && - bo->base.madv >= 0) { + if (!bo->base.base.import_attach && (!bo->is_heap || bo->sgts)) { drm_gem_shmem_put_pages(&bo->base); bo->sgts = NULL; } diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c index d4fb0854cf2f..7b4deba803ed 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c @@ -15,6 +15,13 @@ #include "panfrost_gem.h" #include "panfrost_mmu.h" +static bool panfrost_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem) +{ + return (shmem->madv > 0) && + !refcount_read(&shmem->pages_pin_count) && shmem->sgt && + !shmem->base.dma_buf && !shmem->base.import_attach; +} + static unsigned long panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) { @@ -26,7 +33,7 @@ panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc return 0; list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) { - if (drm_gem_shmem_is_purgeable(shmem)) + if (panfrost_gem_shmem_is_purgeable(shmem)) count += shmem->base.size >> PAGE_SHIFT; } @@ -53,7 +60,7 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj) /* BO might have become unpurgeable if the last pages_use_count ref * was dropped, but the BO hasn't been destroyed yet. 
*/ - if (!drm_gem_shmem_is_purgeable(shmem)) + if (!panfrost_gem_shmem_is_purgeable(shmem)) goto unlock_mappings; panfrost_gem_teardown_mappings_locked(bo); @@ -80,7 +87,7 @@ panfrost_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) list_for_each_entry_safe(shmem, tmp, &pfdev->shrinker_list, madv_list) { if (freed >= sc->nr_to_scan) break; - if (drm_gem_shmem_is_purgeable(shmem) && + if (panfrost_gem_shmem_is_purgeable(shmem) && panfrost_gem_purge(&shmem->base)) { freed += shmem->base.size >> PAGE_SHIFT; list_del_init(&shmem->madv_list); diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h index 63767cf24371..6e729e716505 100644 --- a/include/drm/drm_device.h +++ b/include/drm/drm_device.h @@ -15,6 +15,7 @@ struct drm_vblank_crtc; struct drm_vma_offset_manager; struct drm_vram_mm; struct drm_fb_helper; +struct drm_gem_shmem_shrinker; struct inode; @@ -289,8 +290,13 @@ struct drm_device { /** @vma_offset_manager: GEM information */ struct drm_vma_offset_manager *vma_offset_manager; - /** @vram_mm: VRAM MM memory manager */ - struct drm_vram_mm *vram_mm; + union { + /** @vram_mm: VRAM MM memory manager */ + struct drm_vram_mm *vram_mm; + + /** @shmem_mm: SHMEM GEM memory manager */ + struct drm_gem_shmem *shmem_mm; + }; /** * @switch_power_state: diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index 525480488451..df97c11fc99a 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -6,6 +6,7 @@ #include #include #include +#include #include #include @@ -13,6 +14,7 @@ #include struct dma_buf_attachment; +struct drm_device; struct drm_mode_create_dumb; struct drm_printer; struct sg_table; @@ -54,8 +56,8 @@ struct drm_gem_shmem_object { * @madv: State for madvise * * 0 is active/inuse. + * 1 is not-needed/can-be-purged * A negative value is the object is purged. - * Positive values are driver specific and not used by the helpers. */ int madv; @@ -102,6 +104,14 @@ struct drm_gem_shmem_object { * @map_wc: map object write-combined (instead of using shmem defaults). */ bool map_wc : 1; + + /** + * @evicted: True if shmem pages are evicted by the memory shrinker. + * Used internally by memory shrinker. The evicted pages can be + * moved back to memory using drm_gem_shmem_swapin_locked(), unlike + * the purged pages (madv < 0) that are destroyed permanently. 
+ */ + bool evicted : 1; }; #define to_drm_gem_shmem_obj(obj) \ @@ -122,14 +132,19 @@ void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma); int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv); +int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv); static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem) { - return (shmem->madv > 0) && - !refcount_read(&shmem->pages_pin_count) && shmem->sgt && + return (shmem->madv > 0) && shmem->base.funcs->evict && + refcount_read(&shmem->pages_use_count) && + !refcount_read(&shmem->pages_pin_count) && !shmem->base.dma_buf && !shmem->base.import_attach; } +int drm_gem_shmem_swapin_locked(struct drm_gem_shmem_object *shmem); + +void drm_gem_shmem_evict_locked(struct drm_gem_shmem_object *shmem); void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem); struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem); @@ -273,6 +288,53 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v return drm_gem_shmem_mmap(shmem, vma); } +/** + * drm_gem_shmem_object_madvise - unlocked GEM object function for drm_gem_shmem_madvise_locked() + * @obj: GEM object + * @madv: Madvise value + * + * This function wraps drm_gem_shmem_madvise_locked(), providing unlocked variant. + * + * Returns: + * 0 on success or a negative error code on failure. + */ +static inline int drm_gem_shmem_object_madvise(struct drm_gem_object *obj, int madv) +{ + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + + return drm_gem_shmem_madvise(shmem, madv); +} + +/** + * struct drm_gem_shmem_shrinker - Memory shrinker of GEM shmem memory manager + */ +struct drm_gem_shmem_shrinker { + /** @lock: Protects @lru_* */ + struct mutex lock; + + /** @shrinker: Shrinker for purging shmem GEM objects */ + struct shrinker *shrinker; + + /** @lru_pinned: List of pinned shmem GEM objects */ + struct drm_gem_lru lru_pinned; + + /** @lru_evictable: List of shmem GEM objects to be evicted */ + struct drm_gem_lru lru_evictable; + + /** @lru_evicted: List of evicted shmem GEM objects */ + struct drm_gem_lru lru_evicted; +}; + +/** + * struct drm_gem_shmem - GEM shmem memory manager + */ +struct drm_gem_shmem { + /** @shrinker: GEM shmem shrinker */ + struct drm_gem_shmem_shrinker shrinker; +}; + +int drmm_gem_shmem_init(struct drm_device *dev); + /* * Driver ops */ From patchwork Fri Jan 5 18:46:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 185510 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7301:6f82:b0:100:9c79:88ff with SMTP id tb2csp6405186dyb; Fri, 5 Jan 2024 10:54:10 -0800 (PST) X-Google-Smtp-Source: AGHT+IHLt+097FltVv992bCSvPRvsujduJM1VI4aexH0UAlIvRO5CV0q16mNm9jyS5agAOlZlQb7 X-Received: by 2002:a05:6a00:d49:b0:6da:2f8f:5670 with SMTP id n9-20020a056a000d4900b006da2f8f5670mr2364487pfv.22.1704480850383; Fri, 05 Jan 2024 10:54:10 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1704480850; cv=none; d=google.com; s=arc-20160816; b=hrDYq0cZ8m+4quw5slZ8sR9kK7BLiznjZXgsMMWDXGO8u7r87uew2CG1ztiZoJqRcx 6+I5LwqyjEOmXZ1RrVl2VTncV2Qn/zyzZrJyNvoAyt0AvWEf1YIcjwAoxIBbc4LL8RRV iMrOl3rFfPZzTDMsL/o1JppjOfHaNnrHXfMLcrm5CzZmTHC9Spe+UDlteViZLrLk9qHC DnFdJJim2m82PUoLkgV/0qwje15bgKzB7qmvjJXZ7/7ppvCGS6EzWhDPotCGFl5cJVF7 
From patchwork Fri Jan 5 18:45:55 2024
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 185510
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 23/30] drm/shmem-helper: Export drm_gem_shmem_get_pages_sgt_locked()
Date: Fri, 5 Jan 2024 21:46:17 +0300
Message-ID: <20240105184624.508603-24-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

Export drm_gem_shmem_get_pages_sgt_locked(), which will be used by the virtio-gpu shrinker during the GEM swap-in operation that is done under the held reservation lock.

Reviewed-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 22 +++++++++++++++++++++-
 include/drm/drm_gem_shmem_helper.h     |  1 +
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 59cebd1e35af..8fd7851c088b 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -875,12 +875,31 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem) } EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table); -static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object *shmem) +/** + * drm_gem_shmem_get_pages_sgt_locked - Provide a scatter/gather table of pinned + * pages for a shmem GEM object + * @shmem: shmem GEM object + * + * This is a locked version of @drm_gem_shmem_get_sg_table that exports a + * scatter/gather table suitable for PRIME usage by calling the standard + * DMA mapping API. + * + * Drivers must hold the GEM's reservation lock when using this function. + * + * Drivers that need to acquire a scatter/gather table for objects should call + * drm_gem_shmem_get_pages_sgt() instead. + * + * Returns: + * A pointer to the scatter/gather table of pinned pages, or an error pointer on failure.
+ */ +struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; int ret; struct sg_table *sgt; + dma_resv_assert_held(shmem->base.resv); + if (shmem->sgt) return shmem->sgt;
@@ -904,6 +923,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_ kfree(sgt); return ERR_PTR(ret); } +EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt_locked); /** * drm_gem_shmem_get_pages_sgt - Pin pages, dma map them, and return a

diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index df97c11fc99a..167f00f089de 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h
@@ -149,6 +149,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem); struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem); struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem); +struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object *shmem); void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem, struct drm_printer *p, unsigned int indent);
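As a usage illustration of the newly exported helper, a sketch of a swap-in style path that already holds the reservation lock, as the commit message describes for the virtio-gpu shrinker; foo_swapin_locked() and the omitted device programming step are placeholders.

#include <linux/dma-resv.h>
#include <linux/err.h>
#include <drm/drm_gem_shmem_helper.h>

static int foo_swapin_locked(struct drm_gem_shmem_object *shmem)
{
	struct sg_table *sgt;

	/* Caller is expected to hold the object's reservation lock. */
	dma_resv_assert_held(shmem->base.resv);

	sgt = drm_gem_shmem_get_pages_sgt_locked(shmem);
	if (IS_ERR(sgt))
		return PTR_ERR(sgt);

	/* sgt now describes the pinned pages; re-program the device with it. */
	return 0;
}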
From patchwork Fri Jan 5 18:46:18 2024
X-Patchwork-Id: 185504
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 24/30] drm/shmem-helper: Optimize unlocked get_pages_sgt()
Date: Fri, 5 Jan 2024 21:46:18 +0300
Message-ID: <20240105184624.508603-25-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

The SGT isn't refcounted. Once the SGT pointer has been obtained, it remains the same for both the locked and unlocked get_pages_sgt(). Return the cached SGT directly without taking a potentially expensive lock.

Signed-off-by: Dmitry Osipenko
Reviewed-by: Boris Brezillon
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 8fd7851c088b..e6e6e693ab95 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -962,6 +962,18 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem) drm_WARN_ON(obj->dev, drm_gem_shmem_is_purgeable(shmem))) return ERR_PTR(-EBUSY); + /* + * Drivers that use the shrinker should take into account that the shrinker + * may relocate the BO, thus invalidating the returned SGT pointer. + * Such drivers should pin the GEM while they use the SGT. + * + * Drivers that don't use the shrinker should take into account that the + * SGT is released together with the GEM pages. The pages should be kept + * alive while the SGT is used.
+ */ + if (shmem->sgt) + return shmem->sgt; + ret = dma_resv_lock_interruptible(shmem->base.resv, NULL); if (ret) return ERR_PTR(ret);
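The comment added above asks shrinker-aware drivers to keep the BO pinned for as long as they rely on the returned SGT. A small sketch of that pin/use/unpin pattern; foo_use_sgt() and the hardware programming step are placeholders.

#include <linux/err.h>
#include <drm/drm_gem_shmem_helper.h>

static int foo_use_sgt(struct drm_gem_shmem_object *shmem)
{
	struct sg_table *sgt;
	int ret;

	/* Pin first so the shrinker cannot move or release the pages. */
	ret = drm_gem_shmem_pin(shmem);
	if (ret)
		return ret;

	sgt = drm_gem_shmem_get_pages_sgt(shmem);
	if (IS_ERR(sgt)) {
		ret = PTR_ERR(sgt);
		goto out_unpin;
	}

	/* ... program the hardware with sgt while the pin is held ... */

out_unpin:
	drm_gem_shmem_unpin(shmem);
	return ret;
}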
From patchwork Fri Jan 5 18:46:19 2024
X-Patchwork-Id: 185505
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 25/30] drm/shmem-helper: Don't free refcounted GEM
Date: Fri, 5 Jan 2024 21:46:19 +0300
Message-ID: <20240105184624.508603-26-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

Don't free the shmem object if it still has pages in use at the time of the GEM's freeing, which can happen if the DRM driver doesn't manage the GEM/pages lifetime properly. This prevents memory corruption due to a use-after-free bug, in exchange for leaking the GEM object.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index e6e6e693ab95..0d95d723b90d 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -205,9 +205,15 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) if (obj->import_attach) drm_prime_gem_destroy(obj, shmem->sgt); - drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count)); - drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count)); - drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count)); + /* + * Prevent memory corruption caused by a use-after-free bug in the + * case where a shmem user erroneously holds a reference to the pages + * while the GEM is freed, by leaking the GEM instead.
+ */ + if (drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count)) || + drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count)) || + drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count))) + return; drm_gem_object_release(obj); kfree(shmem);
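The change above is an instance of a general defensive pattern: when a use count that must be zero at free time is not, warn and deliberately leak instead of freeing, trading a bounded memory leak for the absence of a use-after-free. A generic sketch, with foo_object standing in for any refcounted driver object.

#include <linux/bug.h>
#include <linux/refcount.h>
#include <linux/slab.h>

struct foo_object {
	refcount_t users;
	/* ... */
};

static void foo_object_free(struct foo_object *obj)
{
	/* Leak on purpose: freeing with live users would be a UAF. */
	if (WARN_ON(refcount_read(&obj->users)))
		return;

	kfree(obj);
}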
From patchwork Fri Jan 5 18:46:20 2024
X-Patchwork-Id: 185507
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 26/30] drm/shmem-helper: Turn warnings about imported GEM into errors
Date: Fri, 5 Jan 2024 21:46:20 +0300
Message-ID: <20240105184624.508603-27-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

Turn the sanity warnings about DRM-SHMEM API misuse into error conditions for the cases where an imported GEM is used when it shouldn't be.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 0d95d723b90d..7d2fe12bd793 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -409,7 +409,8 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem) struct drm_gem_object *obj = &shmem->base; int ret; - drm_WARN_ON(obj->dev, obj->import_attach); + if (drm_WARN_ON(obj->dev, obj->import_attach)) + return -EINVAL; if (refcount_inc_not_zero(&shmem->pages_pin_count)) return 0;
@@ -872,7 +873,8 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; - drm_WARN_ON(obj->dev, obj->import_attach); + if (drm_WARN_ON(obj->dev, obj->import_attach)) + return ERR_PTR(-EINVAL); if (drm_WARN_ON(obj->dev, !shmem->pages)) return ERR_PTR(-ENOMEM);
@@ -909,7 +911,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object if (shmem->sgt) return shmem->sgt; - drm_WARN_ON(obj->dev, obj->import_attach); + if (drm_WARN_ON(obj->dev, obj->import_attach)) + return ERR_PTR(-EINVAL); sgt = drm_gem_shmem_get_sg_table(shmem); if (IS_ERR(sgt))
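With this patch, the pin and table-export helpers reject imported GEMs with an error instead of merely warning, so callers have to handle the failure. A minimal caller-side sketch; foo_prepare() is a placeholder.

#include <drm/drm_gem_shmem_helper.h>

static int foo_prepare(struct drm_gem_shmem_object *shmem)
{
	int ret;

	ret = drm_gem_shmem_pin(shmem);
	if (ret)	/* now -EINVAL for imported (PRIME) objects */
		return ret;

	/* ... use the pinned pages ... */

	drm_gem_shmem_unpin(shmem);

	return 0;
}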
From patchwork Fri Jan 5 18:46:21 2024
X-Patchwork-Id: 185509
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 27/30] drm/virtio: Pin display framebuffer BO
Date: Fri, 5 Jan 2024 21:46:21 +0300
Message-ID: <20240105184624.508603-28-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

Prepare for the addition of memory shrinker support by pinning the display framebuffer BO pages in memory while they are in use by the display on the host. The shrinker is free to relocate framebuffer BO pages if it doesn't know the pages are in use, so pin the pages to prevent the shrinker from moving them.
Acked-by: Gerd Hoffmann
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   |  2 ++
 drivers/gpu/drm/virtio/virtgpu_gem.c   | 19 +++++++++++++++++++
 drivers/gpu/drm/virtio/virtgpu_plane.c | 17 +++++++++++++++--
 3 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h index bb7d86a0c6a1..83d1e4622292 100644 --- a/drivers/gpu/drm/virtio/virtgpu_drv.h +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -318,6 +318,8 @@ void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs); void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev, struct virtio_gpu_object_array *objs); void virtio_gpu_array_put_free_work(struct work_struct *work); +int virtio_gpu_gem_pin(struct virtio_gpu_object *bo); +void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo); /* virtgpu_vq.c */ int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev);

diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c index 7db48d17ee3a..625c05d625bf 100644 --- a/drivers/gpu/drm/virtio/virtgpu_gem.c +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -294,3 +294,22 @@ void virtio_gpu_array_put_free_work(struct work_struct *work) } spin_unlock(&vgdev->obj_free_lock); } + +int virtio_gpu_gem_pin(struct virtio_gpu_object *bo) +{ + int err; + + if (virtio_gpu_is_shmem(bo)) { + err = drm_gem_shmem_pin(&bo->base); + if (err) + return err; + } + + return 0; +} + +void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo) +{ + if (virtio_gpu_is_shmem(bo)) + drm_gem_shmem_unpin(&bo->base); +}

diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c index a72a2dbda031..162fb8a44d71 100644 --- a/drivers/gpu/drm/virtio/virtgpu_plane.c +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -248,20 +248,28 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane, struct virtio_gpu_device *vgdev = dev->dev_private; struct virtio_gpu_framebuffer *vgfb; struct virtio_gpu_object *bo; + int err; if (!new_state->fb) return 0; vgfb = to_virtio_gpu_framebuffer(new_state->fb); bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]); - if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob)) + + err = virtio_gpu_gem_pin(bo); + if (err) + return err; + + if (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob) return 0; if (bo->dumb && (plane->state->fb != new_state->fb)) { vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); - if (!vgfb->fence) + if (!vgfb->fence) { + virtio_gpu_gem_unpin(bo); return -ENOMEM; + } } return 0;
@@ -271,15 +279,20 @@ static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane, struct drm_plane_state *state) { struct virtio_gpu_framebuffer *vgfb; + struct virtio_gpu_object *bo; if (!state->fb) return; vgfb = to_virtio_gpu_framebuffer(state->fb); + bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]); + if (vgfb->fence) { dma_fence_put(&vgfb->fence->f); vgfb->fence = NULL; } + + virtio_gpu_gem_unpin(bo); } static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
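The rule the patch enforces, stated generically: a shmem-backed framebuffer must stay pinned from prepare_fb() until cleanup_fb() so the shrinker never migrates pages that are being scanned out. A sketch for a hypothetical shmem-based driver (foo_* names invented); the virtio-gpu code above additionally handles blob objects and fences.

#include <drm/drm_framebuffer.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_gem_shmem_helper.h>
#include <drm/drm_plane.h>

static int foo_plane_prepare_fb(struct drm_plane *plane,
				struct drm_plane_state *new_state)
{
	struct drm_gem_object *obj;

	if (!new_state->fb)
		return 0;

	obj = drm_gem_fb_get_obj(new_state->fb, 0);

	/* Pin for the whole time the framebuffer may be scanned out. */
	return drm_gem_shmem_pin(to_drm_gem_shmem_obj(obj));
}

static void foo_plane_cleanup_fb(struct drm_plane *plane,
				 struct drm_plane_state *old_state)
{
	struct drm_gem_object *obj;

	if (!old_state->fb)
		return;

	obj = drm_gem_fb_get_obj(old_state->fb, 0);

	drm_gem_shmem_unpin(to_drm_gem_shmem_obj(obj));
}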
From patchwork Fri Jan 5 18:46:22 2024
X-Patchwork-Id: 185511
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 28/30] drm/virtio: Attach shmem BOs dynamically
Date: Fri, 5 Jan 2024 21:46:22 +0300
Message-ID: <20240105184624.508603-29-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

Prepare for the addition of memory shrinker support by attaching shmem pages to the host dynamically on first use. Previously the attachment vq command wasn't fenced and no vq kick was made in the BO creation code path, hence the attachment was already happening dynamically, but implicitly. Making the attachment explicitly dynamic will allow simplifying and reusing more code when the shrinker is added.

virtio_gpu_object_shmem_init() now works under the held reservation lock, which will be important for the shrinker to avoid moving pages while they are in active use by the driver.
Acked-by: Gerd Hoffmann Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/virtio/virtgpu_drv.h | 7 +++ drivers/gpu/drm/virtio/virtgpu_gem.c | 26 +++++++++ drivers/gpu/drm/virtio/virtgpu_ioctl.c | 32 +++++++---- drivers/gpu/drm/virtio/virtgpu_object.c | 73 ++++++++++++++++++++----- drivers/gpu/drm/virtio/virtgpu_submit.c | 15 ++++- 5 files changed, 125 insertions(+), 28 deletions(-) diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h index 83d1e4622292..1837dc7ea9fb 100644 --- a/drivers/gpu/drm/virtio/virtgpu_drv.h +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h @@ -92,6 +92,7 @@ struct virtio_gpu_object { uint32_t hw_res_handle; bool dumb; bool created; + bool detached; bool host3d_blob, guest_blob; uint32_t blob_mem, blob_flags; @@ -318,6 +319,8 @@ void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs); void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev, struct virtio_gpu_object_array *objs); void virtio_gpu_array_put_free_work(struct work_struct *work); +int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object_array *objs); int virtio_gpu_gem_pin(struct virtio_gpu_object *bo); void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo); @@ -458,6 +461,10 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo); +int virtio_gpu_reattach_shmem_object_locked(struct virtio_gpu_object *bo); + +int virtio_gpu_reattach_shmem_object(struct virtio_gpu_object *bo); + int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev, uint32_t *resid); /* virtgpu_prime.c */ diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c index 625c05d625bf..97e67064c97e 100644 --- a/drivers/gpu/drm/virtio/virtgpu_gem.c +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c @@ -295,6 +295,26 @@ void virtio_gpu_array_put_free_work(struct work_struct *work) spin_unlock(&vgdev->obj_free_lock); } +int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object_array *objs) +{ + struct virtio_gpu_object *bo; + int ret = 0; + u32 i; + + for (i = 0; i < objs->nents; i++) { + bo = gem_to_virtio_gpu_obj(objs->objs[i]); + + if (virtio_gpu_is_shmem(bo) && bo->detached) { + ret = virtio_gpu_reattach_shmem_object_locked(bo); + if (ret) + break; + } + } + + return ret; +} + int virtio_gpu_gem_pin(struct virtio_gpu_object *bo) { int err; @@ -303,6 +323,12 @@ int virtio_gpu_gem_pin(struct virtio_gpu_object *bo) err = drm_gem_shmem_pin(&bo->base); if (err) return err; + + err = virtio_gpu_reattach_shmem_object(bo); + if (err) { + drm_gem_shmem_unpin(&bo->base); + return err; + } } return 0; diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c index e4f76f315550..c7da22006149 100644 --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c @@ -256,6 +256,10 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev, if (ret != 0) goto err_put_free; + ret = virtio_gpu_array_prepare(vgdev, objs); + if (ret) + goto err_unlock; + fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); if (!fence) { ret = -ENOMEM; @@ -298,11 +302,25 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data, goto err_put_free; } + ret = virtio_gpu_array_lock_resv(objs); + if (ret != 0) + goto err_put_free; + + ret = virtio_gpu_array_prepare(vgdev, objs); + if (ret) + goto err_unlock; + + fence = virtio_gpu_fence_alloc(vgdev, 
vgdev->fence_drv.context, 0); + if (!fence) { + ret = -ENOMEM; + goto err_unlock; + } + if (!vgdev->has_virgl_3d) { virtio_gpu_cmd_transfer_to_host_2d (vgdev, offset, args->box.w, args->box.h, args->box.x, args->box.y, - objs, NULL); + objs, fence); } else { virtio_gpu_create_context(dev, file); @@ -311,23 +329,13 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data, goto err_put_free; } - ret = virtio_gpu_array_lock_resv(objs); - if (ret != 0) - goto err_put_free; - - ret = -ENOMEM; - fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, - 0); - if (!fence) - goto err_unlock; - virtio_gpu_cmd_transfer_to_host_3d (vgdev, vfpriv ? vfpriv->ctx_id : 0, offset, args->level, args->stride, args->layer_stride, &args->box, objs, fence); - dma_fence_put(&fence->f); } + dma_fence_put(&fence->f); virtio_gpu_notify(vgdev); return 0; diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c index e58528c562ef..de347aa3b9a8 100644 --- a/drivers/gpu/drm/virtio/virtgpu_object.c +++ b/drivers/gpu/drm/virtio/virtgpu_object.c @@ -143,7 +143,7 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, struct sg_table *pages; int si; - pages = drm_gem_shmem_get_pages_sgt(&bo->base); + pages = drm_gem_shmem_get_pages_sgt_locked(&bo->base); if (IS_ERR(pages)) return PTR_ERR(pages); @@ -177,6 +177,40 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, return 0; } +int virtio_gpu_reattach_shmem_object_locked(struct virtio_gpu_object *bo) +{ + struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private; + struct virtio_gpu_mem_entry *ents; + unsigned int nents; + int err; + + if (!bo->detached) + return 0; + + err = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents); + if (err) + return err; + + virtio_gpu_object_attach(vgdev, bo, ents, nents); + + bo->detached = false; + + return 0; +} + +int virtio_gpu_reattach_shmem_object(struct virtio_gpu_object *bo) +{ + int ret; + + ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL); + if (ret) + return ret; + ret = virtio_gpu_reattach_shmem_object_locked(bo); + dma_resv_unlock(bo->base.base.resv); + + return ret; +} + int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, struct virtio_gpu_object_params *params, struct virtio_gpu_object **bo_ptr, @@ -207,45 +241,56 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, bo->dumb = params->dumb; - ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents); - if (ret != 0) - goto err_put_id; + if (bo->blob_mem == VIRTGPU_BLOB_MEM_GUEST) + bo->guest_blob = true; if (fence) { ret = -ENOMEM; objs = virtio_gpu_array_alloc(1); if (!objs) - goto err_free_entry; + goto err_put_id; virtio_gpu_array_add_obj(objs, &bo->base.base); ret = virtio_gpu_array_lock_resv(objs); if (ret != 0) goto err_put_objs; + } else { + ret = dma_resv_lock(bo->base.base.resv, NULL); + if (ret) + goto err_put_id; } if (params->blob) { - if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST) - bo->guest_blob = true; + ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents); + if (ret) + goto err_unlock_objs; + } else { + bo->detached = true; + } + if (params->blob) virtio_gpu_cmd_resource_create_blob(vgdev, bo, params, ents, nents); - } else if (params->virgl) { + else if (params->virgl) virtio_gpu_cmd_resource_create_3d(vgdev, bo, params, objs, fence); - virtio_gpu_object_attach(vgdev, bo, ents, nents); - } else { + else virtio_gpu_cmd_create_resource(vgdev, bo, params, objs, fence); - 
virtio_gpu_object_attach(vgdev, bo, ents, nents); - } + + if (!fence) + dma_resv_unlock(bo->base.base.resv); *bo_ptr = bo; return 0; +err_unlock_objs: + if (fence) + virtio_gpu_array_unlock_resv(objs); + else + dma_resv_unlock(bo->base.base.resv); err_put_objs: virtio_gpu_array_put_free(objs); -err_free_entry: - kvfree(ents); err_put_id: virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle); err_put_pages:

diff --git a/drivers/gpu/drm/virtio/virtgpu_submit.c b/drivers/gpu/drm/virtio/virtgpu_submit.c index 5c514946bbad..6e4ef2593e8f 100644 --- a/drivers/gpu/drm/virtio/virtgpu_submit.c +++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
@@ -464,8 +464,19 @@ static void virtio_gpu_install_out_fence_fd(struct virtio_gpu_submit *submit) static int virtio_gpu_lock_buflist(struct virtio_gpu_submit *submit) { - if (submit->buflist) - return virtio_gpu_array_lock_resv(submit->buflist); + int err; + + if (submit->buflist) { + err = virtio_gpu_array_lock_resv(submit->buflist); + if (err) + return err; + + err = virtio_gpu_array_prepare(submit->vgdev, submit->buflist); + if (err) { + virtio_gpu_array_unlock_resv(submit->buflist); + return err; + } + } return 0; }
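The locking structure the patch relies on, shown in isolation: an unlocked wrapper takes the reservation lock and calls a _locked helper, while paths that already hold the lock (such as job submission after virtio_gpu_array_lock_resv()) call the _locked variant directly. The foo_* names are placeholders for the reattach/swap-in work.

#include <linux/dma-resv.h>
#include <drm/drm_gem.h>

static int foo_reattach_locked(struct drm_gem_object *obj)
{
	dma_resv_assert_held(obj->resv);

	/* ... swap pages back in and attach them to the device ... */

	return 0;
}

static int foo_reattach(struct drm_gem_object *obj)
{
	int ret;

	ret = dma_resv_lock_interruptible(obj->resv, NULL);
	if (ret)
		return ret;

	ret = foo_reattach_locked(obj);

	dma_resv_unlock(obj->resv);

	return ret;
}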
From patchwork Fri Jan 5 18:46:23 2024
X-Patchwork-Id: 185512
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König, Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 29/30] drm/virtio: Support shmem shrinking
Date: Fri, 5 Jan 2024 21:46:23 +0300
Message-ID: <20240105184624.508603-30-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

Support the generic drm-shmem memory shrinker and add a new madvise IOCTL to the VirtIO-GPU driver. The BO cache manager of the Mesa driver will mark BOs as "don't need" using the new IOCTL, letting the shrinker purge the marked BOs on OOM; the shrinker will also evict unpurgeable shmem BOs from memory if the guest supports a swap file or partition.

Link: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/15278
Acked-by: Gerd Hoffmann
Signed-off-by: Daniel Almeida
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_drv.h    | 13 +++++-
 drivers/gpu/drm/virtio/virtgpu_gem.c    | 48 +++++++++++++++++--
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  | 25 ++++++++++
 drivers/gpu/drm/virtio/virtgpu_kms.c    |  8 ++++
 drivers/gpu/drm/virtio/virtgpu_object.c | 61 +++++++++++++++++++++++++
 drivers/gpu/drm/virtio/virtgpu_vq.c     | 40 ++++++++++++++++
 include/uapi/drm/virtgpu_drm.h          | 14 ++++++
 7 files changed, 204 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h index 1837dc7ea9fb..37188c00e161 100644 --- a/drivers/gpu/drm/virtio/virtgpu_drv.h +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -283,7 +283,7 @@ struct virtio_gpu_fpriv { }; /* virtgpu_ioctl.c */ -#define DRM_VIRTIO_NUM_IOCTLS 12 +#define DRM_VIRTIO_NUM_IOCTLS 13 extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS]; void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file);
@@ -321,6 +321,8 @@ void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev, void virtio_gpu_array_put_free_work(struct work_struct *work); int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev, struct virtio_gpu_object_array *objs); +int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo); +int virtio_gpu_gem_madvise(struct virtio_gpu_object *obj, int madv); int virtio_gpu_gem_pin(struct virtio_gpu_object *bo); void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo);
@@ -334,6 +336,8 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev, struct virtio_gpu_fence *fence); void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *bo); +int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev, +
struct virtio_gpu_object *bo); void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev, uint64_t offset, uint32_t width, uint32_t height, @@ -354,6 +358,9 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *obj, struct virtio_gpu_mem_entry *ents, unsigned int nents); +void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *obj, + struct virtio_gpu_fence *fence); void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev, struct virtio_gpu_output *output); int virtio_gpu_cmd_get_display_info(struct virtio_gpu_device *vgdev); @@ -497,4 +504,8 @@ void virtio_gpu_vram_unmap_dma_buf(struct device *dev, int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data, struct drm_file *file); +/* virtgpu_gem_shrinker.c */ +int virtio_gpu_gem_shrinker_init(struct virtio_gpu_device *vgdev); +void virtio_gpu_gem_shrinker_fini(struct virtio_gpu_device *vgdev); + #endif diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c index 97e67064c97e..68d27ae582ba 100644 --- a/drivers/gpu/drm/virtio/virtgpu_gem.c +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c @@ -147,10 +147,20 @@ void virtio_gpu_gem_object_close(struct drm_gem_object *obj, struct virtio_gpu_device *vgdev = obj->dev->dev_private; struct virtio_gpu_fpriv *vfpriv = file->driver_priv; struct virtio_gpu_object_array *objs; + struct virtio_gpu_object *bo; if (!vgdev->has_virgl_3d) return; + bo = gem_to_virtio_gpu_obj(obj); + + /* + * Purged BO was already detached and released, the resource ID + * is invalid by now. + */ + if (!virtio_gpu_gem_madvise(bo, VIRTGPU_MADV_WILLNEED)) + return; + objs = virtio_gpu_array_alloc(1); if (!objs) return; @@ -305,16 +315,46 @@ int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev, for (i = 0; i < objs->nents; i++) { bo = gem_to_virtio_gpu_obj(objs->objs[i]); - if (virtio_gpu_is_shmem(bo) && bo->detached) { - ret = virtio_gpu_reattach_shmem_object_locked(bo); - if (ret) - break; + if (virtio_gpu_is_shmem(bo)) { + if (bo->base.madv) + return -EINVAL; + + if (bo->detached) { + ret = virtio_gpu_reattach_shmem_object_locked(bo); + if (ret) + break; + } } } return ret; } +int virtio_gpu_gem_madvise(struct virtio_gpu_object *bo, int madv) +{ + if (virtio_gpu_is_shmem(bo)) + return drm_gem_shmem_object_madvise(&bo->base.base, madv); + + return 1; +} + +int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo) +{ + struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private; + int err; + + if (bo->created) { + err = virtio_gpu_cmd_release_resource(vgdev, bo); + if (err) + return err; + + virtio_gpu_notify(vgdev); + bo->created = false; + } + + return 0; +} + int virtio_gpu_gem_pin(struct virtio_gpu_object *bo) { int err; diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c index c7da22006149..a42799146090 100644 --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c @@ -701,6 +701,28 @@ static int virtio_gpu_context_init_ioctl(struct drm_device *dev, return ret; } +static int virtio_gpu_madvise_ioctl(struct drm_device *dev, + void *data, + struct drm_file *file) +{ + struct drm_virtgpu_madvise *args = data; + struct virtio_gpu_object *bo; + struct drm_gem_object *obj; + + if (args->madv > VIRTGPU_MADV_DONTNEED) + return -EOPNOTSUPP; + + obj = drm_gem_object_lookup(file, args->bo_handle); + if (!obj) + return -ENOENT; + + bo = gem_to_virtio_gpu_obj(obj); + args->retained = virtio_gpu_gem_madvise(bo, 
args->madv); + drm_gem_object_put(obj); + + return 0; +} + struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = { DRM_IOCTL_DEF_DRV(VIRTGPU_MAP, virtio_gpu_map_ioctl, DRM_RENDER_ALLOW), @@ -740,4 +762,7 @@ struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = { DRM_IOCTL_DEF_DRV(VIRTGPU_CONTEXT_INIT, virtio_gpu_context_init_ioctl, DRM_RENDER_ALLOW), + + DRM_IOCTL_DEF_DRV(VIRTGPU_MADVISE, virtio_gpu_madvise_ioctl, + DRM_RENDER_ALLOW), }; diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c index 5a3b5aaed1f3..43e237082cec 100644 --- a/drivers/gpu/drm/virtio/virtgpu_kms.c +++ b/drivers/gpu/drm/virtio/virtgpu_kms.c @@ -245,6 +245,12 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev) goto err_scanouts; } + ret = drmm_gem_shmem_init(dev); + if (ret) { + DRM_ERROR("shmem init failed\n"); + goto err_modeset; + } + virtio_device_ready(vgdev->vdev); if (num_capsets) @@ -259,6 +265,8 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev) } return 0; +err_modeset: + virtio_gpu_modeset_fini(vgdev); err_scanouts: virtio_gpu_free_vbufs(vgdev); err_vbufs: diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c index de347aa3b9a8..86888c1ae5d4 100644 --- a/drivers/gpu/drm/virtio/virtgpu_object.c +++ b/drivers/gpu/drm/virtio/virtgpu_object.c @@ -98,6 +98,60 @@ static void virtio_gpu_free_object(struct drm_gem_object *obj) virtio_gpu_cleanup_object(bo); } +static int virtio_gpu_detach_object_fenced(struct virtio_gpu_object *bo) +{ + struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private; + struct virtio_gpu_fence *fence; + + if (bo->detached) + return 0; + + fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); + if (!fence) + return -ENOMEM; + + virtio_gpu_object_detach(vgdev, bo, fence); + virtio_gpu_notify(vgdev); + + dma_fence_wait(&fence->f, false); + dma_fence_put(&fence->f); + + bo->detached = true; + + return 0; +} + +static int virtio_gpu_shmem_evict(struct drm_gem_object *obj) +{ + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); + int err; + + /* blob is not movable, it's impossible to detach it from host */ + if (bo->blob_mem) + return -EBUSY; + + /* + * At first tell host to stop using guest's memory to ensure that + * host won't touch the released guest's memory once it's gone. 
+ */ + err = virtio_gpu_detach_object_fenced(bo); + if (err) + return err; + + if (drm_gem_shmem_is_purgeable(&bo->base)) { + err = virtio_gpu_gem_host_mem_release(bo); + if (err) + return err; + + drm_gem_shmem_purge_locked(&bo->base); + } else { + bo->base.pages_mark_dirty_on_put = 1; + drm_gem_shmem_evict_locked(&bo->base); + } + + return 0; +} + static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = { .free = virtio_gpu_free_object, .open = virtio_gpu_gem_object_open, @@ -111,6 +165,7 @@ static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = { .vunmap = drm_gem_shmem_object_vunmap, .mmap = drm_gem_shmem_object_mmap, .vm_ops = &drm_gem_shmem_vm_ops, + .evict = virtio_gpu_shmem_evict, }; bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo) @@ -187,6 +242,10 @@ int virtio_gpu_reattach_shmem_object_locked(struct virtio_gpu_object *bo) if (!bo->detached) return 0; + err = drm_gem_shmem_swapin_locked(&bo->base); + if (err) + return err; + err = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents); if (err) return err; @@ -240,6 +299,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, goto err_put_pages; bo->dumb = params->dumb; + bo->blob_mem = params->blob_mem; + bo->blob_flags = params->blob_flags; if (bo->blob_mem == VIRTGPU_BLOB_MEM_GUEST) bo->guest_blob = true; diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c index b1a00c0c25a7..14ab470f413a 100644 --- a/drivers/gpu/drm/virtio/virtgpu_vq.c +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c @@ -545,6 +545,21 @@ void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev, virtio_gpu_cleanup_object(bo); } +int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *bo) +{ + struct virtio_gpu_resource_unref *cmd_p; + struct virtio_gpu_vbuffer *vbuf; + + cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); + memset(cmd_p, 0, sizeof(*cmd_p)); + + cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_UNREF); + cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); + + return virtio_gpu_queue_ctrl_buffer(vgdev, vbuf); +} + void virtio_gpu_cmd_set_scanout(struct virtio_gpu_device *vgdev, uint32_t scanout_id, uint32_t resource_id, uint32_t width, uint32_t height, @@ -645,6 +660,23 @@ virtio_gpu_cmd_resource_attach_backing(struct virtio_gpu_device *vgdev, virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence); } +static void +virtio_gpu_cmd_resource_detach_backing(struct virtio_gpu_device *vgdev, + u32 resource_id, + struct virtio_gpu_fence *fence) +{ + struct virtio_gpu_resource_attach_backing *cmd_p; + struct virtio_gpu_vbuffer *vbuf; + + cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); + memset(cmd_p, 0, sizeof(*cmd_p)); + + cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING); + cmd_p->resource_id = cpu_to_le32(resource_id); + + virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence); +} + static void virtio_gpu_cmd_get_display_info_cb(struct virtio_gpu_device *vgdev, struct virtio_gpu_vbuffer *vbuf) { @@ -1107,6 +1139,14 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev, ents, nents, NULL); } +void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *obj, + struct virtio_gpu_fence *fence) +{ + virtio_gpu_cmd_resource_detach_backing(vgdev, obj->hw_res_handle, + fence); +} + void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev, struct virtio_gpu_output *output) { diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h index 
c2ce71987e9b..78255060bc9a 100644 --- a/include/uapi/drm/virtgpu_drm.h +++ b/include/uapi/drm/virtgpu_drm.h @@ -48,6 +48,7 @@ extern "C" { #define DRM_VIRTGPU_GET_CAPS 0x09 #define DRM_VIRTGPU_RESOURCE_CREATE_BLOB 0x0a #define DRM_VIRTGPU_CONTEXT_INIT 0x0b +#define DRM_VIRTGPU_MADVISE 0x0c #define VIRTGPU_EXECBUF_FENCE_FD_IN 0x01 #define VIRTGPU_EXECBUF_FENCE_FD_OUT 0x02 @@ -213,6 +214,15 @@ struct drm_virtgpu_context_init { __u64 ctx_set_params; }; +#define VIRTGPU_MADV_WILLNEED 0 +#define VIRTGPU_MADV_DONTNEED 1 +struct drm_virtgpu_madvise { + __u32 bo_handle; + __u32 retained; /* out, non-zero if BO can be used */ + __u32 madv; + __u32 pad; +}; + /* * Event code that's given when VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK is in * effect. The event size is sizeof(drm_event), since there is no additional @@ -263,6 +273,10 @@ struct drm_virtgpu_context_init { DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_CONTEXT_INIT, \ struct drm_virtgpu_context_init) +#define DRM_IOCTL_VIRTGPU_MADVISE \ + DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MADVISE, \ + struct drm_virtgpu_madvise) + #if defined(__cplusplus) } #endif
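For illustration only (not part of the patch): a minimal userspace sketch of how a BO cache might drive the new IOCTL. The ioctl number, struct layout and madvise values come from the UAPI diff above; the helper names, libdrm's drmIoctl() wrapper and the already-open render-node fd are assumptions.

#include <stdbool.h>
#include <stdint.h>
#include <xf86drm.h>
#include <drm/virtgpu_drm.h>	/* UAPI header added by this patch; install path may vary */

/* Mark an idle, cached BO as "don't need" so the kernel may purge it. */
static int bo_cache_mark_dontneed(int fd, uint32_t bo_handle)
{
	struct drm_virtgpu_madvise args = {
		.bo_handle = bo_handle,
		.madv = VIRTGPU_MADV_DONTNEED,
	};

	return drmIoctl(fd, DRM_IOCTL_VIRTGPU_MADVISE, &args);
}

/*
 * Before reusing a cached BO, flip it back to "will need". If the kernel
 * reports retained == 0, the BO was purged and must be reallocated.
 */
static int bo_cache_mark_willneed(int fd, uint32_t bo_handle, bool *retained)
{
	struct drm_virtgpu_madvise args = {
		.bo_handle = bo_handle,
		.madv = VIRTGPU_MADV_WILLNEED,
	};
	int ret;

	ret = drmIoctl(fd, DRM_IOCTL_VIRTGPU_MADVISE, &args);
	if (ret)
		return ret;

	*retained = args.retained != 0;
	return 0;
}

Mesa's actual BO-cache integration is the merge request linked in the commit message.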
From patchwork Fri Jan 5 18:46:24 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 185513 Received: from workpc..
(cola.collaboradmins.com [195.201.22.229]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madrid.collaboradmins.com (Postfix) with ESMTPSA id 928B3378204D; Fri, 5 Jan 2024 18:47:31 +0000 (UTC) From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , =?utf-8?q?Christian_K=C3=B6nig?= , Qiang Yu , Steven Price , Boris Brezillon , Emma Anholt , Melissa Wen Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org Subject: [PATCH v19 30/30] drm/panfrost: Switch to generic memory shrinker Date: Fri, 5 Jan 2024 21:46:24 +0300 Message-ID: <20240105184624.508603-31-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com> References: <20240105184624.508603-1-dmitry.osipenko@collabora.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787277753616761763 X-GMAIL-MSGID: 1787277753616761763 Replace Panfrost's custom memory shrinker with a common drm-shmem memory shrinker. Co-developed-by: Boris Brezillon Signed-off-by: Boris Brezillon Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 4 +- drivers/gpu/drm/panfrost/Makefile | 1 - drivers/gpu/drm/panfrost/panfrost_device.h | 4 - drivers/gpu/drm/panfrost/panfrost_drv.c | 29 ++-- drivers/gpu/drm/panfrost/panfrost_gem.c | 60 ++++---- drivers/gpu/drm/panfrost/panfrost_gem.h | 9 -- .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 140 ------------------ drivers/gpu/drm/panfrost/panfrost_job.c | 18 ++- drivers/gpu/drm/panfrost/panfrost_mmu.c | 24 ++- include/drm/drm_gem_shmem_helper.h | 7 - 10 files changed, 83 insertions(+), 213 deletions(-) delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 7d2fe12bd793..56e88378079b 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -89,8 +89,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private) if (ret) goto err_release; - INIT_LIST_HEAD(&shmem->madv_list); - if (!private) { /* * Our buffers are kept pinned, so allocating them @@ -619,6 +617,8 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; + drm_WARN_ON_ONCE(obj->dev, !drm_gem_shmem_is_purgeable(shmem)); + drm_gem_shmem_shrinker_put_pages_locked(shmem); drm_gem_free_mmap_offset(obj); diff --git a/drivers/gpu/drm/panfrost/Makefile b/drivers/gpu/drm/panfrost/Makefile index 2c01c1e7523e..f2cb1ab0a32d 100644 --- a/drivers/gpu/drm/panfrost/Makefile +++ b/drivers/gpu/drm/panfrost/Makefile @@ -5,7 +5,6 @@ panfrost-y := \ panfrost_device.o \ panfrost_devfreq.o \ panfrost_gem.o \ - panfrost_gem_shrinker.o \ panfrost_gpu.o \ panfrost_job.o \ panfrost_mmu.o \ diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h index 62f7e3527385..cea6df9cd650 100644 --- a/drivers/gpu/drm/panfrost/panfrost_device.h +++ b/drivers/gpu/drm/panfrost/panfrost_device.h @@ -140,10 +140,6 @@ struct panfrost_device { atomic_t pending; } 
reset; - struct mutex shrinker_lock; - struct list_head shrinker_list; - struct shrinker *shrinker; - struct panfrost_devfreq pfdevfreq; struct { diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c index a15d62f19afb..5c730d15a24d 100644 --- a/drivers/gpu/drm/panfrost/panfrost_drv.c +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c @@ -171,7 +171,6 @@ panfrost_lookup_bos(struct drm_device *dev, break; } - atomic_inc(&bo->gpu_usecount); job->mappings[i] = mapping; } @@ -397,7 +396,6 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data, { struct panfrost_file_priv *priv = file_priv->driver_priv; struct drm_panfrost_madvise *args = data; - struct panfrost_device *pfdev = dev->dev_private; struct drm_gem_object *gem_obj; struct panfrost_gem_object *bo; int ret = 0; @@ -410,11 +408,15 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data, bo = to_panfrost_bo(gem_obj); + if (bo->is_heap) { + args->retained = 1; + goto out_put_object; + } + ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL); if (ret) goto out_put_object; - mutex_lock(&pfdev->shrinker_lock); mutex_lock(&bo->mappings.lock); if (args->madv == PANFROST_MADV_DONTNEED) { struct panfrost_gem_mapping *first; @@ -440,17 +442,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data, args->retained = drm_gem_shmem_madvise_locked(&bo->base, args->madv); - if (args->retained) { - if (args->madv == PANFROST_MADV_DONTNEED) - list_move_tail(&bo->base.madv_list, - &pfdev->shrinker_list); - else if (args->madv == PANFROST_MADV_WILLNEED) - list_del_init(&bo->base.madv_list); - } - out_unlock_mappings: mutex_unlock(&bo->mappings.lock); - mutex_unlock(&pfdev->shrinker_lock); dma_resv_unlock(bo->base.base.resv); out_put_object: drm_gem_object_put(gem_obj); @@ -635,9 +628,6 @@ static int panfrost_probe(struct platform_device *pdev) ddev->dev_private = pfdev; pfdev->ddev = ddev; - mutex_init(&pfdev->shrinker_lock); - INIT_LIST_HEAD(&pfdev->shrinker_list); - err = panfrost_device_init(pfdev); if (err) { if (err != -EPROBE_DEFER) @@ -659,13 +649,13 @@ static int panfrost_probe(struct platform_device *pdev) if (err < 0) goto err_out1; - err = panfrost_gem_shrinker_init(ddev); - if (err) - goto err_out2; + err = drmm_gem_shmem_init(ddev); + if (err < 0) + goto err_unregister_dev; return 0; -err_out2: +err_unregister_dev: drm_dev_unregister(ddev); err_out1: pm_runtime_disable(pfdev->dev); @@ -682,7 +672,6 @@ static void panfrost_remove(struct platform_device *pdev) struct drm_device *ddev = pfdev->ddev; drm_dev_unregister(ddev); - panfrost_gem_shrinker_cleanup(ddev); pm_runtime_get_sync(pfdev->dev); pm_runtime_disable(pfdev->dev); diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c index 8c26b7e41b95..05eb5a89c4ed 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c @@ -17,17 +17,6 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj) { struct panfrost_gem_object *bo = to_panfrost_bo(obj); - struct panfrost_device *pfdev = obj->dev->dev_private; - - /* - * Make sure the BO is no longer inserted in the shrinker list before - * taking care of the destruction itself. If we don't do that we have a - * race condition between this function and what's done in - * panfrost_gem_shrinker_scan(). 
- */ - mutex_lock(&pfdev->shrinker_lock); - list_del_init(&bo->base.madv_list); - mutex_unlock(&pfdev->shrinker_lock); /* * If we still have mappings attached to the BO, there's a problem in @@ -57,26 +46,23 @@ panfrost_gem_mapping_get(struct panfrost_gem_object *bo, return mapping; } -static void -panfrost_gem_teardown_mapping(struct panfrost_gem_mapping *mapping) +static void panfrost_gem_mapping_release(struct kref *kref) { + struct panfrost_gem_mapping *mapping = + container_of(kref, struct panfrost_gem_mapping, refcount); + struct panfrost_gem_object *bo = mapping->obj; + struct panfrost_device *pfdev = bo->base.base.dev->dev_private; + + /* Shrinker may purge the mapping at the same time. */ + dma_resv_lock(mapping->obj->base.base.resv, NULL); if (mapping->active) panfrost_mmu_unmap(mapping); + dma_resv_unlock(mapping->obj->base.base.resv); spin_lock(&mapping->mmu->mm_lock); if (drm_mm_node_allocated(&mapping->mmnode)) drm_mm_remove_node(&mapping->mmnode); spin_unlock(&mapping->mmu->mm_lock); -} - -static void panfrost_gem_mapping_release(struct kref *kref) -{ - struct panfrost_gem_mapping *mapping = - container_of(kref, struct panfrost_gem_mapping, refcount); - struct panfrost_gem_object *bo = mapping->obj; - struct panfrost_device *pfdev = bo->base.base.dev->dev_private; - - panfrost_gem_teardown_mapping(mapping); /* On heap BOs, release the sgts created in the fault handler path. */ if (bo->sgts) { @@ -117,12 +103,14 @@ void panfrost_gem_mapping_put(struct panfrost_gem_mapping *mapping) kref_put(&mapping->refcount, panfrost_gem_mapping_release); } -void panfrost_gem_teardown_mappings_locked(struct panfrost_gem_object *bo) +void panfrost_gem_evict_mappings_locked(struct panfrost_gem_object *bo) { struct panfrost_gem_mapping *mapping; - list_for_each_entry(mapping, &bo->mappings.list, node) - panfrost_gem_teardown_mapping(mapping); + list_for_each_entry(mapping, &bo->mappings.list, node) { + if (mapping->active) + panfrost_mmu_unmap(mapping); + } } int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv) @@ -251,6 +239,25 @@ static size_t panfrost_gem_rss(struct drm_gem_object *obj) return 0; } +static int panfrost_shmem_evict(struct drm_gem_object *obj) +{ + struct panfrost_gem_object *bo = to_panfrost_bo(obj); + + if (!drm_gem_shmem_is_purgeable(&bo->base)) + return -EBUSY; + + if (!mutex_trylock(&bo->mappings.lock)) + return -EBUSY; + + panfrost_gem_evict_mappings_locked(bo); + + drm_gem_shmem_purge_locked(&bo->base); + + mutex_unlock(&bo->mappings.lock); + + return 0; +} + static const struct drm_gem_object_funcs panfrost_gem_funcs = { .free = panfrost_gem_free_object, .open = panfrost_gem_open, @@ -265,6 +272,7 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = { .status = panfrost_gem_status, .rss = panfrost_gem_rss, .vm_ops = &drm_gem_shmem_vm_ops, + .evict = panfrost_shmem_evict, }; /** diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h index 7516b7ecf7fe..8ddc2d310d29 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.h +++ b/drivers/gpu/drm/panfrost/panfrost_gem.h @@ -30,12 +30,6 @@ struct panfrost_gem_object { struct mutex lock; } mappings; - /* - * Count the number of jobs referencing this BO so we don't let the - * shrinker reclaim this object prematurely. 
- */ - atomic_t gpu_usecount; - /* * Object chunk size currently mapped onto physical memory */ @@ -86,7 +80,4 @@ panfrost_gem_mapping_get(struct panfrost_gem_object *bo, void panfrost_gem_mapping_put(struct panfrost_gem_mapping *mapping); void panfrost_gem_teardown_mappings_locked(struct panfrost_gem_object *bo); -int panfrost_gem_shrinker_init(struct drm_device *dev); -void panfrost_gem_shrinker_cleanup(struct drm_device *dev); - #endif /* __PANFROST_GEM_H__ */ diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c deleted file mode 100644 index 7b4deba803ed..000000000000 --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c +++ /dev/null @@ -1,140 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2019 Arm Ltd. - * - * Based on msm_gem_freedreno.c: - * Copyright (C) 2016 Red Hat - * Author: Rob Clark - */ - -#include - -#include -#include - -#include "panfrost_device.h" -#include "panfrost_gem.h" -#include "panfrost_mmu.h" - -static bool panfrost_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem) -{ - return (shmem->madv > 0) && - !refcount_read(&shmem->pages_pin_count) && shmem->sgt && - !shmem->base.dma_buf && !shmem->base.import_attach; -} - -static unsigned long -panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) -{ - struct panfrost_device *pfdev = shrinker->private_data; - struct drm_gem_shmem_object *shmem; - unsigned long count = 0; - - if (!mutex_trylock(&pfdev->shrinker_lock)) - return 0; - - list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) { - if (panfrost_gem_shmem_is_purgeable(shmem)) - count += shmem->base.size >> PAGE_SHIFT; - } - - mutex_unlock(&pfdev->shrinker_lock); - - return count; -} - -static bool panfrost_gem_purge(struct drm_gem_object *obj) -{ - struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); - struct panfrost_gem_object *bo = to_panfrost_bo(obj); - bool ret = false; - - if (atomic_read(&bo->gpu_usecount)) - return false; - - if (!mutex_trylock(&bo->mappings.lock)) - return false; - - if (!dma_resv_trylock(shmem->base.resv)) - goto unlock_mappings; - - /* BO might have become unpurgeable if the last pages_use_count ref - * was dropped, but the BO hasn't been destroyed yet. - */ - if (!panfrost_gem_shmem_is_purgeable(shmem)) - goto unlock_mappings; - - panfrost_gem_teardown_mappings_locked(bo); - drm_gem_shmem_purge_locked(&bo->base); - ret = true; - - dma_resv_unlock(shmem->base.resv); - -unlock_mappings: - mutex_unlock(&bo->mappings.lock); - return ret; -} - -static unsigned long -panfrost_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) -{ - struct panfrost_device *pfdev = shrinker->private_data; - struct drm_gem_shmem_object *shmem, *tmp; - unsigned long freed = 0; - - if (!mutex_trylock(&pfdev->shrinker_lock)) - return SHRINK_STOP; - - list_for_each_entry_safe(shmem, tmp, &pfdev->shrinker_list, madv_list) { - if (freed >= sc->nr_to_scan) - break; - if (panfrost_gem_shmem_is_purgeable(shmem) && - panfrost_gem_purge(&shmem->base)) { - freed += shmem->base.size >> PAGE_SHIFT; - list_del_init(&shmem->madv_list); - } - } - - mutex_unlock(&pfdev->shrinker_lock); - - if (freed > 0) - pr_info_ratelimited("Purging %lu bytes\n", freed << PAGE_SHIFT); - - return freed; -} - -/** - * panfrost_gem_shrinker_init - Initialize panfrost shrinker - * @dev: DRM device - * - * This function registers and sets up the panfrost shrinker. 
- */ -int panfrost_gem_shrinker_init(struct drm_device *dev) -{ - struct panfrost_device *pfdev = dev->dev_private; - - pfdev->shrinker = shrinker_alloc(0, "drm-panfrost"); - if (!pfdev->shrinker) - return -ENOMEM; - - pfdev->shrinker->count_objects = panfrost_gem_shrinker_count; - pfdev->shrinker->scan_objects = panfrost_gem_shrinker_scan; - pfdev->shrinker->private_data = pfdev; - - shrinker_register(pfdev->shrinker); - - return 0; -} - -/** - * panfrost_gem_shrinker_cleanup - Clean up panfrost shrinker - * @dev: DRM device - * - * This function unregisters the panfrost shrinker. - */ -void panfrost_gem_shrinker_cleanup(struct drm_device *dev) -{ - struct panfrost_device *pfdev = dev->dev_private; - - if (pfdev->shrinker) - shrinker_free(pfdev->shrinker); -} diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c index 0c2dbf6ef2a5..9e26cb013191 100644 --- a/drivers/gpu/drm/panfrost/panfrost_job.c +++ b/drivers/gpu/drm/panfrost/panfrost_job.c @@ -289,6 +289,19 @@ static void panfrost_attach_object_fences(struct drm_gem_object **bos, dma_resv_add_fence(bos[i]->resv, fence, DMA_RESV_USAGE_WRITE); } +static int panfrost_objects_prepare(struct drm_gem_object **bos, int bo_count) +{ + struct panfrost_gem_object *bo; + int ret = 0; + + while (!ret && bo_count--) { + bo = to_panfrost_bo(bos[bo_count]); + ret = bo->base.madv != PANFROST_MADV_WILLNEED ? -EINVAL : 0; + } + + return ret; +} + int panfrost_job_push(struct panfrost_job *job) { struct panfrost_device *pfdev = job->pfdev; @@ -300,6 +313,10 @@ int panfrost_job_push(struct panfrost_job *job) if (ret) return ret; + ret = panfrost_objects_prepare(job->bos, job->bo_count); + if (ret) + goto unlock; + mutex_lock(&pfdev->sched_lock); drm_sched_job_arm(&job->base); @@ -341,7 +358,6 @@ static void panfrost_job_cleanup(struct kref *ref) if (!job->mappings[i]) break; - atomic_dec(&job->mappings[i]->obj->gpu_usecount); panfrost_gem_mapping_put(job->mappings[i]); } kvfree(job->mappings); diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c index 4a0b4bf03f1a..22e18f7986e7 100644 --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c @@ -328,6 +328,7 @@ int panfrost_mmu_map(struct panfrost_gem_mapping *mapping) struct panfrost_device *pfdev = to_panfrost_device(obj->dev); struct sg_table *sgt; int prot = IOMMU_READ | IOMMU_WRITE; + int ret = 0; if (WARN_ON(mapping->active)) return 0; @@ -335,15 +336,32 @@ int panfrost_mmu_map(struct panfrost_gem_mapping *mapping) if (bo->noexec) prot |= IOMMU_NOEXEC; + if (!obj->import_attach) { + /* + * Don't allow shrinker to move pages while pages are mapped. + * It's fine to move pages afterwards because shrinker will + * take care of unmapping pages during eviction. 
+ */ + ret = drm_gem_shmem_pin(shmem); + if (ret) + return ret; + } + sgt = drm_gem_shmem_get_pages_sgt(shmem); - if (WARN_ON(IS_ERR(sgt))) - return PTR_ERR(sgt); + if (WARN_ON(IS_ERR(sgt))) { + ret = PTR_ERR(sgt); + goto unpin; + } mmu_map_sg(pfdev, mapping->mmu, mapping->mmnode.start << PAGE_SHIFT, prot, sgt); mapping->active = true; - return 0; +unpin: + if (!obj->import_attach) + drm_gem_shmem_unpin(shmem); + + return ret; } void panfrost_mmu_unmap(struct panfrost_gem_mapping *mapping) diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index 167f00f089de..9c6bb00260fc 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -61,13 +61,6 @@ struct drm_gem_shmem_object { */ int madv; - /** - * @madv_list: List entry for madvise tracking - * - * Typically used by drivers to track purgeable objects - */ - struct list_head madv_list; - /** * @sgt: Scatter/gather table for imported PRIME buffers */
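Taken together, the conversion pattern in this series is small: a driver implements the GEM .evict() callback on top of the drm-shmem helpers and registers the common shrinker with drmm_gem_shmem_init(). Below is a minimal sketch for a hypothetical "example" driver, using only helpers that appear in the patches above; it is a simplified illustration of the pattern, not a drop-in implementation.

#include <drm/drm_device.h>
#include <drm/drm_gem.h>
#include <drm/drm_gem_shmem_helper.h>

/* Called by the common shrinker when it wants to reclaim this BO. */
static int example_shmem_evict(struct drm_gem_object *obj)
{
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

	if (!drm_gem_shmem_is_purgeable(shmem))
		return -EBUSY;

	/* Driver-specific teardown (MMU unmap, host release, ...) goes here. */

	drm_gem_shmem_purge_locked(shmem);

	return 0;
}

static const struct drm_gem_object_funcs example_gem_funcs = {
	/* ... the usual drm_gem_shmem_object_* callbacks ... */
	.evict = example_shmem_evict,
};

static int example_drm_init(struct drm_device *ddev)
{
	/* Registers the common shmem shrinker; cleanup is drmm-managed. */
	return drmm_gem_shmem_init(ddev);
}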