From patchwork Fri Jan 5 18:46:15 2024
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 185502
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 21/30] drm/shmem-helper: Change sgt allocation policy
Date: Fri, 5 Jan 2024 21:46:15 +0300
Message-ID: <20240105184624.508603-22-dmitry.osipenko@collabora.com>
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
References: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

In preparation for the addition of drm-shmem memory shrinker support,
change the SGT allocation policy as follows:

1. An SGT can be allocated only if the shmem pages are pinned at the
   time of allocation; otherwise the allocation fails.

2. Drivers must ensure that the pages stay pinned for as long as the
   SGT is in use, and must get a new SGT if the pages were unpinned.

The new policy is required by the shrinker, which will move pages
to/from swap unless they are pinned, invalidating the SGT pointer once
the pages are relocated. Previous patches prepared the drivers for the
new policy.
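(Editor's note: the following minimal driver-side sketch illustrates the
new policy and is not part of the patch. driver_bind_object() is a
hypothetical function; the sketch assumes the drm_gem_shmem_pin(),
drm_gem_shmem_unpin() and drm_gem_shmem_get_pages_sgt() helpers that
this series builds upon.)

static int driver_bind_object(struct drm_gem_shmem_object *shmem)
{
        struct sg_table *sgt;
        int err;

        /* Rule 1: pages must be pinned before the SGT is allocated. */
        err = drm_gem_shmem_pin(shmem);
        if (err)
                return err;

        sgt = drm_gem_shmem_get_pages_sgt(shmem);
        if (IS_ERR(sgt)) {
                drm_gem_shmem_unpin(shmem);
                return PTR_ERR(sgt);
        }

        /*
         * Rule 2: program the hardware with sgt here and drop the pin
         * only once the hardware can no longer access the pages; the
         * SGT pointer is stale after the pages are unpinned.
         */
        return 0;
}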
Reviewed-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 55 ++++++++++++++------------
 1 file changed, 29 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index c7357110ca76..ff5437ab2c95 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -133,6 +133,14 @@ drm_gem_shmem_free_pages(struct drm_gem_shmem_object *shmem)
 {
        struct drm_gem_object *obj = &shmem->base;
 
+       if (shmem->sgt) {
+               dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
+                                 DMA_BIDIRECTIONAL, 0);
+               sg_free_table(shmem->sgt);
+               kfree(shmem->sgt);
+               shmem->sgt = NULL;
+       }
+
 #ifdef CONFIG_X86
        if (shmem->map_wc)
                set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
@@ -155,24 +163,12 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 {
        struct drm_gem_object *obj = &shmem->base;
 
-       if (obj->import_attach) {
+       if (obj->import_attach)
                drm_prime_gem_destroy(obj, shmem->sgt);
-       } else {
-               drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count));
 
-               if (shmem->sgt) {
-                       dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
-                                         DMA_BIDIRECTIONAL, 0);
-                       sg_free_table(shmem->sgt);
-                       kfree(shmem->sgt);
-               }
-               if (shmem->pages &&
-                   refcount_dec_and_test(&shmem->pages_use_count))
-                       drm_gem_shmem_free_pages(shmem);
-
-               drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count));
-               drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
-       }
+       drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count));
+       drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count));
+       drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
 
        drm_gem_object_release(obj);
        kfree(shmem);
@@ -722,6 +718,9 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
 
        drm_WARN_ON(obj->dev, obj->import_attach);
 
+       if (drm_WARN_ON(obj->dev, !shmem->pages))
+               return ERR_PTR(-ENOMEM);
+
        return drm_prime_pages_to_sg(obj->dev, shmem->pages, obj->size >> PAGE_SHIFT);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
@@ -737,15 +736,10 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 
        drm_WARN_ON(obj->dev, obj->import_attach);
 
-       ret = drm_gem_shmem_get_pages_locked(shmem);
-       if (ret)
-               return ERR_PTR(ret);
-
        sgt = drm_gem_shmem_get_sg_table(shmem);
-       if (IS_ERR(sgt)) {
-               ret = PTR_ERR(sgt);
-               goto err_put_pages;
-       }
+       if (IS_ERR(sgt))
+               return sgt;
+
        /* Map the pages for use by the h/w. */
        ret = dma_map_sgtable(obj->dev->dev, sgt, DMA_BIDIRECTIONAL, 0);
        if (ret)
@@ -758,8 +752,6 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 err_free_sgt:
        sg_free_table(sgt);
        kfree(sgt);
-err_put_pages:
-       drm_gem_shmem_put_pages_locked(shmem);
        return ERR_PTR(ret);
 }
 
@@ -776,6 +768,17 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
  * and difference between dma-buf imported and natively allocated objects.
  * drm_gem_shmem_get_sg_table() should not be directly called by drivers.
  *
+ * Drivers should adhere to these SGT usage rules:
+ *
+ * 1. An SGT should be allocated only if the shmem pages are pinned at the
+ *    time of allocation, otherwise the allocation will fail.
+ *
+ * 2. Drivers should ensure that the pages are pinned for the duration of
+ *    SGT usage and should get a new SGT if the pages were unpinned.
+ *
+ * Drivers don't own the returned SGT and must take care of the SGT pointer
+ * lifetime. The SGT is valid as long as the GEM pages backing it are pinned.
+ *
  * Returns:
  * A pointer to the scatter/gather table of pinned pages or errno on failure.
  */
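(Editor's note: a minimal sketch of the ownership rule from the
kernel-doc above. struct driver_mapping and driver_unbind_object() are
hypothetical names used for illustration, not part of the patch or the
shmem-helper API.)

/* Hypothetical driver-side state; illustrative only. */
struct driver_mapping {
        struct drm_gem_shmem_object *shmem;
        struct sg_table *sgt;   /* borrowed; valid only while pinned */
};

static void driver_unbind_object(struct driver_mapping *m)
{
        /*
         * The driver does not own the SGT, so it must not free it here.
         * It only forgets the pointer: once the pin is dropped, the
         * shrinker may relocate the pages and the SGT becomes stale.
         */
        m->sgt = NULL;
        drm_gem_shmem_unpin(m->shmem);
}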