From patchwork Fri Jan 5 18:46:12 2024
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 185503
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
	Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v19 18/30] drm/panfrost: Explicitly get and put drm-shmem pages
Date: Fri, 5 Jan 2024 21:46:12 +0300
Message-ID: <20240105184624.508603-19-dmitry.osipenko@collabora.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240105184624.508603-1-dmitry.osipenko@collabora.com>
References: <20240105184624.508603-1-dmitry.osipenko@collabora.com>

To simplify the drm-shmem refcnt handling, we're moving away from the
implicit get_pages() that is used by get_pages_sgt(). From now on, drivers
will have to pin pages explicitly while they use the sgt. Panfrost's
shrinker doesn't support swapping out BOs, hence pages are pinned and the
sgt stays valid as long as the pages' use-count > 0.

In Panfrost, panfrost_gem_mapping, which is the object representing a GPU
mapping of a BO, owns a pages ref. This guarantees that any BO being mapped
GPU-side has its pages retained until the mapping is destroyed.

Since pages are no longer guaranteed to stay pinned for the BO's lifetime,
and the MADVISE(DONT_NEED) flag remains set after the GEM handle has been
destroyed, we need an extra 'is_purgeable' check in panfrost_gem_purge() to
make sure we're not trying to purge a BO whose pages have already been
released.
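(Illustrative note, not part of the patch.) The calling pattern this change
expects from drivers can be sketched as below. The helper name
example_map_bo() is made up for illustration and locking details are
omitted; only drm_gem_shmem_get_pages(), drm_gem_shmem_get_pages_sgt() and
drm_gem_shmem_put_pages() come from the drm-shmem helpers discussed above:

#include <linux/err.h>
#include <drm/drm_gem_shmem_helper.h>

/* Sketch: pin pages explicitly, then fetch the sgt; the sgt stays valid
 * only as long as this pages ref is held.
 */
static int example_map_bo(struct drm_gem_shmem_object *shmem)
{
	struct sg_table *sgt;
	int ret;

	/* Explicit pages ref; get_pages_sgt() no longer takes one implicitly. */
	ret = drm_gem_shmem_get_pages(shmem);
	if (ret)
		return ret;

	sgt = drm_gem_shmem_get_pages_sgt(shmem);
	if (IS_ERR(sgt)) {
		drm_gem_shmem_put_pages(shmem);
		return PTR_ERR(sgt);
	}

	/* ... map sgt into the GPU MMU; drop the pages ref with
	 * drm_gem_shmem_put_pages() only when the mapping is torn down ...
	 */
	return 0;
}

In Panfrost this ownership is tied to the panfrost_gem_mapping lifetime, as
the hunks below show.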
Signed-off-by: Dmitry Osipenko
Reviewed-by: Boris Brezillon
Reviewed-by: Steven Price
---
 drivers/gpu/drm/panfrost/panfrost_gem.c      | 63 ++++++++++++++-----
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c |  6 ++
 2 files changed, 52 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index f268bd5c2884..7edfc12f7c1f 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -35,20 +35,6 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
 	 */
 	WARN_ON_ONCE(!list_empty(&bo->mappings.list));
 
-	if (bo->sgts) {
-		int i;
-		int n_sgt = bo->base.base.size / SZ_2M;
-
-		for (i = 0; i < n_sgt; i++) {
-			if (bo->sgts[i].sgl) {
-				dma_unmap_sgtable(pfdev->dev, &bo->sgts[i],
-						  DMA_BIDIRECTIONAL, 0);
-				sg_free_table(&bo->sgts[i]);
-			}
-		}
-		kvfree(bo->sgts);
-	}
-
 	drm_gem_shmem_free(&bo->base);
 }
 
@@ -85,11 +71,40 @@ panfrost_gem_teardown_mapping(struct panfrost_gem_mapping *mapping)
 
 static void panfrost_gem_mapping_release(struct kref *kref)
 {
-	struct panfrost_gem_mapping *mapping;
-
-	mapping = container_of(kref, struct panfrost_gem_mapping, refcount);
+	struct panfrost_gem_mapping *mapping =
+		container_of(kref, struct panfrost_gem_mapping, refcount);
+	struct panfrost_gem_object *bo = mapping->obj;
+	struct panfrost_device *pfdev = bo->base.base.dev->dev_private;
 
 	panfrost_gem_teardown_mapping(mapping);
+
+	/* On heap BOs, release the sgts created in the fault handler path. */
+	if (bo->sgts) {
+		int i, n_sgt = bo->base.base.size / SZ_2M;
+
+		for (i = 0; i < n_sgt; i++) {
+			if (bo->sgts[i].sgl) {
+				dma_unmap_sgtable(pfdev->dev, &bo->sgts[i],
+						  DMA_BIDIRECTIONAL, 0);
+				sg_free_table(&bo->sgts[i]);
+			}
+		}
+		kvfree(bo->sgts);
+	}
+
+	/* Pages ref is owned by the panfrost_gem_mapping object. We must
+	 * release our pages ref (if any), before releasing the object
+	 * ref.
+	 * Non-heap BOs acquired the pages at panfrost_gem_mapping creation
+	 * time, and heap BOs may have acquired pages if the fault handler
+	 * was called, in which case bo->sgts should be non-NULL.
+	 */
+	if (!bo->base.base.import_attach && (!bo->is_heap || bo->sgts) &&
+	    bo->base.madv >= 0) {
+		drm_gem_shmem_put_pages(&bo->base);
+		bo->sgts = NULL;
+	}
+
 	drm_gem_object_put(&mapping->obj->base.base);
 	panfrost_mmu_ctx_put(mapping->mmu);
 	kfree(mapping);
@@ -125,6 +140,20 @@ int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv)
 	if (!mapping)
 		return -ENOMEM;
 
+	if (!bo->is_heap && !bo->base.base.import_attach) {
+		/* Pages ref is owned by the panfrost_gem_mapping object.
+		 * For non-heap BOs, we request pages at mapping creation
+		 * time, such that the panfrost_mmu_map() call, further down in
+		 * this function, is guaranteed to have pages_use_count > 0
+		 * when drm_gem_shmem_get_pages_sgt() is called.
+		 */
+		ret = drm_gem_shmem_get_pages(&bo->base);
+		if (ret) {
+			kfree(mapping);
+			return ret;
+		}
+	}
+
 	INIT_LIST_HEAD(&mapping->node);
 	kref_init(&mapping->refcount);
 	drm_gem_object_get(obj);
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index 02b60ea1433a..d4fb0854cf2f 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -50,6 +50,12 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
 	if (!dma_resv_trylock(shmem->base.resv))
 		goto unlock_mappings;
 
+	/* BO might have become unpurgeable if the last pages_use_count ref
+	 * was dropped, but the BO hasn't been destroyed yet.
+	 */
+	if (!drm_gem_shmem_is_purgeable(shmem))
+		goto unlock_mappings;
+
 	panfrost_gem_teardown_mappings_locked(bo);
 	drm_gem_shmem_purge_locked(&bo->base);
 	ret = true;
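(Illustrative note, not part of the patch.) The shrinker-side ordering that
the new drm_gem_shmem_is_purgeable() check enforces can be sketched as
follows; example_try_purge() is a made-up name that loosely mirrors
panfrost_gem_purge() above, and only the calls visible in the hunk are
assumed:

#include <linux/dma-resv.h>
#include <drm/drm_gem_shmem_helper.h>

/* Sketch: since the last pages ref can now be dropped while the BO (and its
 * DONTNEED madv state) still exists, purgeability must be re-validated under
 * the resv lock before purging.
 */
static bool example_try_purge(struct drm_gem_shmem_object *shmem)
{
	bool purged = false;

	if (!dma_resv_trylock(shmem->base.resv))
		return false;

	/* Pages may already be gone even though madv still says DONTNEED. */
	if (drm_gem_shmem_is_purgeable(shmem)) {
		drm_gem_shmem_purge_locked(shmem);
		purged = true;
	}

	dma_resv_unlock(shmem->base.resv);
	return purged;
}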