From patchwork Thu Sep 14 23:27:09 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 140159
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
 Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
 Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v17 06/18] drm/shmem-helper: Add and use pages_pin_count
Date: Fri, 15 Sep 2023 02:27:09 +0300
Message-ID: <20230914232721.408581-7-dmitry.osipenko@collabora.com>
In-Reply-To: <20230914232721.408581-1-dmitry.osipenko@collabora.com>
References: <20230914232721.408581-1-dmitry.osipenko@collabora.com>
X-Mailer: git-send-email 2.41.0
X-Mailing-List: linux-kernel@vger.kernel.org

Add a separate pages_pin_count for tracking whether drm-shmem pages are
moveable or not. With the addition of memory shrinker support to
drm-shmem, pages_use_count will no longer determine whether pages are
hard-pinned in memory, but only whether the pages exist and are
soft-pinned (and could be swapped out). A pages_pin_count > 0 hard-pins
the pages in memory.

Reviewed-by: Boris Brezillon
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 24 ++++++++++++++++--------
 include/drm/drm_gem_shmem_helper.h     | 10 ++++++++++
 2 files changed, 26 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 2cc0601865f6..286f0ca51309 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -234,18 +234,16 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 
 	dma_resv_assert_held(shmem->base.resv);
 
+	if (refcount_inc_not_zero(&shmem->pages_pin_count))
+		return 0;
+
 	ret = drm_gem_shmem_get_pages_locked(shmem);
+	if (!ret)
+		refcount_set(&shmem->pages_pin_count, 1);
 
 	return ret;
 }
 
-static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
-{
-	dma_resv_assert_held(shmem->base.resv);
-
-	drm_gem_shmem_put_pages_locked(shmem);
-}
-
 /**
  * drm_gem_shmem_pin - Pin backing pages for a shmem GEM object
  * @shmem: shmem GEM object
@@ -263,6 +261,9 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
+	if (refcount_inc_not_zero(&shmem->pages_pin_count))
+		return 0;
+
 	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ret;
@@ -286,8 +287,14 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
+	if (refcount_dec_not_one(&shmem->pages_pin_count))
+		return;
+
 	dma_resv_lock(shmem->base.resv, NULL);
-	drm_gem_shmem_unpin_locked(shmem);
+
+	if (refcount_dec_and_test(&shmem->pages_pin_count))
+		drm_gem_shmem_put_pages_locked(shmem);
+
 	dma_resv_unlock(shmem->base.resv);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
@@ -632,6 +639,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 	if (shmem->base.import_attach)
 		return;
 
+	drm_printf_indent(p, indent, "pages_pin_count=%u\n", refcount_read(&shmem->pages_pin_count));
 	drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
 	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 6ee4a4046980..268b3127d150 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -39,6 +39,16 @@ struct drm_gem_shmem_object {
 	 */
 	unsigned int pages_use_count;
 
+	/**
+	 * @pages_pin_count:
+	 *
+	 * Reference count on the pinned pages table.
+	 * The pages are allowed to be evicted and purged by the memory
+	 * shrinker only when the count is zero, otherwise the pages
+	 * are hard-pinned in memory.
+	 */
+	refcount_t pages_pin_count;
+
 	/**
 	 * @madv: State for madvise
	 *
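
As a usage note for reviewers, here is a minimal, hypothetical driver-side
sketch (not part of this patch) of the intended calling pattern for the new
hard-pin semantics. Only drm_gem_shmem_pin(), drm_gem_shmem_unpin() and
to_drm_gem_shmem_obj() are real helpers; my_driver_submit() and
my_driver_run_job() are made-up stand-ins for a driver's job-submission path.

#include <drm/drm_gem.h>
#include <drm/drm_gem_shmem_helper.h>

/* Hypothetical stand-in for whatever work needs the pages resident. */
static int my_driver_run_job(struct drm_gem_object *obj)
{
	(void)obj;
	return 0;
}

static int my_driver_submit(struct drm_gem_object *obj)
{
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
	int ret;

	/*
	 * The first pin allocates the backing pages and sets
	 * pages_pin_count to 1; nested pins only bump the refcount via
	 * the lockless refcount_inc_not_zero() fast path.
	 */
	ret = drm_gem_shmem_pin(shmem);
	if (ret)
		return ret;

	ret = my_driver_run_job(obj);

	/*
	 * The pages become evictable by the shrinker again only once the
	 * last pin is dropped and pages_pin_count returns to zero.
	 */
	drm_gem_shmem_unpin(shmem);

	return ret;
}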
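
A note on the locking design: drm_gem_shmem_unpin() can skip the dma-resv
lock whenever the pin count stays above one, thanks to the
refcount_dec_not_one()/refcount_dec_and_test() pairing. The generic sketch
below shows that pattern in isolation; struct pinned_thing and the helper
names are illustrative only and not part of drm-shmem.

#include <linux/mutex.h>
#include <linux/refcount.h>

struct pinned_thing {
	refcount_t pin_count;
	struct mutex lock;	/* stands in for the dma-resv lock */
};

/* Hypothetical teardown, analogous to drm_gem_shmem_put_pages_locked(). */
static void release_backing_pages(struct pinned_thing *t)
{
	(void)t;
}

static void thing_unpin(struct pinned_thing *t)
{
	/*
	 * Fast path: the count is known to be greater than one, so this
	 * cannot be the final unpin; decrement without taking the lock.
	 */
	if (refcount_dec_not_one(&t->pin_count))
		return;

	/*
	 * Slow path: this may be the final unpin, so take the lock and
	 * tear down the backing resources only if the count really drops
	 * to zero.
	 */
	mutex_lock(&t->lock);
	if (refcount_dec_and_test(&t->pin_count))
		release_backing_pages(t);
	mutex_unlock(&t->lock);
}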