From patchwork Sun Oct 29 23:01:45 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159435
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
 Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
 Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 06/26] drm/shmem-helper: Add and use pages_pin_count
Date: Mon, 30 Oct 2023 02:01:45 +0300
Message-ID: <20231029230205.93277-7-dmitry.osipenko@collabora.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
Add separate pages_pin_count for tracking whether drm-shmem pages are
movable or not. With the addition of memory shrinker support to
drm-shmem, pages_use_count will no longer determine whether pages are
hard-pinned in memory, but only whether the pages exist and are
soft-pinned (and could be swapped out). A pages_pin_count > 0 will
hard-pin the pages in memory.

Reviewed-by: Boris Brezillon
Suggested-by: Boris Brezillon
Signed-off-by: Dmitry Osipenko
Acked-by: Maxime Ripard
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 25 +++++++++++++++++--------
 include/drm/drm_gem_shmem_helper.h     | 11 +++++++++++
 2 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 2cc0601865f6..b9b71a1a563a 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -156,6 +156,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 			drm_gem_shmem_put_pages_locked(shmem);
 
 		drm_WARN_ON(obj->dev, shmem->pages_use_count);
+		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
 
 		dma_resv_unlock(shmem->base.resv);
 	}
@@ -234,18 +235,16 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 
 	dma_resv_assert_held(shmem->base.resv);
 
+	if (refcount_inc_not_zero(&shmem->pages_pin_count))
+		return 0;
+
 	ret = drm_gem_shmem_get_pages_locked(shmem);
+	if (!ret)
+		refcount_set(&shmem->pages_pin_count, 1);
 
 	return ret;
 }
 
-static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
-{
-	dma_resv_assert_held(shmem->base.resv);
-
-	drm_gem_shmem_put_pages_locked(shmem);
-}
-
 /**
  * drm_gem_shmem_pin - Pin backing pages for a shmem GEM object
  * @shmem: shmem GEM object
@@ -263,6 +262,9 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
+	if (refcount_inc_not_zero(&shmem->pages_pin_count))
+		return 0;
+
 	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
 	if (ret)
 		return ret;
@@ -286,8 +288,14 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
+	if (refcount_dec_not_one(&shmem->pages_pin_count))
+		return;
+
 	dma_resv_lock(shmem->base.resv, NULL);
-	drm_gem_shmem_unpin_locked(shmem);
+
+	if (refcount_dec_and_test(&shmem->pages_pin_count))
+		drm_gem_shmem_put_pages_locked(shmem);
+
 	dma_resv_unlock(shmem->base.resv);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
@@ -632,6 +640,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 	if (shmem->base.import_attach)
 		return;
 
+	drm_printf_indent(p, indent, "pages_pin_count=%u\n", refcount_read(&shmem->pages_pin_count));
 	drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
 	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 6ee4a4046980..5088bd623518 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -39,6 +39,17 @@ struct drm_gem_shmem_object {
 	 */
 	unsigned int pages_use_count;
 
+	/**
+	 * @pages_pin_count:
+	 *
+	 * Reference count on the pinned pages table.
+	 *
+	 * Pages are hard-pinned and reside in memory if count
+	 * greater than zero. Otherwise, when count is zero, the pages are
+	 * allowed to be evicted and purged by memory shrinker.
+	 */
+	refcount_t pages_pin_count;
+
 	/**
 	 * @madv: State for madvise
 	 *