From patchwork Thu Sep 14 23:27:15 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 140149
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Christian König,
    Qiang Yu, Steven Price, Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v17 12/18] drm/shmem-helper: Prepare drm_gem_shmem_free() to shrinker addition
Date: Fri, 15 Sep 2023 02:27:15 +0300
Message-ID: <20230914232721.408581-13-dmitry.osipenko@collabora.com>
In-Reply-To: <20230914232721.408581-1-dmitry.osipenko@collabora.com>
References: <20230914232721.408581-1-dmitry.osipenko@collabora.com>
X-Mailer: git-send-email 2.41.0
X-Mailing-List: linux-kernel@vger.kernel.org
Prepare drm_gem_shmem_free() for the addition of memory shrinker support
to drm-shmem by adding and using a variant of put_pages() that doesn't
touch the reservation lock. The reservation lock must not be taken in the
freeing path because lockdep would emit a bogus warning about lock
contention with the fs_reclaim code paths; that contention can't actually
happen while the GEM object is being freed, but lockdep doesn't know that.

Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 55 +++++++++++++++++---------
 1 file changed, 37 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 8a8eab4d0332..4959f51b647a 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -128,6 +128,41 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
 
+static void
+__drm_gem_shmem_release_pages(struct drm_gem_shmem_object *shmem)
+{
+        struct drm_gem_object *obj = &shmem->base;
+
+#ifdef CONFIG_X86
+        if (shmem->map_wc)
+                set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
+#endif
+
+        drm_gem_put_pages(obj, shmem->pages,
+                          shmem->pages_mark_dirty_on_put,
+                          shmem->pages_mark_accessed_on_put);
+        shmem->pages = NULL;
+}
+
+static void
+__drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
+{
+        /*
+         * Destroying the object is a special case. Acquiring the obj
+         * lock in drm_gem_shmem_put_pages_locked() can cause a locking
+         * order inversion between reservation_ww_class_mutex and fs_reclaim
+         * when called from drm_gem_shmem_free().
+         *
+         * This deadlock is not actually possible, because no one should
+         * be already holding the lock when drm_gem_shmem_free() is called.
+         * Unfortunately lockdep is not aware of this detail. So when the
+         * refcount drops to zero, make sure that the reservation lock
+         * isn't touched here.
+         */
+        if (refcount_dec_and_test(&shmem->pages_use_count))
+                __drm_gem_shmem_release_pages(shmem);
+}
+
 /**
  * drm_gem_shmem_free - Free resources associated with a shmem GEM object
  * @shmem: shmem GEM object to free
@@ -142,8 +177,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
         if (obj->import_attach) {
                 drm_prime_gem_destroy(obj, shmem->sgt);
         } else {
-                dma_resv_lock(shmem->base.resv, NULL);
-
                 drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count));
 
                 if (shmem->sgt) {
@@ -153,11 +186,9 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
                         kfree(shmem->sgt);
                 }
                 if (shmem->pages)
-                        drm_gem_shmem_put_pages_locked(shmem);
+                        __drm_gem_shmem_put_pages(shmem);
 
                 drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count));
-
-                dma_resv_unlock(shmem->base.resv);
         }
 
         drm_gem_object_release(obj);
@@ -207,21 +238,9 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
  */
 void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 {
-        struct drm_gem_object *obj = &shmem->base;
-
         dma_resv_assert_held(shmem->base.resv);
 
-        if (refcount_dec_and_test(&shmem->pages_use_count)) {
-#ifdef CONFIG_X86
-                if (shmem->map_wc)
-                        set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
-#endif
-
-                drm_gem_put_pages(obj, shmem->pages,
-                                  shmem->pages_mark_dirty_on_put,
-                                  shmem->pages_mark_accessed_on_put);
-                shmem->pages = NULL;
-        }
+        __drm_gem_shmem_put_pages(shmem);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
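
For readers who want the locking rule spelled out outside of kernel context, here is a
stand-alone user-space sketch of the same pattern. It is an analogy only, not part of the
patch: all toy_* names are invented, a pthread mutex stands in for the reservation lock,
and a C11 atomic stands in for shmem->pages_use_count. Both release paths share one
helper, and the destructor deliberately skips the lock because the object is already
unreachable when its last reference is dropped:

#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct toy_obj {
        pthread_mutex_t lock;         /* stands in for shmem->base.resv */
        atomic_int pages_use_count;   /* stands in for shmem->pages_use_count */
        char *pages;
};

/* Shared release helper, mirroring __drm_gem_shmem_put_pages():
 * it makes no assumption about the lock. */
static void __toy_put_pages(struct toy_obj *obj)
{
        if (atomic_fetch_sub(&obj->pages_use_count, 1) == 1) {
                free(obj->pages);     /* roughly drm_gem_put_pages() */
                obj->pages = NULL;
        }
}

/* Runtime path, mirroring drm_gem_shmem_put_pages_locked():
 * the caller must hold obj->lock. */
static void toy_put_pages_locked(struct toy_obj *obj)
{
        __toy_put_pages(obj);
}

/* Destructor, mirroring drm_gem_shmem_free(): the object is unreachable,
 * so nobody else can hold obj->lock and taking it is skipped on purpose. */
static void toy_free(struct toy_obj *obj)
{
        if (obj->pages)
                __toy_put_pages(obj);
        pthread_mutex_destroy(&obj->lock);
        free(obj);
}

int main(void)
{
        struct toy_obj *obj = calloc(1, sizeof(*obj));

        pthread_mutex_init(&obj->lock, NULL);
        obj->pages = malloc(4096);
        atomic_init(&obj->pages_use_count, 2);   /* two users of the pages */

        pthread_mutex_lock(&obj->lock);
        toy_put_pages_locked(obj);    /* runtime path: under the lock */
        pthread_mutex_unlock(&obj->lock);

        toy_free(obj);                /* last reference: lock deliberately not taken */
        return 0;
}

If toy_free() took obj->lock, a dependency checker such as lockdep would record a
lock-vs-reclaim ordering that can never actually deadlock in this path, which is exactly
the bogus report the patch avoids in drm_gem_shmem_free().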