From patchwork Sun Oct 29 23:01:57 2023
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 159447
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
 Daniel Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
 Christian König, Qiang Yu, Steven Price, Boris Brezillon,
 Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 18/26] drm/shmem-helper: Change sgt allocation policy
Date: Mon, 30 Oct 2023 02:01:57 +0300
Message-ID: <20231029230205.93277-19-dmitry.osipenko@collabora.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
In preparation for adding drm-shmem memory shrinker support, change
the SGT allocation policy in this way:

1. The SGT can be allocated only if the shmem pages are pinned at the
   time of the allocation, otherwise the allocation fails.

2. Drivers must ensure that the pages stay pinned for the whole time
   the SGT is in use, and must get a new SGT if the pages were
   unpinned.

The new policy is required by the shrinker because it will move pages
to/from swap unless they are pinned, invalidating the SGT pointer once
the pages are relocated. Previous patches prepared the drivers for the
new policy.

Signed-off-by: Dmitry Osipenko
Reviewed-by: Boris Brezillon
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 51 +++++++++++++-------------
 1 file changed, 26 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index f371ebc6f85c..1420d2166b76 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -133,6 +133,14 @@ drm_gem_shmem_free_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
+	if (shmem->sgt) {
+		dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
+				  DMA_BIDIRECTIONAL, 0);
+		sg_free_table(shmem->sgt);
+		kfree(shmem->sgt);
+		shmem->sgt = NULL;
+	}
+
 #ifdef CONFIG_X86
 	if (shmem->map_wc)
 		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
@@ -155,23 +163,12 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
-	if (obj->import_attach) {
+	if (obj->import_attach)
 		drm_prime_gem_destroy(obj, shmem->sgt);
-	} else {
-		drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count));
-
-		if (shmem->sgt) {
-			dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
-					  DMA_BIDIRECTIONAL, 0);
-			sg_free_table(shmem->sgt);
-			kfree(shmem->sgt);
-		}
-		if (shmem->pages)
-			drm_gem_shmem_put_pages_locked(shmem);
 
-		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count));
-		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
-	}
+	drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count));
+	drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count));
+	drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
 
 	drm_gem_object_release(obj);
 	kfree(shmem);
@@ -705,6 +702,9 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
+	if (drm_WARN_ON(obj->dev, !shmem->pages))
+		return ERR_PTR(-ENOMEM);
+
 	return drm_prime_pages_to_sg(obj->dev, shmem->pages, obj->size >> PAGE_SHIFT);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
@@ -720,15 +720,10 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
-	ret = drm_gem_shmem_get_pages_locked(shmem);
-	if (ret)
-		return ERR_PTR(ret);
-
 	sgt = drm_gem_shmem_get_sg_table(shmem);
-	if (IS_ERR(sgt)) {
-		ret = PTR_ERR(sgt);
-		goto err_put_pages;
-	}
+	if (IS_ERR(sgt))
+		return sgt;
+
 	/* Map the pages for use by the h/w. */
 	ret = dma_map_sgtable(obj->dev->dev, sgt, DMA_BIDIRECTIONAL, 0);
 	if (ret)
@@ -741,8 +736,6 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 err_free_sgt:
 	sg_free_table(sgt);
 	kfree(sgt);
-err_put_pages:
-	drm_gem_shmem_put_pages_locked(shmem);
 
 	return ERR_PTR(ret);
 }
@@ -759,6 +752,14 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
  * and difference between dma-buf imported and natively allocated objects.
  * drm_gem_shmem_get_sg_table() should not be directly called by drivers.
  *
+ * Drivers should adhere to these SGT usage rules:
+ *
+ * 1. SGT should be allocated only if shmem pages are pinned at the
+ *    time of allocation, otherwise allocation will fail.
+ *
+ * 2. Drivers should ensure that pages are pinned during the time of
+ *    SGT usage and should get new SGT if pages were unpinned.
+ *
  * Returns:
  * A pointer to the scatter/gather table of pinned pages or errno on failure.
  */
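
For illustration, a minimal sketch of the driver-side pattern the new
policy implies, using the existing drm_gem_shmem_pin()/unpin() and
drm_gem_shmem_get_pages_sgt() entry points; mydrv_map_bo() is a
hypothetical helper, not part of this patch:

/*
 * Hypothetical driver helper (illustration only): follows the SGT
 * usage rules documented above.
 */
#include <linux/err.h>
#include <drm/drm_gem_shmem_helper.h>

static int mydrv_map_bo(struct drm_gem_shmem_object *shmem)
{
	struct sg_table *sgt;
	int ret;

	/* Rule 1: the pages must be pinned before the SGT is allocated. */
	ret = drm_gem_shmem_pin(shmem);
	if (ret)
		return ret;

	sgt = drm_gem_shmem_get_pages_sgt(shmem);
	if (IS_ERR(sgt)) {
		drm_gem_shmem_unpin(shmem);
		return PTR_ERR(sgt);
	}

	/*
	 * Rule 2: the pin must be held for as long as the SGT is in use.
	 * After drm_gem_shmem_unpin(), the shrinker may relocate the
	 * pages, so the cached SGT pointer becomes stale and a fresh SGT
	 * must be obtained after re-pinning.
	 */
	return 0;
}

Dropping the pin while keeping a cached SGT would leave a dangling
pointer once the shrinker evicts the pages, which is what the
drm_gem_shmem_free_pages() hunk above now guards against by freeing
the SGT together with the pages.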