Message ID | 20231029230205.93277-12-dmitry.osipenko@collabora.com
---|---
State | New
From | Dmitry Osipenko <dmitry.osipenko@collabora.com>
To | David Airlie <airlied@gmail.com>, Gerd Hoffmann <kraxel@redhat.com>, Gurchetan Singh <gurchetansingh@chromium.org>, Chia-I Wu <olvaffe@gmail.com>, Daniel Vetter <daniel@ffwll.ch>, Maarten Lankhorst <maarten.lankhorst@linux.intel.com>, Maxime Ripard <mripard@kernel.org>, Thomas Zimmermann <tzimmermann@suse.de>, Christian König <christian.koenig@amd.com>, Qiang Yu <yuq825@gmail.com>, Steven Price <steven.price@arm.com>, Boris Brezillon <boris.brezillon@collabora.com>, Emma Anholt <emma@anholt.net>, Melissa Wen <mwen@igalia.com>
Cc | dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject | [PATCH v18 11/26] drm/shmem-helper: Prepare drm_gem_shmem_free() to shrinker addition
Date | Mon, 30 Oct 2023 02:01:50 +0300
Series | Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
Commit Message
Dmitry Osipenko
Oct. 29, 2023, 11:01 p.m. UTC
Prepare drm_gem_shmem_free() for the addition of memory shrinker support
to drm-shmem by adding and using a variant of put_pages() that doesn't
touch the reservation lock. The reservation lock shouldn't be taken here
because lockdep would trigger a bogus warning about lock contention with
fs_reclaim code paths; that contention can't actually happen while the
GEM object is being freed, but lockdep doesn't know that.
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 35 +++++++++++++-------------
1 file changed, 18 insertions(+), 17 deletions(-)
Comments
On Mon, 30 Oct 2023 02:01:50 +0300
Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:

> Prepare drm_gem_shmem_free() to addition of memory shrinker support
> to drm-shmem by adding and using variant of put_pages() that doesn't
> touch reservation lock. Reservation shouldn't be touched because lockdep
> will trigger a bogus warning about locking contention with fs_reclaim
> code paths that can't happen during the time when GEM is freed and
> lockdep doesn't know about that.
>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
>  drivers/gpu/drm/drm_gem_shmem_helper.c | 35 +++++++++++++-------------
>  1 file changed, 18 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 08b5a57c59d8..24ff2b99e75b 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -128,6 +128,22 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
>
> +static void
> +drm_gem_shmem_free_pages(struct drm_gem_shmem_object *shmem)
> +{
> +	struct drm_gem_object *obj = &shmem->base;
> +
> +#ifdef CONFIG_X86
> +	if (shmem->map_wc)
> +		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> +#endif
> +
> +	drm_gem_put_pages(obj, shmem->pages,
> +			  shmem->pages_mark_dirty_on_put,
> +			  shmem->pages_mark_accessed_on_put);
> +	shmem->pages = NULL;
> +}
> +
>  /**
>   * drm_gem_shmem_free - Free resources associated with a shmem GEM object
>   * @shmem: shmem GEM object to free
> @@ -142,8 +158,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>  	if (obj->import_attach) {
>  		drm_prime_gem_destroy(obj, shmem->sgt);
>  	} else {
> -		dma_resv_lock(shmem->base.resv, NULL);
> -
>  		drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count));
>
>  		if (shmem->sgt) {
> @@ -157,8 +171,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>

If you drop the dma_resv_lock/unlock(), you should also replace the
drm_gem_shmem_put_pages_locked() by a drm_gem_shmem_free_pages() in this
commit.

>  		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count));
>  		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
> -
> -		dma_resv_unlock(shmem->base.resv);
>  	}
>
>  	drm_gem_object_release(obj);
> @@ -208,21 +220,10 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
>   */
>  void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
>  {
> -	struct drm_gem_object *obj = &shmem->base;
> -
>  	dma_resv_assert_held(shmem->base.resv);
>
> -	if (refcount_dec_and_test(&shmem->pages_use_count)) {
> -#ifdef CONFIG_X86
> -		if (shmem->map_wc)
> -			set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> -#endif
> -
> -		drm_gem_put_pages(obj, shmem->pages,
> -				  shmem->pages_mark_dirty_on_put,
> -				  shmem->pages_mark_accessed_on_put);
> -		shmem->pages = NULL;
> -	}
> +	if (refcount_dec_and_test(&shmem->pages_use_count))
> +		drm_gem_shmem_free_pages(shmem);
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
On 11/10/23 13:16, Boris Brezillon wrote:
> On Mon, 30 Oct 2023 02:01:50 +0300
> Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
...
>> @@ -157,8 +171,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>>
> If you drop the dma_resv_lock/unlock(), you should also replace the
> drm_gem_shmem_put_pages_locked() by a drm_gem_shmem_free_pages() in this
> commit.

drm_gem_shmem_put_pages_locked() is exported by a later patch of this
series, it's not worthwhile to remove this function
On Mon, 20 Nov 2023 14:02:29 +0300
Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:

> On 11/10/23 13:16, Boris Brezillon wrote:
...
> > If you drop the dma_resv_lock/unlock(), you should also replace the
> > drm_gem_shmem_put_pages_locked() by a drm_gem_shmem_free_pages() in this
> > commit.
>
> drm_gem_shmem_put_pages_locked() is exported by a later patch of this
> series, it's not worthwhile to remove this function

I'm not talking about removing drm_gem_shmem_put_pages_locked(), but
replacing the drm_gem_shmem_put_pages_locked() call you have in
drm_gem_shmem_free() by a drm_gem_shmem_free_pages(), so you don't end
up with a lockdep warning when you stop exactly here in the patch
series, which is important if we want to keep things bisectable.
On 11/20/23 14:19, Boris Brezillon wrote:
...
>>>> -		dma_resv_lock(shmem->base.resv, NULL);
>>>> -
>>>> 		drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count));
>>>>
>>>> 		if (shmem->sgt) {
>>>> @@ -157,8 +171,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>>>>
>>> If you drop the dma_resv_lock/unlock(), you should also replace the
>>> drm_gem_shmem_put_pages_locked() by a drm_gem_shmem_free_pages() in this
>>> commit.
>>
>> drm_gem_shmem_put_pages_locked() is exported by a later patch of this
>> series, it's not worthwhile to remove this function
>
> I'm not talking about removing drm_gem_shmem_put_pages_locked(), but
> replacing the drm_gem_shmem_put_pages_locked() call you have in
> drm_gem_shmem_free() by a drm_gem_shmem_free_pages(), so you don't end
> up with a lockdep warning when you stop exactly here in the patch
> series, which is important if we want to keep things bisectable.

Indeed, there is assert_locked() there. Thanks for the clarification :)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 08b5a57c59d8..24ff2b99e75b 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -128,6 +128,22 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
 
+static void
+drm_gem_shmem_free_pages(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+#ifdef CONFIG_X86
+	if (shmem->map_wc)
+		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
+#endif
+
+	drm_gem_put_pages(obj, shmem->pages,
+			  shmem->pages_mark_dirty_on_put,
+			  shmem->pages_mark_accessed_on_put);
+	shmem->pages = NULL;
+}
+
 /**
  * drm_gem_shmem_free - Free resources associated with a shmem GEM object
  * @shmem: shmem GEM object to free
@@ -142,8 +158,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 	if (obj->import_attach) {
 		drm_prime_gem_destroy(obj, shmem->sgt);
 	} else {
-		dma_resv_lock(shmem->base.resv, NULL);
-
 		drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count));
 
 		if (shmem->sgt) {
@@ -157,8 +171,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count));
 		drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
-
-		dma_resv_unlock(shmem->base.resv);
 	}
 
 	drm_gem_object_release(obj);
@@ -208,21 +220,10 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
  */
 void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 {
-	struct drm_gem_object *obj = &shmem->base;
-
 	dma_resv_assert_held(shmem->base.resv);
 
-	if (refcount_dec_and_test(&shmem->pages_use_count)) {
-#ifdef CONFIG_X86
-		if (shmem->map_wc)
-			set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
-#endif
-
-		drm_gem_put_pages(obj, shmem->pages,
-				  shmem->pages_mark_dirty_on_put,
-				  shmem->pages_mark_accessed_on_put);
-		shmem->pages = NULL;
-	}
+	if (refcount_dec_and_test(&shmem->pages_use_count))
+		drm_gem_shmem_free_pages(shmem);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);