From patchwork Wed Nov 23 02:57:20 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 24667
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
    Daniel Vetter, Daniel Almeida, Gustavo Padovan, Daniel Stone,
    Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
    Rob Clark, Sumit Semwal, Christian König, Qiang Yu, Steven Price,
    Alyssa Rosenzweig, Rob Herring, Sean Paul, Dmitry Baryshkov,
    Abhinav Kumar
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v9 08/11] drm/shmem-helper: Add memory shrinker
Date: Wed, 23 Nov 2022 05:57:20 +0300
Message-Id: <20221123025723.695075-9-dmitry.osipenko@collabora.com>
In-Reply-To: <20221123025723.695075-1-dmitry.osipenko@collabora.com>
References: <20221123025723.695075-1-dmitry.osipenko@collabora.com>
X-Mailer: git-send-email 2.38.1

Introduce a common drm-shmem shrinker for DRM drivers.

To start using the drm-shmem shrinker, a driver should do the following:

1. Implement the evict() callback of the GEM object, where the driver
   checks whether the object is purgeable or evictable using the
   drm-shmem helpers and performs the shrinking action (a hypothetical
   sketch of such a callback is appended after the patch).

2. Initialize the drm-shmem internals using drmm_gem_shmem_init(drm_device),
   which will register the drm-shmem shrinker.

3. Implement a madvise IOCTL that uses drm_gem_shmem_madvise(), as
   sketched below.
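For illustration only (not part of this patch), a minimal driver-side
sketch of steps 2 and 3; the mydrv_* names and the madvise uapi struct
are hypothetical:

  #include <linux/dma-resv.h>
  #include <linux/types.h>
  #include <drm/drm_device.h>
  #include <drm/drm_file.h>
  #include <drm/drm_gem.h>
  #include <drm/drm_gem_shmem_helper.h>

  struct mydrv_madvise_args {
  	__u32 handle;	/* GEM handle */
  	__s32 madv;	/* 0 = in use, 1 = not needed / can be purged */
  	__u32 retained;	/* out: object still resident (madv >= 0) */
  };

  /* Step 2: called once during device creation */
  static int mydrv_init_shrinker(struct drm_device *dev)
  {
  	/* registers the common drm-shmem shrinker for this device */
  	return drmm_gem_shmem_init(dev);
  }

  /* Step 3: madvise IOCTL forwarding the userspace hint */
  static int mydrv_ioctl_madvise(struct drm_device *dev, void *data,
  			       struct drm_file *file)
  {
  	struct mydrv_madvise_args *args = data;
  	struct drm_gem_object *obj;
  	int ret;

  	obj = drm_gem_object_lookup(file, args->handle);
  	if (!obj)
  		return -ENOENT;

  	ret = dma_resv_lock_interruptible(obj->resv, NULL);
  	if (ret)
  		goto out_put;

  	/* updates madv and moves the object between the shrinker LRUs */
  	args->retained = drm_gem_shmem_madvise(to_drm_gem_shmem_obj(obj),
  					       args->madv);

  	dma_resv_unlock(obj->resv);
  out_put:
  	drm_gem_object_put(obj);
  	return ret;
  }

Note that drm_gem_shmem_madvise() must be called with the reservation
lock held, matching the locking convention this patch introduces.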
Signed-off-by: Daniel Almeida
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c        | 465 ++++++++++++++++--
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  |   9 +-
 include/drm/drm_device.h                      |  10 +-
 include/drm/drm_gem_shmem_helper.h            |  61 ++-
 4 files changed, 492 insertions(+), 53 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index b4aa2d253f8e..705bd32a4c92 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -126,6 +127,57 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
 
+static void drm_gem_shmem_resv_assert_held(struct drm_gem_shmem_object *shmem)
+{
+	/*
+	 * Destroying the object is a special case.. drm_gem_shmem_free()
+	 * calls many things that WARN_ON if the obj lock is not held. But
+	 * acquiring the obj lock in drm_gem_shmem_free() can cause a locking
+	 * order inversion between reservation_ww_class_mutex and fs_reclaim.
+	 *
+	 * This deadlock is not actually possible, because no one should
+	 * be already holding the lock when drm_gem_shmem_free() is called.
+	 * Unfortunately lockdep is not aware of this detail. So when the
+	 * refcount drops to zero, we pretend it is already locked.
+	 */
+	if (kref_read(&shmem->base.refcount))
+		dma_resv_assert_held(shmem->base.resv);
+}
+
+static bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem)
+{
+	dma_resv_assert_held(shmem->base.resv);
+
+	return (shmem->madv >= 0) && shmem->base.funcs->evict &&
+		shmem->pages_use_count && !shmem->pages_pin_count &&
+		!shmem->base.dma_buf && !shmem->base.import_attach &&
+		shmem->sgt && !shmem->evicted;
+}
+
+static void
+drm_gem_shmem_update_pages_state(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_gem_shmem *shmem_mm = obj->dev->shmem_mm;
+	struct drm_gem_shmem_shrinker *gem_shrinker = &shmem_mm->shrinker;
+
+	drm_gem_shmem_resv_assert_held(shmem);
+
+	if (!gem_shrinker || obj->import_attach)
+		return;
+
+	if (shmem->madv < 0)
+		drm_gem_lru_remove(&shmem->base);
+	else if (drm_gem_shmem_is_evictable(shmem) || drm_gem_shmem_is_purgeable(shmem))
+		drm_gem_lru_move_tail(&gem_shrinker->lru_evictable, &shmem->base);
+	else if (shmem->evicted)
+		drm_gem_lru_move_tail(&gem_shrinker->lru_evicted, &shmem->base);
+	else if (!shmem->pages)
+		drm_gem_lru_remove(&shmem->base);
+	else
+		drm_gem_lru_move_tail(&gem_shrinker->lru_pinned, &shmem->base);
+}
+
 /**
  * drm_gem_shmem_free - Free resources associated with a shmem GEM object
  * @shmem: shmem GEM object to free
@@ -140,7 +192,8 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 
 	if (obj->import_attach) {
 		drm_prime_gem_destroy(obj, shmem->sgt);
 	} else {
-		dma_resv_lock(shmem->base.resv, NULL);
+		/* remove shmem GEM object from the memory shrinker */
+		drm_gem_shmem_madvise(shmem, -1);
 
 		drm_WARN_ON(obj->dev, shmem->vmap_use_count);
@@ -150,12 +203,10 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 			sg_free_table(shmem->sgt);
 			kfree(shmem->sgt);
 		}
-		if (shmem->pages)
+		if (shmem->pages_use_count)
 			drm_gem_shmem_put_pages(shmem);
 
 		drm_WARN_ON(obj->dev, shmem->pages_use_count);
-
-		dma_resv_unlock(shmem->base.resv);
 	}
 
 	drm_gem_object_release(obj);
@@ -163,19 +214,31 @@
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
 
-static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+static int
+drm_gem_shmem_acquire_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	struct page **pages;
 
-	if (shmem->pages_use_count++ > 0)
+	dma_resv_assert_held(shmem->base.resv);
+
+	if (shmem->madv < 0) {
+		drm_WARN_ON(obj->dev, shmem->pages);
+		return -ENOMEM;
+	}
+
+	if (shmem->pages) {
+		drm_WARN_ON(obj->dev, !shmem->evicted);
 		return 0;
+	}
+
+	if (drm_WARN_ON(obj->dev, !shmem->pages_use_count))
+		return -EINVAL;
 
 	pages = drm_gem_get_pages(obj);
 	if (IS_ERR(pages)) {
 		drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
 			    PTR_ERR(pages));
-		shmem->pages_use_count = 0;
 		return PTR_ERR(pages);
 	}
 
@@ -194,6 +257,59 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 	return 0;
 }
 
+static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+{
+	int err;
+
+	dma_resv_assert_held(shmem->base.resv);
+
+	if (shmem->madv < 0)
+		return -ENOMEM;
+
+	if (shmem->pages_use_count++ > 0) {
+		err = drm_gem_shmem_swap_in(shmem);
+		if (err)
+			goto err_zero_use;
+
+		return 0;
+	}
+
+	err = drm_gem_shmem_acquire_pages(shmem);
+	if (err)
+		goto err_zero_use;
+
+	drm_gem_shmem_update_pages_state(shmem);
+
+	return 0;
+
+err_zero_use:
+	shmem->pages_use_count = 0;
+
+	return err;
+}
+
+static void
+drm_gem_shmem_release_pages(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+	if (!shmem->pages) {
+		drm_WARN_ON(obj->dev,
+			    !shmem->evicted && shmem->madv >= 0);
+		return;
+	}
+
+#ifdef CONFIG_X86
+	if (shmem->map_wc)
+		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
+#endif
+
+	drm_gem_put_pages(obj, shmem->pages,
+			  shmem->pages_mark_dirty_on_put,
+			  shmem->pages_mark_accessed_on_put);
+	shmem->pages = NULL;
+}
+
 /*
  * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
  * @shmem: shmem GEM object
@@ -204,7 +320,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
 
-	dma_resv_assert_held(shmem->base.resv);
+	drm_gem_shmem_resv_assert_held(shmem);
 
 	if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
 		return;
@@ -212,15 +328,9 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 	if (--shmem->pages_use_count > 0)
 		return;
 
-#ifdef CONFIG_X86
-	if (shmem->map_wc)
-		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
-#endif
+	drm_gem_shmem_release_pages(shmem);
 
-	drm_gem_put_pages(obj, shmem->pages,
-			  shmem->pages_mark_dirty_on_put,
-			  shmem->pages_mark_accessed_on_put);
-	shmem->pages = NULL;
+	drm_gem_shmem_update_pages_state(shmem);
 }
 EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 
@@ -237,12 +347,17 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
+	int ret;
 
 	dma_resv_assert_held(shmem->base.resv);
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
-	return drm_gem_shmem_get_pages(shmem);
+	ret = drm_gem_shmem_get_pages(shmem);
+	if (!ret)
+		shmem->pages_pin_count++;
+
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_shmem_pin);
 
@@ -262,6 +377,8 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
 	drm_gem_shmem_put_pages(shmem);
+
+	shmem->pages_pin_count--;
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
@@ -304,7 +421,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 			return 0;
 		}
 
-		ret = drm_gem_shmem_get_pages(shmem);
+		ret = drm_gem_shmem_pin(shmem);
 		if (ret)
 			goto err_zero_use;
 
@@ -327,7 +444,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 
 err_put_pages:
 	if (!obj->import_attach)
-		drm_gem_shmem_put_pages(shmem);
+		drm_gem_shmem_unpin(shmem);
 err_zero_use:
 	shmem->vmap_use_count = 0;
 
@@ -364,7 +481,7 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
 			return;
 
 		vunmap(shmem->vaddr);
-		drm_gem_shmem_put_pages(shmem);
+		drm_gem_shmem_unpin(shmem);
 	}
 
 	shmem->vaddr = NULL;
@@ -401,48 +518,84 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
  */
 int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
 {
-	dma_resv_assert_held(shmem->base.resv);
+	drm_gem_shmem_resv_assert_held(shmem);
 
 	if (shmem->madv >= 0)
 		shmem->madv = madv;
 
 	madv = shmem->madv;
 
+	drm_gem_shmem_update_pages_state(shmem);
+
 	return (madv >= 0);
 }
 EXPORT_SYMBOL(drm_gem_shmem_madvise);
 
-void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
+/**
+ * drm_gem_shmem_swap_in() - Moves shmem GEM back to memory and enables
+ *                           hardware access to the memory.
+ * @shmem: shmem GEM object
+ *
+ * This function moves shmem GEM back to memory if it was previously evicted
+ * by the memory shrinker. The GEM is ready to use on success.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_swap_in(struct drm_gem_shmem_object *shmem)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	struct drm_device *dev = obj->dev;
+	struct sg_table *sgt;
+	int err;
 
 	dma_resv_assert_held(shmem->base.resv);
 
-	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
+	if (shmem->evicted) {
+		err = drm_gem_shmem_acquire_pages(shmem);
+		if (err)
+			return err;
+
+		sgt = drm_gem_shmem_get_sg_table(shmem);
+		if (IS_ERR(sgt))
+			return PTR_ERR(sgt);
+
+		err = dma_map_sgtable(obj->dev->dev, sgt,
+				      DMA_BIDIRECTIONAL, 0);
+		if (err) {
+			sg_free_table(sgt);
+			kfree(sgt);
+			return err;
+		}
 
-	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
-	sg_free_table(shmem->sgt);
-	kfree(shmem->sgt);
-	shmem->sgt = NULL;
+		shmem->sgt = sgt;
+		shmem->evicted = false;
 
-	drm_gem_shmem_put_pages(shmem);
+		drm_gem_shmem_update_pages_state(shmem);
+	}
 
-	shmem->madv = -1;
+	if (!shmem->pages)
+		return -ENOMEM;
 
-	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
-	drm_gem_free_mmap_offset(obj);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_swap_in);
 
-	/* Our goal here is to return as much of the memory as
-	 * is possible back to the system as we are called from OOM.
-	 * To do this we must instruct the shmfs to drop all of its
-	 * backing pages, *now*.
-	 */
-	shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
+static void drm_gem_shmem_unpin_pages(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct drm_device *dev = obj->dev;
 
-	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
+	if (shmem->evicted)
+		return;
+
+	dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
+	drm_gem_shmem_release_pages(shmem);
+	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+
+	sg_free_table(shmem->sgt);
+	kfree(shmem->sgt);
+	shmem->sgt = NULL;
 }
-EXPORT_SYMBOL(drm_gem_shmem_purge);
 
 /**
  * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
@@ -493,22 +646,33 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	vm_fault_t ret;
 	struct page *page;
 	pgoff_t page_offset;
+	bool pages_unpinned;
+	int err;
 
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
 	dma_resv_lock(shmem->base.resv, NULL);
 
-	if (page_offset >= num_pages ||
-	    drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
-	    shmem->madv < 0) {
+	/* Sanity-check that we have the pages pointer when it should be present */
+	pages_unpinned = (shmem->evicted || shmem->madv < 0 || !shmem->pages_use_count);
+	drm_WARN_ON_ONCE(obj->dev, !shmem->pages ^ pages_unpinned);
+
+	if (page_offset >= num_pages || (!shmem->pages && !shmem->evicted)) {
 		ret = VM_FAULT_SIGBUS;
 	} else {
+		err = drm_gem_shmem_swap_in(shmem);
+		if (err) {
+			ret = VM_FAULT_OOM;
+			goto unlock;
+		}
+
 		page = shmem->pages[page_offset];
 
 		ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
 	}
 
+unlock:
 	dma_resv_unlock(shmem->base.resv);
 
 	return ret;
@@ -518,13 +682,15 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 {
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	int ret;
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
 
 	dma_resv_lock(shmem->base.resv, NULL);
-	ret = drm_gem_shmem_get_pages(shmem);
-	drm_WARN_ON_ONCE(obj->dev, ret != 0);
+
+	if (drm_gem_shmem_get_pages(shmem))
+		shmem->pages_use_count++;
+
+	drm_gem_shmem_update_pages_state(shmem);
 	dma_resv_unlock(shmem->base.resv);
 
 	drm_gem_vm_open(vma);
@@ -609,7 +775,9 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 	drm_printf_indent(p, indent, "vmap_use_count=%u\n",
 			  shmem->vmap_use_count);
+	drm_printf_indent(p, indent, "evicted=%d\n", shmem->evicted);
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
+	drm_printf_indent(p, indent, "madv=%d\n", shmem->madv);
 }
 EXPORT_SYMBOL(drm_gem_shmem_print_info);
@@ -682,6 +850,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 
 	shmem->sgt = sgt;
 
+	drm_gem_shmem_update_pages_state(shmem);
+
 	dma_resv_unlock(shmem->base.resv);
 
 	return sgt;
@@ -732,6 +902,209 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
 
+static struct drm_gem_shmem_shrinker *
+to_drm_shrinker(struct shrinker *shrinker)
+{
+	return container_of(shrinker, struct drm_gem_shmem_shrinker, base);
+}
+
+static unsigned long
+drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker,
+				     struct shrink_control *sc)
+{
+	struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
+	unsigned long count = gem_shrinker->lru_evictable.count;
+
+	if (count >= SHRINK_EMPTY)
+		return SHRINK_EMPTY - 1;
+
+	return count ?: SHRINK_EMPTY;
+}
+
+void drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_evictable(shmem));
+	drm_WARN_ON(obj->dev, shmem->evicted);
+
+	drm_gem_shmem_unpin_pages(shmem);
+
+	shmem->evicted = true;
+	drm_gem_shmem_update_pages_state(shmem);
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_evict);
+
+void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
+{
+	struct drm_gem_object *obj = &shmem->base;
+
+	drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
+
+	drm_gem_shmem_unpin_pages(shmem);
+	drm_gem_free_mmap_offset(obj);
+
+	/* Our goal here is to return as much of the memory as
+	 * is possible back to the system as we are called from OOM.
+	 * To do this we must instruct the shmfs to drop all of its
+	 * backing pages, *now*.
+	 */
+	shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
+
+	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
+
+	shmem->madv = -1;
+	shmem->evicted = false;
+	drm_gem_shmem_update_pages_state(shmem);
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_purge);
+
+static bool drm_gem_is_busy(struct drm_gem_object *obj)
+{
+	return !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ);
+}
+
+static bool drm_gem_shmem_shrinker_evict(struct drm_gem_object *obj)
+{
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	if (!drm_gem_shmem_is_evictable(shmem) ||
+	    get_nr_swap_pages() < obj->size >> PAGE_SHIFT ||
+	    drm_gem_is_busy(obj))
+		return false;
+
+	return drm_gem_object_evict(obj);
+}
+
+static bool drm_gem_shmem_shrinker_purge(struct drm_gem_object *obj)
+{
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+	if (!drm_gem_shmem_is_purgeable(shmem) ||
+	    drm_gem_is_busy(obj))
+		return false;
+
+	return drm_gem_object_evict(obj);
+}
+
+static unsigned long
+drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker,
+				    struct shrink_control *sc)
+{
+	struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker);
+	unsigned long nr_to_scan = sc->nr_to_scan;
+	unsigned long remaining = 0;
+	unsigned long freed = 0;
+
+	/* purge as many objects as we can */
+	freed += drm_gem_lru_scan(&gem_shrinker->lru_evictable,
+				  nr_to_scan, &remaining,
+				  drm_gem_shmem_shrinker_purge);
+
+	/* evict as many objects as we can */
+	if (freed < nr_to_scan)
+		freed += drm_gem_lru_scan(&gem_shrinker->lru_evictable,
+					  nr_to_scan - freed, &remaining,
+					  drm_gem_shmem_shrinker_evict);
+
+	return (freed > 0 && remaining > 0) ? freed : SHRINK_STOP;
+}
+
+static int drm_gem_shmem_shrinker_init(struct drm_gem_shmem *shmem_mm,
+				       const char *shrinker_name)
+{
+	struct drm_gem_shmem_shrinker *gem_shrinker = &shmem_mm->shrinker;
+	int err;
+
+	gem_shrinker->base.count_objects = drm_gem_shmem_shrinker_count_objects;
+	gem_shrinker->base.scan_objects = drm_gem_shmem_shrinker_scan_objects;
+	gem_shrinker->base.seeks = DEFAULT_SEEKS;
+
+	mutex_init(&gem_shrinker->lock);
+	drm_gem_lru_init(&gem_shrinker->lru_evictable, &gem_shrinker->lock);
+	drm_gem_lru_init(&gem_shrinker->lru_evicted, &gem_shrinker->lock);
+	drm_gem_lru_init(&gem_shrinker->lru_pinned, &gem_shrinker->lock);
+
+	err = register_shrinker(&gem_shrinker->base, shrinker_name);
+	if (err) {
+		mutex_destroy(&gem_shrinker->lock);
+		return err;
+	}
+
+	return 0;
+}
+
+static void drm_gem_shmem_shrinker_release(struct drm_device *dev,
+					   struct drm_gem_shmem *shmem_mm)
+{
+	struct drm_gem_shmem_shrinker *gem_shrinker = &shmem_mm->shrinker;
+
+	unregister_shrinker(&gem_shrinker->base);
+	drm_WARN_ON(dev, !list_empty(&gem_shrinker->lru_evictable.list));
+	drm_WARN_ON(dev, !list_empty(&gem_shrinker->lru_evicted.list));
+	drm_WARN_ON(dev, !list_empty(&gem_shrinker->lru_pinned.list));
+	mutex_destroy(&gem_shrinker->lock);
+}
+
+static int drm_gem_shmem_init(struct drm_device *dev)
+{
+	int err;
+
+	if (WARN_ON(dev->shmem_mm))
+		return -EBUSY;
+
+	dev->shmem_mm = kzalloc(sizeof(*dev->shmem_mm), GFP_KERNEL);
+	if (!dev->shmem_mm)
+		return -ENOMEM;
+
+	err = drm_gem_shmem_shrinker_init(dev->shmem_mm, dev->unique);
+	if (err)
+		goto free_gem_shmem;
+
+	return 0;
+
+free_gem_shmem:
+	kfree(dev->shmem_mm);
+	dev->shmem_mm = NULL;
+
+	return err;
+}
+
+static void drm_gem_shmem_release(struct drm_device *dev, void *ptr)
+{
+	struct drm_gem_shmem *shmem_mm = dev->shmem_mm;
+
+	drm_gem_shmem_shrinker_release(dev, shmem_mm);
+	dev->shmem_mm = NULL;
+	kfree(shmem_mm);
+}
+
+/**
+ * drmm_gem_shmem_init() - Initialize drm-shmem internals
+ * @dev: DRM device
+ *
+ * Cleanup is automatically managed as part of DRM device releasing.
+ * Calling this function multiple times will result in an error.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drmm_gem_shmem_init(struct drm_device *dev)
+{
+	int err;
+
+	err = drm_gem_shmem_init(dev);
+	if (err)
+		return err;
+
+	err = drmm_add_action_or_reset(dev, drm_gem_shmem_release, NULL);
+	if (err)
+		return err;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(drmm_gem_shmem_init);
+
 MODULE_DESCRIPTION("DRM SHMEM memory-management helpers");
 MODULE_IMPORT_NS(DMA_BUF);
 MODULE_LICENSE("GPL v2");
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index 6a71a2555f85..865a989d67c8 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -15,6 +15,13 @@
 #include "panfrost_gem.h"
 #include "panfrost_mmu.h"
 
+static bool panfrost_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
+{
+	return (shmem->madv > 0) &&
+		!shmem->pages_pin_count && shmem->sgt &&
+		!shmem->base.dma_buf && !shmem->base.import_attach;
+}
+
 static unsigned long
 panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 {
@@ -27,7 +34,7 @@ panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 		return 0;
 
 	list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) {
-		if (drm_gem_shmem_is_purgeable(shmem))
+		if (panfrost_gem_shmem_is_purgeable(shmem))
 			count += shmem->base.size >> PAGE_SHIFT;
 	}
 
diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
index 9923c7a6885e..de5650b3c3ad 100644
--- a/include/drm/drm_device.h
+++ b/include/drm/drm_device.h
@@ -16,6 +16,7 @@ struct drm_vblank_crtc;
 struct drm_vma_offset_manager;
 struct drm_vram_mm;
 struct drm_fb_helper;
+struct drm_gem_shmem_shrinker;
 
 struct inode;
 
@@ -274,8 +275,13 @@ struct drm_device {
 	/** @vma_offset_manager: GEM information */
 	struct drm_vma_offset_manager *vma_offset_manager;
 
-	/** @vram_mm: VRAM MM memory manager */
-	struct drm_vram_mm *vram_mm;
+	union {
+		/** @vram_mm: VRAM MM memory manager */
+		struct drm_vram_mm *vram_mm;
+
+		/** @shmem_mm: SHMEM GEM memory manager */
+		struct drm_gem_shmem *shmem_mm;
+	};
 
 	/**
 	 * @switch_power_state:
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 20ddcd799df9..c264caf6c83b 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -15,6 +16,7 @@ struct dma_buf_attachment;
 struct drm_mode_create_dumb;
 struct drm_printer;
+struct drm_device;
 struct sg_table;
 
 /**
@@ -39,12 +41,21 @@ struct drm_gem_shmem_object {
 	 */
 	unsigned int pages_use_count;
 
+	/**
+	 * @pages_pin_count:
+	 *
+	 * Reference count on the pinned pages table.
+	 * The pages are allowed to be evicted by the memory shrinker
+	 * only when the count is zero.
+	 */
+	unsigned int pages_pin_count;
+
 	/**
 	 * @madv: State for madvise
 	 *
 	 * 0 is active/inuse.
+	 * 1 is not-needed/can-be-purged
 	 * A negative value is the object is purged.
-	 * Positive values are driver specific and not used by the helpers.
 	 */
 	int madv;
 
@@ -91,6 +102,12 @@ struct drm_gem_shmem_object {
 	 * @map_wc: map object write-combined (instead of using shmem defaults).
 	 */
 	bool map_wc : 1;
+
+	/**
+	 * @evicted: True if shmem pages are evicted by the memory shrinker.
+	 * Used internally by the memory shrinker.
+	 */
+	bool evicted : 1;
 };
 
 #define to_drm_gem_shmem_obj(obj) \
@@ -112,11 +129,17 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv);
 
 static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
 {
-	return (shmem->madv > 0) &&
-		!shmem->vmap_use_count && shmem->sgt &&
-		!shmem->base.dma_buf && !shmem->base.import_attach;
+	dma_resv_assert_held(shmem->base.resv);
+
+	return (shmem->madv > 0) && shmem->base.funcs->evict &&
+		shmem->pages_use_count && !shmem->pages_pin_count &&
+		!shmem->base.dma_buf && !shmem->base.import_attach &&
+		(shmem->sgt || shmem->evicted);
 }
 
+int drm_gem_shmem_swap_in(struct drm_gem_shmem_object *shmem);
+
+void drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
 
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
@@ -260,6 +283,36 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	return drm_gem_shmem_mmap(shmem, vma);
 }
 
+/**
+ * struct drm_gem_shmem_shrinker - Memory shrinker of GEM shmem memory manager
+ */
+struct drm_gem_shmem_shrinker {
+	/** @base: Shrinker for purging shmem GEM objects */
+	struct shrinker base;
+
+	/** @lock: Protects @lru_* */
+	struct mutex lock;
+
+	/** @lru_pinned: List of pinned shmem GEM objects */
+	struct drm_gem_lru lru_pinned;
+
+	/** @lru_evictable: List of shmem GEM objects to be evicted */
+	struct drm_gem_lru lru_evictable;
+
+	/** @lru_evicted: List of evicted shmem GEM objects */
+	struct drm_gem_lru lru_evicted;
+};
+
+/**
+ * struct drm_gem_shmem - GEM shmem memory manager
+ */
+struct drm_gem_shmem {
+	/** @shrinker: GEM shmem shrinker */
+	struct drm_gem_shmem_shrinker shrinker;
+};
+
+int drmm_gem_shmem_init(struct drm_device *dev);
+
 /*
  * Driver ops
  */
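For completeness, a hypothetical sketch of the step-1 evict() callback
referenced in the commit message. The evict() GEM-object hook itself is
added by an earlier patch of this series; the bool-on-success return
convention is assumed here from the drm_gem_object_evict() usage above,
and the mydrv_* name is made up:

  /*
   * Hypothetical evict() callback (step 1 of the commit message).
   * The shrinker invokes it through drm_gem_object_evict() with the
   * reservation lock held.
   */
  static bool mydrv_gem_evict(struct drm_gem_object *obj)
  {
  	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

  	if (drm_gem_shmem_is_purgeable(shmem)) {
  		/* madvised-away object: drop the backing pages for good */
  		drm_gem_shmem_purge(shmem);
  	} else {
  		/*
  		 * Evictable object: pages are unmapped now and brought
  		 * back by drm_gem_shmem_swap_in() on next use or fault.
  		 */
  		drm_gem_shmem_evict(shmem);
  	}

  	return true;
  }

A real driver would typically also tear down its own GPU-side mappings
of the object here, before the backing pages go away.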