From patchwork Tue Apr 4 01:27:27 2023
X-Patchwork-Submitter: Danilo Krummrich <dakr@redhat.com>
X-Patchwork-Id: 78807
From: Danilo Krummrich <dakr@redhat.com>
To: airlied@gmail.com, daniel@ffwll.ch, tzimmermann@suse.de,
    mripard@kernel.org, corbet@lwn.net, christian.koenig@amd.com,
    bskeggs@redhat.com, Liam.Howlett@oracle.com, matthew.brost@intel.com,
    boris.brezillon@collabora.com, alexdeucher@gmail.com, ogabbay@kernel.org,
    bagasdotme@gmail.com, willy@infradead.org, jason@jlekstrand.net
Cc: dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org,
    linux-doc@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Christian König <christian.koenig@amd.com>
Subject: [PATCH drm-next v3 01/15] drm: execution context for GEM buffers v3
Date: Tue, 4 Apr 2023 03:27:27 +0200
Message-Id: <20230404012741.116502-2-dakr@redhat.com>
In-Reply-To: <20230404012741.116502-1-dakr@redhat.com>
References: <20230404012741.116502-1-dakr@redhat.com>

From: Christian König <christian.koenig@amd.com>

This adds the infrastructure for an execution context for GEM buffers
which is similar to the existing TTM execbuf util and intended to replace
it in the long term.

The basic functionality is that we abstract the necessary loop to lock
many different GEM buffers with automated deadlock and duplicate handling.

v2: drop xarray and use a dynamically resized array instead, the locking
    overhead is unnecessary and measurable.
v3: drop duplicate tracking, radeon is really the only one needing that.

Signed-off-by: Christian König <christian.koenig@amd.com>
---
 Documentation/gpu/drm-mm.rst |  12 ++
 drivers/gpu/drm/Kconfig      |   6 +
 drivers/gpu/drm/Makefile     |   2 +
 drivers/gpu/drm/drm_exec.c   | 249 +++++++++++++++++++++++++++++++++++
 include/drm/drm_exec.h       | 115 ++++++++++++++++
 5 files changed, 384 insertions(+)
 create mode 100644 drivers/gpu/drm/drm_exec.c
 create mode 100644 include/drm/drm_exec.h

diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index a79fd3549ff8..a52e6f4117d6 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -493,6 +493,18 @@ DRM Sync Objects
 .. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
    :export:
 
+DRM Execution context
+=====================
+
+.. kernel-doc:: drivers/gpu/drm/drm_exec.c
+   :doc: Overview
+
+.. kernel-doc:: include/drm/drm_exec.h
+   :internal:
+
+.. kernel-doc:: drivers/gpu/drm/drm_exec.c
+   :export:
+
 GPU Scheduler
 =============
 
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index ba3fb04bb691..2dc81eb062eb 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -201,6 +201,12 @@ config DRM_TTM
 	  GPU memory types. Will be enabled automatically if a device driver
 	  uses it.
 
+config DRM_EXEC
+	tristate
+	depends on DRM
+	help
+	  Execution context for command submissions
+
 config DRM_BUDDY
 	tristate
 	depends on DRM
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index a33257d2bc7f..9c6446eb3c83 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -78,6 +78,8 @@ obj-$(CONFIG_DRM_PANEL_ORIENTATION_QUIRKS) += drm_panel_orientation_quirks.o
 #
 # Memory-management helpers
 #
+#
+obj-$(CONFIG_DRM_EXEC) += drm_exec.o
 
 obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o
 
diff --git a/drivers/gpu/drm/drm_exec.c b/drivers/gpu/drm/drm_exec.c
new file mode 100644
index 000000000000..df546cc5a227
--- /dev/null
+++ b/drivers/gpu/drm/drm_exec.c
@@ -0,0 +1,249 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+
+#include <drm/drm_exec.h>
+#include <drm/drm_gem.h>
+#include <linux/dma-resv.h>
+
+/**
+ * DOC: Overview
+ *
+ * This component mainly abstracts the retry loop necessary for locking
+ * multiple GEM objects while preparing hardware operations (e.g. command
+ * submissions, page table updates etc.).
+ *
+ * If a contention is detected while locking a GEM object the cleanup procedure
+ * unlocks all previously locked GEM objects and locks the contended one first
+ * before locking any further objects.
+ *
+ * After an object is locked fence slots can optionally be reserved on the
+ * dma_resv object inside the GEM object.
+ *
+ * A typical usage pattern should look like this::
+ *
+ *	struct drm_gem_object *obj;
+ *	struct drm_exec exec;
+ *	unsigned long index;
+ *	int ret;
+ *
+ *	drm_exec_init(&exec, true);
+ *	drm_exec_while_not_all_locked(&exec) {
+ *		ret = drm_exec_prepare_obj(&exec, boA, 1);
+ *		drm_exec_continue_on_contention(&exec);
+ *		if (ret)
+ *			goto error;
+ *
+ *		ret = drm_exec_prepare_obj(&exec, boB, 1);
+ *		drm_exec_continue_on_contention(&exec);
+ *		if (ret)
+ *			goto error;
+ *	}
+ *
+ *	drm_exec_for_each_locked_object(&exec, index, obj) {
+ *		dma_resv_add_fence(obj->resv, fence, DMA_RESV_USAGE_READ);
+ *		...
+ *	}
+ *	drm_exec_fini(&exec);
+ *
+ * See struct drm_exec for more details.
+ */
+
+/* Dummy value used to initially enter the retry loop */
+#define DRM_EXEC_DUMMY (void*)~0
+
+/* Unlock all objects and drop references */
+static void drm_exec_unlock_all(struct drm_exec *exec)
+{
+	struct drm_gem_object *obj;
+	unsigned long index;
+
+	drm_exec_for_each_locked_object(exec, index, obj) {
+		dma_resv_unlock(obj->resv);
+		drm_gem_object_put(obj);
+	}
+
+	if (exec->prelocked) {
+		dma_resv_unlock(exec->prelocked->resv);
+		drm_gem_object_put(exec->prelocked);
+		exec->prelocked = NULL;
+	}
+}
+
+/**
+ * drm_exec_init - initialize a drm_exec object
+ * @exec: the drm_exec object to initialize
+ * @interruptible: if locks should be acquired interruptibly
+ *
+ * Initialize the object and make sure that we can track locked objects.
+ */
+void drm_exec_init(struct drm_exec *exec, bool interruptible)
+{
+	exec->interruptible = interruptible;
+	exec->objects = kmalloc(PAGE_SIZE, GFP_KERNEL);
+
+	/* If allocation here fails, just delay that till the first use */
+	exec->max_objects = exec->objects ? PAGE_SIZE / sizeof(void *) : 0;
+	exec->num_objects = 0;
+	exec->contended = DRM_EXEC_DUMMY;
+	exec->prelocked = NULL;
+}
+EXPORT_SYMBOL(drm_exec_init);
+
+/**
+ * drm_exec_fini - finalize a drm_exec object
+ * @exec: the drm_exec object to finalize
+ *
+ * Unlock all locked objects, drop the references to objects and free all
+ * memory used for tracking the state.
+ */
+void drm_exec_fini(struct drm_exec *exec)
+{
+	drm_exec_unlock_all(exec);
+	kvfree(exec->objects);
+	if (exec->contended != DRM_EXEC_DUMMY) {
+		drm_gem_object_put(exec->contended);
+		ww_acquire_fini(&exec->ticket);
+	}
+}
+EXPORT_SYMBOL(drm_exec_fini);
+
+/**
+ * drm_exec_cleanup - cleanup when contention is detected
+ * @exec: the drm_exec object to cleanup
+ *
+ * Cleanup the current state and return true if we should stay inside the
+ * retry loop, false if there wasn't any contention detected and we can keep
+ * the objects locked.
+ */
+bool drm_exec_cleanup(struct drm_exec *exec)
+{
+	if (likely(!exec->contended)) {
+		ww_acquire_done(&exec->ticket);
+		return false;
+	}
+
+	if (likely(exec->contended == DRM_EXEC_DUMMY)) {
+		exec->contended = NULL;
+		ww_acquire_init(&exec->ticket, &reservation_ww_class);
+		return true;
+	}
+
+	drm_exec_unlock_all(exec);
+	exec->num_objects = 0;
+	return true;
+}
+EXPORT_SYMBOL(drm_exec_cleanup);
+
+/* Track the locked object in the array */
+static int drm_exec_obj_locked(struct drm_exec *exec,
+			       struct drm_gem_object *obj)
+{
+	if (unlikely(exec->num_objects == exec->max_objects)) {
+		size_t size = exec->max_objects * sizeof(void *);
+		void *tmp;
+
+		tmp = kvrealloc(exec->objects, size, size + PAGE_SIZE,
+				GFP_KERNEL);
+		if (!tmp)
+			return -ENOMEM;
+
+		exec->objects = tmp;
+		exec->max_objects += PAGE_SIZE / sizeof(void *);
+	}
+	drm_gem_object_get(obj);
+	exec->objects[exec->num_objects++] = obj;
+
+	return 0;
+}
+
+/* Make sure the contended object is locked first */
+static int drm_exec_lock_contended(struct drm_exec *exec)
+{
+	struct drm_gem_object *obj = exec->contended;
+	int ret;
+
+	if (likely(!obj))
+		return 0;
+
+	if (exec->interruptible) {
+		ret = dma_resv_lock_slow_interruptible(obj->resv,
+						       &exec->ticket);
+		if (unlikely(ret))
+			goto error_dropref;
+	} else {
+		dma_resv_lock_slow(obj->resv, &exec->ticket);
+	}
+
+	ret = drm_exec_obj_locked(exec, obj);
+	if (unlikely(ret)) {
+		dma_resv_unlock(obj->resv);
+		goto error_dropref;
+	}
+
+	swap(exec->prelocked, obj);
+
+error_dropref:
+	/* Always cleanup the contention so that error handling can kick in */
+	drm_gem_object_put(obj);
+	exec->contended = NULL;
+	return ret;
+}
+
+/**
+ * drm_exec_prepare_obj - prepare a GEM object for use
+ * @exec: the drm_exec object with the state
+ * @obj: the GEM object to prepare
+ * @num_fences: how many fences to reserve
+ *
+ * Prepare a GEM object for use by locking it and reserving fence slots. All
+ * successfully locked objects are put into the locked container.
+ *
+ * Returns: -EDEADLK if a contention is detected, -ENOMEM when memory
+ * allocation failed and zero for success.
+ */
+int drm_exec_prepare_obj(struct drm_exec *exec, struct drm_gem_object *obj,
+			 unsigned int num_fences)
+{
+	int ret;
+
+	ret = drm_exec_lock_contended(exec);
+	if (unlikely(ret))
+		return ret;
+
+	if (exec->prelocked == obj) {
+		drm_gem_object_put(exec->prelocked);
+		exec->prelocked = NULL;
+
+		return dma_resv_reserve_fences(obj->resv, num_fences);
+	}
+
+	if (exec->interruptible)
+		ret = dma_resv_lock_interruptible(obj->resv, &exec->ticket);
+	else
+		ret = dma_resv_lock(obj->resv, &exec->ticket);
+
+	if (unlikely(ret == -EDEADLK)) {
+		drm_gem_object_get(obj);
+		exec->contended = obj;
+		return -EDEADLK;
+	}
+
+	if (unlikely(ret))
+		return ret;
+
+	ret = drm_exec_obj_locked(exec, obj);
+	if (ret)
+		goto error_unlock;
+
+	/* Keep locked when reserving fences fails */
+	return dma_resv_reserve_fences(obj->resv, num_fences);
+
+error_unlock:
+	dma_resv_unlock(obj->resv);
+	return ret;
+}
+EXPORT_SYMBOL(drm_exec_prepare_obj);
+
+MODULE_DESCRIPTION("DRM execution context");
+MODULE_LICENSE("Dual MIT/GPL");
diff --git a/include/drm/drm_exec.h b/include/drm/drm_exec.h
new file mode 100644
index 000000000000..65e518c01db3
--- /dev/null
+++ b/include/drm/drm_exec.h
@@ -0,0 +1,115 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+
+#ifndef __DRM_EXEC_H__
+#define __DRM_EXEC_H__
+
+#include <linux/ww_mutex.h>
+
+struct drm_gem_object;
+
+/**
+ * struct drm_exec - Execution context
+ */
+struct drm_exec {
+	/**
+	 * @interruptible: If locks should be taken interruptibly
+	 */
+	bool interruptible;
+
+	/**
+	 * @ticket: WW ticket used for acquiring locks
+	 */
+	struct ww_acquire_ctx ticket;
+
+	/**
+	 * @num_objects: number of objects locked
+	 */
+	unsigned int num_objects;
+
+	/**
+	 * @max_objects: maximum objects in array
+	 */
+	unsigned int max_objects;
+
+	/**
+	 * @objects: array of the locked objects
+	 */
+	struct drm_gem_object **objects;
+
+	/**
+	 * @contended: contended GEM object we backed off for
+	 */
+	struct drm_gem_object *contended;
+
+	/**
+	 * @prelocked: already locked GEM object because of contention
+	 */
+	struct drm_gem_object *prelocked;
+};
+
+/**
+ * drm_exec_for_each_locked_object - iterate over all the locked objects
+ * @exec: drm_exec object
+ * @index: unsigned long index for the iteration
+ * @obj: the current GEM object
+ *
+ * Iterate over all the locked GEM objects inside the drm_exec object.
+ */
+#define drm_exec_for_each_locked_object(exec, index, obj)	\
+	for (index = 0, obj = (exec)->objects[0];		\
+	     index < (exec)->num_objects;			\
+	     ++index, obj = (exec)->objects[index])
+
+/**
+ * drm_exec_while_not_all_locked - loop until all GEM objects are prepared
+ * @exec: drm_exec object
+ *
+ * Core functionality of the drm_exec object. Loops until all GEM objects are
+ * prepared and no more contention exists.
+ *
+ * At the beginning of the loop it is guaranteed that no GEM object is locked.
+ */
+#define drm_exec_while_not_all_locked(exec)	\
+	while (drm_exec_cleanup(exec))
+
+/**
+ * drm_exec_continue_on_contention - continue the loop when we need to cleanup
+ * @exec: drm_exec object
+ *
+ * Control flow helper to continue when a contention was detected and we need
+ * to clean up and re-start the loop to prepare all GEM objects.
+ */
+#define drm_exec_continue_on_contention(exec)		\
+	if (unlikely(drm_exec_is_contended(exec)))	\
+		continue
+
+/**
+ * drm_exec_break_on_contention - break a subordinate loop on contention
+ * @exec: drm_exec object
+ *
+ * Control flow helper to break a subordinate loop when a contention was
+ * detected and we need to clean up and re-start the loop to prepare all GEM
+ * objects.
+ */
+#define drm_exec_break_on_contention(exec)		\
+	if (unlikely(drm_exec_is_contended(exec)))	\
+		break
+
+/**
+ * drm_exec_is_contended - check for contention
+ * @exec: drm_exec object
+ *
+ * Returns true if the drm_exec object has run into some contention while
+ * locking a GEM object and needs to clean up.
+ */
+static inline bool drm_exec_is_contended(struct drm_exec *exec)
+{
+	return !!exec->contended;
+}
+
+void drm_exec_init(struct drm_exec *exec, bool interruptible);
+void drm_exec_fini(struct drm_exec *exec);
+bool drm_exec_cleanup(struct drm_exec *exec);
+int drm_exec_prepare_obj(struct drm_exec *exec, struct drm_gem_object *obj,
+			 unsigned int num_fences);
+
+#endif
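
[Editor's note: to make the retry semantics above concrete, here is a minimal
driver-side sketch built only against the API added by this patch. struct
my_job, its fields and my_submit() are hypothetical names for illustration,
not part of the series; error handling is reduced to the essentials.]

    #include <drm/drm_exec.h>
    #include <drm/drm_gem.h>
    #include <linux/dma-resv.h>

    struct my_job {
    	struct drm_gem_object **bos;	/* BOs touched by this job */
    	unsigned int num_bos;
    	struct dma_fence *done_fence;	/* signaled on job completion */
    };

    static int my_submit(struct my_job *job)
    {
    	struct drm_gem_object *obj;
    	struct drm_exec exec;
    	unsigned long index;
    	unsigned int i;
    	int ret = 0;

    	drm_exec_init(&exec, true);

    	/* Retry loop: on contention drm_exec_cleanup() drops all locks
    	 * and the contended BO is locked first on the next iteration. */
    	drm_exec_while_not_all_locked(&exec) {
    		for (i = 0; i < job->num_bos; i++) {
    			/* Lock the BO and reserve one fence slot. */
    			ret = drm_exec_prepare_obj(&exec, job->bos[i], 1);
    			drm_exec_break_on_contention(&exec);
    			if (ret)
    				goto out;
    		}
    		drm_exec_continue_on_contention(&exec);
    	}

    	/* Everything is locked; attach the job's fence to all BOs. */
    	drm_exec_for_each_locked_object(&exec, index, obj)
    		dma_resv_add_fence(obj->resv, job->done_fence,
    				   DMA_RESV_USAGE_WRITE);

    out:
    	drm_exec_fini(&exec);	/* unlocks and drops all references */
    	return ret;
    }

Note how drm_exec_break_on_contention() leaves the inner loop while
drm_exec_continue_on_contention() restarts the outer one, matching the
pattern described in the header's kernel-doc.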
From patchwork Tue Apr 4 01:27:28 2023
X-Patchwork-Submitter: Danilo Krummrich <dakr@redhat.com>
X-Patchwork-Id: 78817
From: Danilo Krummrich <dakr@redhat.com>
To: airlied@gmail.com, daniel@ffwll.ch, tzimmermann@suse.de,
    mripard@kernel.org, corbet@lwn.net, christian.koenig@amd.com,
    bskeggs@redhat.com, Liam.Howlett@oracle.com, matthew.brost@intel.com,
    boris.brezillon@collabora.com, alexdeucher@gmail.com, ogabbay@kernel.org,
    bagasdotme@gmail.com, willy@infradead.org, jason@jlekstrand.net
Cc: dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org,
    linux-doc@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Danilo Krummrich <dakr@redhat.com>
Subject: [PATCH drm-next v3 02/15] drm_exec: fix double dma_resv unlock
Date: Tue, 4 Apr 2023 03:27:28 +0200
Message-Id: <20230404012741.116502-3-dakr@redhat.com>
In-Reply-To: <20230404012741.116502-1-dakr@redhat.com>
References: <20230404012741.116502-1-dakr@redhat.com>

The prelocked object is tracked in the drm_exec object's array of locked
GEM objects as well, hence drm_exec_unlock_all() would otherwise unlock
its dma_resv twice.

Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
 drivers/gpu/drm/drm_exec.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_exec.c b/drivers/gpu/drm/drm_exec.c
index df546cc5a227..f645d22a0863 100644
--- a/drivers/gpu/drm/drm_exec.c
+++ b/drivers/gpu/drm/drm_exec.c
@@ -62,7 +62,6 @@ static void drm_exec_unlock_all(struct drm_exec *exec)
 	}
 
 	if (exec->prelocked) {
-		dma_resv_unlock(exec->prelocked->resv);
 		drm_gem_object_put(exec->prelocked);
 		exec->prelocked = NULL;
 	}
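
[Editor's note: the reference flow that makes the removed line a double
unlock, reconstructed from patch 1; the annotated excerpt below is for
illustration only.]

    /* In drm_exec_lock_contended() (patch 1) the contended object ends up
     * in two places while its dma_resv is locked only once:
     *
     *	ret = drm_exec_obj_locked(exec, obj);	// added to exec->objects[]
     *	...
     *	swap(exec->prelocked, obj);		// also stored as prelocked
     *
     * drm_exec_unlock_all() therefore already unlocks the prelocked BO in
     * its drm_exec_for_each_locked_object() loop; only the additional
     * reference still has to be dropped, which is what remains after this
     * patch:
     */
    if (exec->prelocked) {
    	drm_gem_object_put(exec->prelocked);
    	exec->prelocked = NULL;
    }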
From patchwork Tue Apr 4 01:27:29 2023
X-Patchwork-Submitter: Danilo Krummrich <dakr@redhat.com>
X-Patchwork-Id: 78815
From: Danilo Krummrich <dakr@redhat.com>
To: airlied@gmail.com, daniel@ffwll.ch, tzimmermann@suse.de,
    mripard@kernel.org, corbet@lwn.net, christian.koenig@amd.com,
    bskeggs@redhat.com, Liam.Howlett@oracle.com, matthew.brost@intel.com,
    boris.brezillon@collabora.com, alexdeucher@gmail.com, ogabbay@kernel.org,
    bagasdotme@gmail.com, willy@infradead.org, jason@jlekstrand.net
Cc: dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org,
    linux-doc@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Danilo Krummrich <dakr@redhat.com>
Subject: [PATCH drm-next v3 03/15] maple_tree: split up MA_STATE() macro
Date: Tue, 4 Apr 2023 03:27:29 +0200
Message-Id: <20230404012741.116502-4-dakr@redhat.com>
In-Reply-To: <20230404012741.116502-1-dakr@redhat.com>
References: <20230404012741.116502-1-dakr@redhat.com>

Split up the MA_STATE() macro such that components using the maple tree
can easily inherit from struct ma_state and build custom tree walk
macros to hide their internals from users.
Example:

	struct sample_iterator {
		struct ma_state mas;
		struct sample_mgr *mgr;
	};

	#define SAMPLE_ITERATOR(name, __mgr, start)			\
		struct sample_iterator name = {				\
			.mas = MA_STATE_INIT(&(__mgr)->mt, start, 0),	\
			.mgr = __mgr,					\
		}

	#define sample_iter_for_each_range(it__, entry__, end__)	\
		mas_for_each(&(it__).mas, entry__, end__)

	---

	struct sample *sample;
	SAMPLE_ITERATOR(si, mgr, min);

	sample_iter_for_each_range(&si, sample, max) {
		frob(mgr, sample);
	}

Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
 include/linux/maple_tree.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h
index 1fadb5f5978b..87d55334f1c2 100644
--- a/include/linux/maple_tree.h
+++ b/include/linux/maple_tree.h
@@ -423,8 +423,8 @@ struct ma_wr_state {
 #define MA_ERROR(err) \
 	((struct maple_enode *)(((unsigned long)err << 2) | 2UL))
 
-#define MA_STATE(name, mt, first, end)					\
-	struct ma_state name = {					\
+#define MA_STATE_INIT(mt, first, end)					\
+	{								\
 		.tree = mt,						\
 		.index = first,						\
 		.last = end,						\
@@ -435,6 +435,9 @@ struct ma_wr_state {
 		.mas_flags = 0,						\
 	}
 
+#define MA_STATE(name, mt, first, end)					\
+	struct ma_state name = MA_STATE_INIT(mt, first, end)
+
 #define MA_WR_STATE(name, ma_state, wr_entry)				\
 	struct ma_wr_state name = {					\
 		.mas = ma_state,					\
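
[Editor's note: a small, self-contained sketch of what the split buys,
assuming this patch is applied. struct my_iter, my_tree and
my_first_entry() are hypothetical; DEFINE_MTREE(), mas_find() and the RCU
helpers are existing kernel APIs. The old MA_STATE() could only declare a
bare ma_state, while MA_STATE_INIT() lets a component initialize an
embedded one.]

    #include <linux/maple_tree.h>
    #include <linux/rcupdate.h>

    struct my_iter {
    	struct ma_state mas;	/* embedded maple tree walk state */
    	void *priv;		/* component-specific extras */
    };

    static DEFINE_MTREE(my_tree);

    static void *my_first_entry(void)
    {
    	/* Embedded ma_state, initialized via the new MA_STATE_INIT(). */
    	struct my_iter it = {
    		.mas = MA_STATE_INIT(&my_tree, 0, 0),
    		.priv = NULL,
    	};
    	void *entry;

    	/* Walk from index 0 up to the given maximum under RCU. */
    	rcu_read_lock();
    	entry = mas_find(&it.mas, ULONG_MAX);
    	rcu_read_unlock();

    	return entry;
    }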
From patchwork Tue Apr 4 01:27:30 2023
X-Patchwork-Submitter: Danilo Krummrich <dakr@redhat.com>
X-Patchwork-Id: 78814
From: Danilo Krummrich <dakr@redhat.com>
To: airlied@gmail.com, daniel@ffwll.ch, tzimmermann@suse.de,
    mripard@kernel.org, corbet@lwn.net, christian.koenig@amd.com,
    bskeggs@redhat.com, Liam.Howlett@oracle.com, matthew.brost@intel.com,
    boris.brezillon@collabora.com, alexdeucher@gmail.com, ogabbay@kernel.org,
    bagasdotme@gmail.com, willy@infradead.org, jason@jlekstrand.net
Cc: dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org,
    linux-doc@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Danilo Krummrich <dakr@redhat.com>,
    Dave Airlie <airlied@redhat.com>
Subject: [PATCH drm-next v3 04/15] drm: manager to keep track of GPUs VA mappings
Date: Tue, 4 Apr 2023 03:27:30 +0200
Message-Id: <20230404012741.116502-5-dakr@redhat.com>
In-Reply-To: <20230404012741.116502-1-dakr@redhat.com>
References: <20230404012741.116502-1-dakr@redhat.com>

Add infrastructure to keep track of GPU virtual address (VA) mappings
with a dedicated VA space manager implementation.

New UAPIs, motivated by the Vulkan sparse memory bindings graphics
drivers have started implementing, allow userspace applications to
request multiple and arbitrary GPU VA mappings of buffer objects. The
DRM GPU VA manager is intended to serve the following purposes in this
context.

1) Provide infrastructure to track GPU VA allocations and mappings,
   making use of the maple_tree.

2) Generically connect GPU VA mappings to their backing buffers, in
   particular DRM GEM objects.

3) Provide a common implementation to perform more complex mapping
   operations on the GPU VA space. In particular splitting and merging
   of GPU VA mappings, e.g. for intersecting mapping requests or partial
   unmap requests.

Suggested-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
 Documentation/gpu/drm-mm.rst    |   31 +
 drivers/gpu/drm/Makefile        |    1 +
 drivers/gpu/drm/drm_gem.c       |    3 +
 drivers/gpu/drm/drm_gpuva_mgr.c | 1686 +++++++++++++++++++++++++++++++
 include/drm/drm_drv.h           |    6 +
 include/drm/drm_gem.h           |   75 ++
 include/drm/drm_gpuva_mgr.h     |  681 +++++++++++++
 7 files changed, 2483 insertions(+)
 create mode 100644 drivers/gpu/drm/drm_gpuva_mgr.c
 create mode 100644 include/drm/drm_gpuva_mgr.h

diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index a52e6f4117d6..c9f120cfe730 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -466,6 +466,37 @@ DRM MM Range Allocator Function References
 .. kernel-doc:: drivers/gpu/drm/drm_mm.c
    :export:
 
+DRM GPU VA Manager
+==================
+
+Overview
+--------
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+   :doc: Overview
+
+Split and Merge
+---------------
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+   :doc: Split and Merge
+
+Locking
+-------
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+   :doc: Locking
+
+
+DRM GPU VA Manager Function References
+--------------------------------------
+
+.. kernel-doc:: include/drm/drm_gpuva_mgr.h
+   :internal:
+
+.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+   :export:
+
 DRM Buddy Allocator
 ===================
 
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 9c6446eb3c83..8eeed446a078 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -45,6 +45,7 @@ drm-y := \
 	drm_vblank.o \
 	drm_vblank_work.o \
 	drm_vma_manager.o \
+	drm_gpuva_mgr.o \
 	drm_writeback.o
 drm-$(CONFIG_DRM_LEGACY) += \
 	drm_agpsupport.o \
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index ee3e11e7177d..dd50c46f21b7 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -164,6 +164,9 @@ void drm_gem_private_object_init(struct drm_device *dev,
 	if (!obj->resv)
 		obj->resv = &obj->_resv;
 
+	if (drm_core_check_feature(dev, DRIVER_GEM_GPUVA))
+		drm_gem_gpuva_init(obj);
+
 	drm_vma_node_reset(&obj->vma_node);
 	INIT_LIST_HEAD(&obj->lru_node);
 }
diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuva_mgr.c
new file mode 100644
index 000000000000..bd7d27ee44bb
--- /dev/null
+++ b/drivers/gpu/drm/drm_gpuva_mgr.c
@@ -0,0 +1,1686 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2022 Red Hat.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Danilo Krummrich <dakr@redhat.com>
+ *
+ */
+
+#include <drm/drm_gpuva_mgr.h>
+#include <drm/drm_gem.h>
+
+/**
+ * DOC: Overview
+ *
+ * The DRM GPU VA Manager, represented by struct drm_gpuva_manager, keeps track
+ * of a GPU's virtual address (VA) space and manages the corresponding virtual
+ * mappings represented by &drm_gpuva objects. It also keeps track of the
+ * mapping's backing &drm_gem_object buffers.
+ *
+ * &drm_gem_object buffers maintain a list (and a corresponding list lock) of
+ * &drm_gpuva objects representing all existent GPU VA mappings using this
+ * &drm_gem_object as backing buffer.
+ *
+ * GPU VAs can be flagged as sparse, such that drivers may use GPU VAs to also
+ * keep track of sparse PTEs in order to support Vulkan 'Sparse Resources'.
+ *
+ * The GPU VA manager internally uses a &maple_tree to manage the
+ * &drm_gpuva mappings within a GPU's virtual address space.
+ *
+ * The &drm_gpuva_manager contains a special &drm_gpuva representing the
+ * portion of VA space reserved by the kernel.
+ * This node is initialized together with the GPU VA manager instance and
+ * removed when the GPU VA manager is destroyed.
+ *
+ * In a typical application drivers would embed struct drm_gpuva_manager and
+ * struct drm_gpuva within their own driver specific structures; there won't
+ * be any memory allocations of its own nor memory allocations of &drm_gpuva
+ * entries.
+ *
+ * However, the &drm_gpuva_manager needs to allocate nodes for its internal
+ * tree structures when &drm_gpuva entries are inserted. In order to support
+ * inserting &drm_gpuva entries from dma-fence signalling critical sections the
+ * &drm_gpuva_manager provides struct drm_gpuva_prealloc. Drivers may create
+ * pre-allocated nodes with drm_gpuva_prealloc_create() and subsequently insert
+ * a new &drm_gpuva entry with drm_gpuva_insert_prealloc().
+ */
+
+/**
+ * DOC: Split and Merge
+ *
+ * The DRM GPU VA manager also provides an algorithm implementing splitting and
+ * merging of existent GPU VA mappings with the ones that are requested to be
+ * mapped or unmapped. This feature is required by the Vulkan API to implement
+ * Vulkan 'Sparse Memory Bindings' - driver UAPIs often refer to this as
+ * VM BIND.
+ *
+ * Drivers can call drm_gpuva_sm_map() to receive a sequence of callbacks
+ * containing map, unmap and remap operations for a given newly requested
+ * mapping. The sequence of callbacks represents the set of operations to
+ * execute in order to integrate the new mapping cleanly into the current state
+ * of the GPU VA space.
+ *
+ * Depending on how the new GPU VA mapping intersects with the existent mappings
+ * of the GPU VA space the &drm_gpuva_fn_ops callbacks contain an arbitrary
+ * number of unmap operations, a maximum of two remap operations and a single
+ * map operation. The caller might receive no callback at all if no operation is
+ * required, e.g. if the requested mapping already exists in the exact same way.
+ *
+ * The single map operation represents the original map operation requested by
+ * the caller.
+ *
+ * &drm_gpuva_op_unmap contains a 'keep' field, which indicates whether the
+ * &drm_gpuva to unmap is physically contiguous with the original mapping
+ * request. Optionally, if 'keep' is set, drivers may keep the actual page table
+ * entries for this &drm_gpuva, adding the missing page table entries only and
+ * update the &drm_gpuva_manager's view of things accordingly.
+ *
+ * Drivers may do the same optimization, namely delta page table updates, also
+ * for remap operations. This is possible since &drm_gpuva_op_remap consists of
+ * one unmap operation and one or two map operations, such that drivers can
+ * derive the page table update delta accordingly.
+ *
+ * Note that there can't be more than two existent mappings to split up, one at
+ * the beginning and one at the end of the new mapping, hence there is a
+ * maximum of two remap operations.
+ *
+ * Analogous to drm_gpuva_sm_map() drm_gpuva_sm_unmap() uses &drm_gpuva_fn_ops
+ * to call back into the driver in order to unmap a range of GPU VA space. The
+ * logic behind this function is way simpler though: For all existent mappings
+ * enclosed by the given range unmap operations are created. For mappings which
+ * are only partially located within the given range, remap operations are
+ * created such that those mappings are split up and re-mapped partially.
+ *
+ * To update the &drm_gpuva_manager's view of the GPU VA space
+ * drm_gpuva_insert(), drm_gpuva_insert_prealloc(), and drm_gpuva_remove() may
+ * be used. Please note that these functions are not safe to be called from a
+ * &drm_gpuva_fn_ops callback originating from drm_gpuva_sm_map() or
+ * drm_gpuva_sm_unmap(). The drm_gpuva_map(), drm_gpuva_remap() and
+ * drm_gpuva_unmap() helpers should be used instead.
+ *
+ * The following diagram depicts the basic relationships of existent GPU VA
+ * mappings, a newly requested mapping and the resulting mappings as implemented
+ * by drm_gpuva_sm_map() - it doesn't cover any arbitrary combinations of these.
+ *
+ * 1) Requested mapping is identical. Replace it, but indicate the backing PTEs
+ *    could be kept.
+ *
+ *    ::
+ *
+ *	     0     a     1
+ *	old: |-----------| (bo_offset=n)
+ *
+ *	     0     a     1
+ *	req: |-----------| (bo_offset=n)
+ *
+ *	     0     a     1
+ *	new: |-----------| (bo_offset=n)
+ *
+ *
+ * 2) Requested mapping is identical, except for the BO offset, hence replace
+ *    the mapping.
+ *
+ *    ::
+ *
+ *	     0     a     1
+ *	old: |-----------| (bo_offset=n)
+ *
+ *	     0     a     1
+ *	req: |-----------| (bo_offset=m)
+ *
+ *	     0     a     1
+ *	new: |-----------| (bo_offset=m)
+ *
+ *
+ * 3) Requested mapping is identical, except for the backing BO, hence replace
+ *    the mapping.
+ *
+ *    ::
+ *
+ *	     0     a     1
+ *	old: |-----------| (bo_offset=n)
+ *
+ *	     0     b     1
+ *	req: |-----------| (bo_offset=n)
+ *
+ *	     0     b     1
+ *	new: |-----------| (bo_offset=n)
+ *
+ *
+ * 4) Existent mapping is a left aligned subset of the requested one, hence
+ *    replace the existent one.
+ *
+ *    ::
+ *
+ *	     0  a  1
+ *	old: |-----|       (bo_offset=n)
+ *
+ *	     0     a     2
+ *	req: |-----------| (bo_offset=n)
+ *
+ *	     0     a     2
+ *	new: |-----------| (bo_offset=n)
+ *
+ *    .. note::
+ *       We expect to see the same result for a request with a different BO
+ *       and/or non-contiguous BO offset.
+ *
+ *
+ * 5) Requested mapping's range is a left aligned subset of the existent one,
+ *    but backed by a different BO. Hence, map the requested mapping and split
+ *    the existent one adjusting its BO offset.
+ *
+ *    ::
+ *
+ *	     0     a     2
+ *	old: |-----------| (bo_offset=n)
+ *
+ *	     0  b  1
+ *	req: |-----|       (bo_offset=n)
+ *
+ *	     0  b  1  a' 2
+ *	new: |-----|-----| (b.bo_offset=n, a.bo_offset=n+1)
+ *
+ *    .. note::
+ *       We expect to see the same result for a request with a different BO
+ *       and/or non-contiguous BO offset.
+ *
+ *
+ * 6) Existent mapping is a superset of the requested mapping. Split it up, but
+ *    indicate that the backing PTEs could be kept.
+ *
+ *    ::
+ *
+ *	     0     a     2
+ *	old: |-----------| (bo_offset=n)
+ *
+ *	     0  a  1
+ *	req: |-----|       (bo_offset=n)
+ *
+ *	     0  a  1  a' 2
+ *	new: |-----|-----| (a.bo_offset=n, a'.bo_offset=n+1)
+ *
+ *
+ * 7) Requested mapping's range is a right aligned subset of the existent one,
+ *    but backed by a different BO. Hence, map the requested mapping and split
+ *    the existent one, without adjusting the BO offset.
+ *
+ *    ::
+ *
+ *	     0     a     2
+ *	old: |-----------| (bo_offset=n)
+ *
+ *	           1  b  2
+ *	req:       |-----| (bo_offset=m)
+ *
+ *	     0  a  1  b  2
+ *	new: |-----|-----| (a.bo_offset=n,b.bo_offset=m)
+ *
+ *
+ * 8) Existent mapping is a superset of the requested mapping. Split it up, but
+ *    indicate that the backing PTEs could be kept.
+ *
+ *    ::
+ *
+ *	     0     a     2
+ *	old: |-----------| (bo_offset=n)
+ *
+ *	           1  a  2
+ *	req:       |-----| (bo_offset=n+1)
+ *
+ *	     0  a' 1  a  2
+ *	new: |-----|-----| (a'.bo_offset=n, a.bo_offset=n+1)
+ *
+ *
+ * 9) Existent mapping is overlapped at the end by the requested mapping backed
+ *    by a different BO. Hence, map the requested mapping and split up the
+ *    existent one, without adjusting the BO offset.
+ *
+ *    ::
+ *
+ *	     0     a     2
+ *	old: |-----------|       (bo_offset=n)
+ *
+ *	           1     b     3
+ *	req:       |-----------| (bo_offset=m)
+ *
+ *	     0  a  1     b     3
+ *	new: |-----|-----------| (a.bo_offset=n,b.bo_offset=m)
+ *
+ *
+ * 10) Existent mapping is overlapped by the requested mapping, both having the
+ *     same backing BO with a contiguous offset. Indicate the backing PTEs of
+ *     the old mapping could be kept.
+ *
+ *     ::
+ *
+ *	      0     a     2
+ *	 old: |-----------|       (bo_offset=n)
+ *
+ *	            1     a     3
+ *	 req:       |-----------| (bo_offset=n+1)
+ *
+ *	      0  a' 1     a     3
+ *	 new: |-----|-----------| (a'.bo_offset=n, a.bo_offset=n+1)
+ *
+ *
+ * 11) Requested mapping's range is a centered subset of the existent one
+ *     having a different backing BO. Hence, map the requested mapping and split
+ *     up the existent one in two mappings, adjusting the BO offset of the right
+ *     one accordingly.
+ *
+ *     ::
+ *
+ *	      0        a        3
+ *	 old: |-----------------| (bo_offset=n)
+ *
+ *	            1  b  2
+ *	 req:       |-----|       (bo_offset=m)
+ *
+ *	      0  a  1  b  2  a' 3
+ *	 new: |-----|-----|-----| (a.bo_offset=n,b.bo_offset=m,a'.bo_offset=n+2)
+ *
+ *
+ * 12) Requested mapping is a contiguous subset of the existent one. Split it
+ *     up, but indicate that the backing PTEs could be kept.
+ *
+ *     ::
+ *
+ *	      0        a        3
+ *	 old: |-----------------| (bo_offset=n)
+ *
+ *	            1  a  2
+ *	 req:       |-----|       (bo_offset=n+1)
+ *
+ *	      0  a' 1  a  2 a'' 3
+ *	 new: |-----|-----|-----| (a'.bo_offset=n, a.bo_offset=n+1, a''.bo_offset=n+2)
+ *
+ *
+ * 13) Existent mapping is a right aligned subset of the requested one, hence
+ *     replace the existent one.
+ *
+ *     ::
+ *
+ *	            1  a  2
+ *	 old:       |-----| (bo_offset=n+1)
+ *
+ *	      0     a     2
+ *	 req: |-----------| (bo_offset=n)
+ *
+ *	      0     a     2
+ *	 new: |-----------| (bo_offset=n)
+ *
+ *     .. note::
+ *        We expect to see the same result for a request with a different bo
+ *        and/or non-contiguous bo_offset.
+ *
+ *
+ * 14) Existent mapping is a centered subset of the requested one, hence
+ *     replace the existent one.
+ *
+ *     ::
+ *
+ *	            1  a  2
+ *	 old:       |-----|      (bo_offset=n+1)
+ *
+ *	      0        a       3
+ *	 req: |----------------| (bo_offset=n)
+ *
+ *	      0        a       3
+ *	 new: |----------------| (bo_offset=n)
+ *
+ *     .. note::
+ *        We expect to see the same result for a request with a different bo
+ *        and/or non-contiguous bo_offset.
+ *
+ *
+ * 15) Existent mapping is overlapped at the beginning by the requested mapping
+ *     backed by a different BO. Hence, map the requested mapping and split up
+ *     the existent one, adjusting its BO offset accordingly.
+ *
+ *     ::
+ *
+ *	            1     a     3
+ *	 old:       |-----------| (bo_offset=n)
+ *
+ *	      0     b     2
+ *	 req: |-----------|       (bo_offset=m)
+ *
+ *	      0     b     2  a' 3
+ *	 new: |-----------|-----| (b.bo_offset=m,a.bo_offset=n+2)
+ */
+
+/**
+ * DOC: Locking
+ *
+ * Generally, the GPU VA manager does not take care of locking itself; it is
+ * the driver's responsibility to take care of locking. Drivers might want to
+ * protect the following operations: inserting, removing and iterating
+ * &drm_gpuva objects as well as generating all kinds of operations, such as
+ * split / merge or prefetch.
+ *
+ * The GPU VA manager also does not take care of the locking of the backing
+ * &drm_gem_object buffers GPU VA lists by itself; drivers are responsible to
+ * enforce mutual exclusion.
+ */
+
+ /*
+ * Maple Tree Locking
+ *
+ * The maple tree's advanced API requires the user of the API to protect
+ * certain tree operations with a lock (either the external or internal tree
+ * lock) for tree internal reasons.
+ *
+ * The actual rules (when to acquire/release the lock) are enforced by lockdep
+ * through the maple tree implementation.
+ *
+ * For this reason the DRM GPUVA manager takes the maple tree's internal
+ * spinlock according to the lockdep-enforced rules.
+ *
+ * Please note that this lock is *only* meant to fulfill the maple tree's
+ * requirements and is not intended to protect the DRM GPUVA manager against
+ * concurrent access.
+ *
+ * The following mail thread provides more details on why the maple tree
+ * has this requirement.
+ *
+ * https://lore.kernel.org/lkml/20230217134422.14116-5-dakr@redhat.com/
+ */
+
+static int __drm_gpuva_insert(struct drm_gpuva_manager *mgr,
+			      struct drm_gpuva *va);
+static void __drm_gpuva_remove(struct drm_gpuva *va);
+
+/**
+ * drm_gpuva_manager_init - initialize a &drm_gpuva_manager
+ * @mgr: pointer to the &drm_gpuva_manager to initialize
+ * @name: the name of the GPU VA space
+ * @start_offset: the start offset of the GPU VA space
+ * @range: the size of the GPU VA space
+ * @reserve_offset: the start of the kernel reserved GPU VA area
+ * @reserve_range: the size of the kernel reserved GPU VA area
+ * @ops: &drm_gpuva_fn_ops called on &drm_gpuva_sm_map / &drm_gpuva_sm_unmap
+ *
+ * The &drm_gpuva_manager must be initialized with this function before use.
+ *
+ * Note that @mgr must be cleared to 0 before calling this function. The given
+ * @name is expected to be managed by the surrounding driver structures.
+ */
+void
+drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
+		       const char *name,
+		       u64 start_offset, u64 range,
+		       u64 reserve_offset, u64 reserve_range,
+		       struct drm_gpuva_fn_ops *ops)
+{
+	mt_init(&mgr->mtree);
+
+	mgr->mm_start = start_offset;
+	mgr->mm_range = range;
+
+	mgr->name = name ? name : "unknown";
+	mgr->ops = ops;
+
+	memset(&mgr->kernel_alloc_node, 0, sizeof(struct drm_gpuva));
+
+	if (reserve_range) {
+		mgr->kernel_alloc_node.va.addr = reserve_offset;
+		mgr->kernel_alloc_node.va.range = reserve_range;
+
+		__drm_gpuva_insert(mgr, &mgr->kernel_alloc_node);
+	}
+}
+EXPORT_SYMBOL(drm_gpuva_manager_init);
+
+/**
+ * drm_gpuva_manager_destroy - cleanup a &drm_gpuva_manager
+ * @mgr: pointer to the &drm_gpuva_manager to clean up
+ *
+ * Note that it is a bug to call this function on a manager that still
+ * holds GPU VA mappings.
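+ *
+ * A minimal lifecycle sketch; the VA space bounds, the reserved kernel area
+ * and the mydrv_gpuva_fn_ops instance are illustrative assumptions, not part
+ * of this API: ::
+ *
+ *	struct drm_gpuva_manager mgr;
+ *
+ *	memset(&mgr, 0, sizeof(mgr));
+ *	drm_gpuva_manager_init(&mgr, "mydrv-vm", 0, 1ULL << 47,
+ *			       0, PAGE_SIZE, &mydrv_gpuva_fn_ops);
+ *
+ *	...	(insert and remove mappings; the tree must be empty again)
+ *
+ *	drm_gpuva_manager_destroy(&mgr);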
+ */
+void
+drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr)
+{
+	mgr->name = NULL;
+
+	if (mgr->kernel_alloc_node.va.range)
+		__drm_gpuva_remove(&mgr->kernel_alloc_node);
+
+	mtree_lock(&mgr->mtree);
+	WARN(!mtree_empty(&mgr->mtree),
+	     "GPUVA tree is not empty, potentially leaking memory.");
+	__mt_destroy(&mgr->mtree);
+	mtree_unlock(&mgr->mtree);
+}
+EXPORT_SYMBOL(drm_gpuva_manager_destroy);
+
+static inline bool
+drm_gpuva_in_mm_range(struct drm_gpuva_manager *mgr, u64 addr, u64 range)
+{
+	u64 end = addr + range;
+	u64 mm_start = mgr->mm_start;
+	u64 mm_end = mm_start + mgr->mm_range;
+
+	return addr < mm_end && mm_start < end;
+}
+
+static inline bool
+drm_gpuva_in_kernel_node(struct drm_gpuva_manager *mgr, u64 addr, u64 range)
+{
+	u64 end = addr + range;
+	u64 kstart = mgr->kernel_alloc_node.va.addr;
+	u64 krange = mgr->kernel_alloc_node.va.range;
+	u64 kend = kstart + krange;
+
+	return krange && addr < kend && kstart < end;
+}
+
+static inline bool
+drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
+		      u64 addr, u64 range)
+{
+	return drm_gpuva_in_mm_range(mgr, addr, range) &&
+	       !drm_gpuva_in_kernel_node(mgr, addr, range);
+}
+
+/**
+ * drm_gpuva_iter_remove - removes the iterator's current element
+ * @it: the &drm_gpuva_iterator
+ *
+ * This removes the element the iterator currently points to.
+ */
+void
+drm_gpuva_iter_remove(struct drm_gpuva_iterator *it)
+{
+	mas_lock(&it->mas);
+	mas_erase(&it->mas);
+	mas_unlock(&it->mas);
+}
+EXPORT_SYMBOL(drm_gpuva_iter_remove);
+
+/**
+ * drm_gpuva_prealloc_create - creates a preallocated node to store a
+ * &drm_gpuva entry.
+ *
+ * Returns: the &drm_gpuva_prealloc object on success, NULL on failure
+ */
+struct drm_gpuva_prealloc *
+drm_gpuva_prealloc_create(void)
+{
+	struct drm_gpuva_prealloc *pa;
+
+	pa = kzalloc(sizeof(*pa), GFP_KERNEL);
+	if (!pa)
+		return NULL;
+
+	if (mas_preallocate(&pa->mas, GFP_KERNEL)) {
+		kfree(pa);
+		return NULL;
+	}
+
+	return pa;
+}
+EXPORT_SYMBOL(drm_gpuva_prealloc_create);
+
+/**
+ * drm_gpuva_prealloc_destroy - destroys a preallocated node and frees the
+ * &drm_gpuva_prealloc
+ * @pa: the &drm_gpuva_prealloc to destroy
+ */
+void
+drm_gpuva_prealloc_destroy(struct drm_gpuva_prealloc *pa)
+{
+	mas_destroy(&pa->mas);
+	kfree(pa);
+}
+EXPORT_SYMBOL(drm_gpuva_prealloc_destroy);
+
+static int
+drm_gpuva_insert_state(struct drm_gpuva_manager *mgr,
+		       struct ma_state *mas,
+		       struct drm_gpuva *va)
+{
+	u64 addr = va->va.addr;
+	u64 range = va->va.range;
+	u64 last = addr + range - 1;
+
+	mas_set(mas, addr);
+
+	mas_lock(mas);
+	if (unlikely(mas_walk(mas))) {
+		mas_unlock(mas);
+		return -EEXIST;
+	}
+
+	if (unlikely(mas->last < last)) {
+		mas_unlock(mas);
+		return -EEXIST;
+	}
+
+	mas->index = addr;
+	mas->last = last;
+
+	mas_store_prealloc(mas, va);
+	mas_unlock(mas);
+
+	va->mgr = mgr;
+
+	return 0;
+}
+
+static int
+__drm_gpuva_insert(struct drm_gpuva_manager *mgr,
+		   struct drm_gpuva *va)
+{
+	MA_STATE(mas, &mgr->mtree, 0, 0);
+	int ret;
+
+	ret = mas_preallocate(&mas, GFP_KERNEL);
+	if (ret)
+		return ret;
+
+	return drm_gpuva_insert_state(mgr, &mas, va);
+}
+
+/**
+ * drm_gpuva_insert - insert a &drm_gpuva
+ * @mgr: the &drm_gpuva_manager to insert the &drm_gpuva in
+ * @va: the &drm_gpuva to insert
+ *
+ * Insert a &drm_gpuva with a given address and range into a
+ * &drm_gpuva_manager.
+ *
+ * It is not allowed to use this function while iterating this GPU VA space,
+ * e.g. via drm_gpuva_iter_for_each().
+ *
+ * Returns: 0 on success, negative error code on failure.
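+ *
+ * A usage sketch, assuming a &drm_gpuva embedded in a driver structure and
+ * filled in by the driver beforehand: ::
+ *
+ *	va->va.addr = addr;
+ *	va->va.range = range;
+ *	va->gem.obj = obj;
+ *	va->gem.offset = offset;
+ *
+ *	ret = drm_gpuva_insert(mgr, va);
+ *	if (ret)
+ *		return ret;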
+ */
+int
+drm_gpuva_insert(struct drm_gpuva_manager *mgr,
+		 struct drm_gpuva *va)
+{
+	u64 addr = va->va.addr;
+	u64 range = va->va.range;
+
+	if (unlikely(!drm_gpuva_range_valid(mgr, addr, range)))
+		return -EINVAL;
+
+	return __drm_gpuva_insert(mgr, va);
+}
+EXPORT_SYMBOL(drm_gpuva_insert);
+
+/**
+ * drm_gpuva_insert_prealloc - insert a &drm_gpuva with a preallocated node
+ * @mgr: the &drm_gpuva_manager to insert the &drm_gpuva in
+ * @pa: the &drm_gpuva_prealloc node
+ * @va: the &drm_gpuva to insert
+ *
+ * Insert a &drm_gpuva with a given address and range into a
+ * &drm_gpuva_manager.
+ *
+ * It is not allowed to use this function while iterating this GPU VA space,
+ * e.g. via drm_gpuva_iter_for_each().
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuva_insert_prealloc(struct drm_gpuva_manager *mgr,
+			  struct drm_gpuva_prealloc *pa,
+			  struct drm_gpuva *va)
+{
+	struct ma_state *mas = &pa->mas;
+	u64 addr = va->va.addr;
+	u64 range = va->va.range;
+
+	if (unlikely(!drm_gpuva_range_valid(mgr, addr, range)))
+		return -EINVAL;
+
+	mas->tree = &mgr->mtree;
+	return drm_gpuva_insert_state(mgr, mas, va);
+}
+EXPORT_SYMBOL(drm_gpuva_insert_prealloc);
+
+static void
+__drm_gpuva_remove(struct drm_gpuva *va)
+{
+	MA_STATE(mas, &va->mgr->mtree, va->va.addr, 0);
+
+	mas_lock(&mas);
+	mas_erase(&mas);
+	mas_unlock(&mas);
+}
+
+/**
+ * drm_gpuva_remove - remove a &drm_gpuva
+ * @va: the &drm_gpuva to remove
+ *
+ * This removes the given @va from the underlying tree.
+ *
+ * It is not allowed to use this function while iterating this GPU VA space,
+ * e.g. via drm_gpuva_iter_for_each(). Please use drm_gpuva_iter_remove()
+ * instead.
+ */
+void
+drm_gpuva_remove(struct drm_gpuva *va)
+{
+	struct drm_gpuva_manager *mgr = va->mgr;
+
+	if (unlikely(va == &mgr->kernel_alloc_node)) {
+		WARN(1, "Can't destroy kernel reserved node.\n");
+		return;
+	}
+
+	__drm_gpuva_remove(va);
+}
+EXPORT_SYMBOL(drm_gpuva_remove);
+
+/**
+ * drm_gpuva_link - link a &drm_gpuva
+ * @va: the &drm_gpuva to link
+ *
+ * This adds the given @va to the GPU VA list of the &drm_gem_object it is
+ * associated with.
+ *
+ * This function expects the caller to protect the GEM's GPUVA list against
+ * concurrent access.
+ */
+void
+drm_gpuva_link(struct drm_gpuva *va)
+{
+	if (likely(va->gem.obj))
+		list_add_tail(&va->gem.entry, &va->gem.obj->gpuva.list);
+}
+EXPORT_SYMBOL(drm_gpuva_link);
+
+/**
+ * drm_gpuva_unlink - unlink a &drm_gpuva
+ * @va: the &drm_gpuva to unlink
+ *
+ * This removes the given @va from the GPU VA list of the &drm_gem_object it is
+ * associated with.
+ *
+ * This function expects the caller to protect the GEM's GPUVA list against
+ * concurrent access.
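+ *
+ * For instance, paired with the drm_gem_gpuva_lock() helpers introduced
+ * alongside this manager (a sketch, assuming @va has a valid backing GEM): ::
+ *
+ *	drm_gem_gpuva_lock(va->gem.obj);
+ *	drm_gpuva_unlink(va);
+ *	drm_gem_gpuva_unlock(va->gem.obj);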
+ */
+void
+drm_gpuva_unlink(struct drm_gpuva *va)
+{
+	if (likely(va->gem.obj))
+		list_del_init(&va->gem.entry);
+}
+EXPORT_SYMBOL(drm_gpuva_unlink);
+
+/**
+ * drm_gpuva_find_first - find the first &drm_gpuva in the given range
+ * @mgr: the &drm_gpuva_manager to search in
+ * @addr: the &drm_gpuva's address
+ * @range: the &drm_gpuva's range
+ *
+ * Returns: the first &drm_gpuva within the given range
+ */
+struct drm_gpuva *
+drm_gpuva_find_first(struct drm_gpuva_manager *mgr,
+		     u64 addr, u64 range)
+{
+	MA_STATE(mas, &mgr->mtree, addr, 0);
+	struct drm_gpuva *va;
+
+	mas_lock(&mas);
+	va = mas_find(&mas, addr + range - 1);
+	mas_unlock(&mas);
+
+	return va;
+}
+EXPORT_SYMBOL(drm_gpuva_find_first);
+
+/**
+ * drm_gpuva_find - find a &drm_gpuva
+ * @mgr: the &drm_gpuva_manager to search in
+ * @addr: the &drm_gpuva's address
+ * @range: the &drm_gpuva's range
+ *
+ * Returns: the &drm_gpuva at the given @addr and with the given @range
+ */
+struct drm_gpuva *
+drm_gpuva_find(struct drm_gpuva_manager *mgr,
+	       u64 addr, u64 range)
+{
+	struct drm_gpuva *va;
+
+	va = drm_gpuva_find_first(mgr, addr, range);
+	if (!va)
+		goto out;
+
+	if (va->va.addr != addr ||
+	    va->va.range != range)
+		goto out;
+
+	return va;
+
+out:
+	return NULL;
+}
+EXPORT_SYMBOL(drm_gpuva_find);
+
+/**
+ * drm_gpuva_find_prev - find the &drm_gpuva before the given address
+ * @mgr: the &drm_gpuva_manager to search in
+ * @start: the given GPU VA's start address
+ *
+ * Find the adjacent &drm_gpuva before the GPU VA with the given @start
+ * address.
+ *
+ * Note that if there is any free space between the GPU VA mappings no mapping
+ * is returned.
+ *
+ * Returns: a pointer to the found &drm_gpuva or NULL if none was found
+ */
+struct drm_gpuva *
+drm_gpuva_find_prev(struct drm_gpuva_manager *mgr, u64 start)
+{
+	MA_STATE(mas, &mgr->mtree, start - 1, 0);
+	struct drm_gpuva *va;
+
+	if (start <= mgr->mm_start ||
+	    start > (mgr->mm_start + mgr->mm_range))
+		return NULL;
+
+	mas_lock(&mas);
+	va = mas_walk(&mas);
+	mas_unlock(&mas);
+
+	return va;
+}
+EXPORT_SYMBOL(drm_gpuva_find_prev);
+
+/**
+ * drm_gpuva_find_next - find the &drm_gpuva after the given address
+ * @mgr: the &drm_gpuva_manager to search in
+ * @end: the given GPU VA's end address
+ *
+ * Find the adjacent &drm_gpuva after the GPU VA with the given @end address.
+ *
+ * Note that if there is any free space between the GPU VA mappings no mapping
+ * is returned.
+ *
+ * Returns: a pointer to the found &drm_gpuva or NULL if none was found
+ */
+struct drm_gpuva *
+drm_gpuva_find_next(struct drm_gpuva_manager *mgr, u64 end)
+{
+	MA_STATE(mas, &mgr->mtree, end, 0);
+	struct drm_gpuva *va;
+
+	if (end < mgr->mm_start ||
+	    end >= (mgr->mm_start + mgr->mm_range))
+		return NULL;
+
+	mas_lock(&mas);
+	va = mas_walk(&mas);
+	mas_unlock(&mas);
+
+	return va;
+}
+EXPORT_SYMBOL(drm_gpuva_find_next);
+
+/**
+ * drm_gpuva_interval_empty - indicate whether a given interval of the VA space
+ * is empty
+ * @mgr: the &drm_gpuva_manager to check the range for
+ * @addr: the start address of the range
+ * @range: the range of the interval
+ *
+ * Returns: true if the interval is empty, false otherwise
+ */
+bool
+drm_gpuva_interval_empty(struct drm_gpuva_manager *mgr, u64 addr, u64 range)
+{
+	DRM_GPUVA_ITER(it, mgr, addr);
+	struct drm_gpuva *va;
+
+	drm_gpuva_iter_for_each_range(va, it, addr + range)
+		return false;
+
+	return true;
+}
+EXPORT_SYMBOL(drm_gpuva_interval_empty);
+
+/**
+ * drm_gpuva_map - helper to insert a &drm_gpuva from &drm_gpuva_fn_ops
+ * callbacks
+ * @mgr: the &drm_gpuva_manager
+ * @pa: the &drm_gpuva_prealloc
+ * @va: the &drm_gpuva to insert
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuva_map(struct drm_gpuva_manager *mgr,
+	      struct drm_gpuva_prealloc *pa,
+	      struct drm_gpuva *va)
+{
+	return drm_gpuva_insert_prealloc(mgr, pa, va);
+}
+EXPORT_SYMBOL(drm_gpuva_map);
+
+/**
+ * drm_gpuva_remap - helper to remap a &drm_gpuva from &drm_gpuva_fn_ops
+ * callbacks
+ * @state: the current &drm_gpuva_state
+ * @prev: the &drm_gpuva to remap when keeping the start of a mapping,
+ * may be NULL
+ * @next: the &drm_gpuva to remap when keeping the end of a mapping,
+ * may be NULL
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuva_remap(drm_gpuva_state_t state,
+		struct drm_gpuva *prev,
+		struct drm_gpuva *next)
+{
+	struct ma_state *mas = &state->mas;
+	u64 max = mas->last;
+
+	if (unlikely(!prev && !next))
+		return -EINVAL;
+
+	if (prev) {
+		u64 addr = prev->va.addr;
+		u64 last = addr + prev->va.range - 1;
+
+		if (unlikely(addr != mas->index))
+			return -EINVAL;
+
+		if (unlikely(last >= mas->last))
+			return -EINVAL;
+	}
+
+	if (next) {
+		u64 addr = next->va.addr;
+		u64 last = addr + next->va.range - 1;
+
+		if (unlikely(last != mas->last))
+			return -EINVAL;
+
+		if (unlikely(addr <= mas->index))
+			return -EINVAL;
+	}
+
+	if (prev && next) {
+		u64 p_last = prev->va.addr + prev->va.range - 1;
+		u64 n_addr = next->va.addr;
+
+		if (unlikely(p_last > n_addr))
+			return -EINVAL;
+
+		if (unlikely(n_addr - p_last <= 1))
+			return -EINVAL;
+	}
+
+	mas_lock(mas);
+	if (prev) {
+		mas_store(mas, prev);
+		mas_next(mas, max);
+		if (!next)
+			mas_store(mas, NULL);
+	}
+
+	if (next) {
+		mas->last = next->va.addr - 1;
+		mas_store(mas, NULL);
+		mas_next(mas, max);
+		mas_store(mas, next);
+	}
+	mas_unlock(mas);
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_gpuva_remap);
+
+/**
+ * drm_gpuva_unmap - helper to remove a &drm_gpuva from &drm_gpuva_fn_ops
+ * callbacks
+ * @state: the current &drm_gpuva_state
+ *
+ * The entry associated with the current state is removed.
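+ *
+ * For instance, a driver's &drm_gpuva_fn_ops.sm_step_unmap callback could be
+ * as simple as the following sketch (the mydrv_* name is hypothetical): ::
+ *
+ *	static int mydrv_sm_step_unmap(struct drm_gpuva_op *op,
+ *				       drm_gpuva_state_t state, void *priv)
+ *	{
+ *		struct drm_gpuva *va = op->unmap.va;
+ *
+ *		...	(tear down the PTEs backing va)
+ *
+ *		drm_gpuva_unmap(state);
+ *		drm_gpuva_unlink(va);
+ *		return 0;
+ *	}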
+ */
+void
+drm_gpuva_unmap(drm_gpuva_state_t state)
+{
+	drm_gpuva_iter_remove(state);
+}
+EXPORT_SYMBOL(drm_gpuva_unmap);
+
+static int
+op_map_cb(struct drm_gpuva_fn_ops *fn, void *priv,
+	  u64 addr, u64 range,
+	  struct drm_gem_object *obj, u64 offset)
+{
+	struct drm_gpuva_op op = {};
+
+	op.op = DRM_GPUVA_OP_MAP;
+	op.map.va.addr = addr;
+	op.map.va.range = range;
+	op.map.gem.obj = obj;
+	op.map.gem.offset = offset;
+
+	return fn->sm_step_map(&op, priv);
+}
+
+static int
+op_remap_cb(struct drm_gpuva_fn_ops *fn,
+	    drm_gpuva_state_t state, void *priv,
+	    struct drm_gpuva_op_map *prev,
+	    struct drm_gpuva_op_map *next,
+	    struct drm_gpuva_op_unmap *unmap)
+{
+	struct drm_gpuva_op op = {};
+	struct drm_gpuva_op_remap *r;
+
+	op.op = DRM_GPUVA_OP_REMAP;
+	r = &op.remap;
+	r->prev = prev;
+	r->next = next;
+	r->unmap = unmap;
+
+	return fn->sm_step_remap(&op, state, priv);
+}
+
+static int
+op_unmap_cb(struct drm_gpuva_fn_ops *fn,
+	    drm_gpuva_state_t state, void *priv,
+	    struct drm_gpuva *va, bool merge)
+{
+	struct drm_gpuva_op op = {};
+
+	op.op = DRM_GPUVA_OP_UNMAP;
+	op.unmap.va = va;
+	op.unmap.keep = merge;
+
+	return fn->sm_step_unmap(&op, state, priv);
+}
+
+static int
+__drm_gpuva_sm_map(struct drm_gpuva_manager *mgr,
+		   struct drm_gpuva_fn_ops *ops, void *priv,
+		   u64 req_addr, u64 req_range,
+		   struct drm_gem_object *req_obj, u64 req_offset)
+{
+	DRM_GPUVA_ITER(it, mgr, req_addr);
+	struct drm_gpuva *va, *prev = NULL;
+	u64 req_end = req_addr + req_range;
+	int ret;
+
+	if (unlikely(!drm_gpuva_in_mm_range(mgr, req_addr, req_range)))
+		return -EINVAL;
+
+	if (unlikely(drm_gpuva_in_kernel_node(mgr, req_addr, req_range)))
+		return -EINVAL;
+
+	drm_gpuva_iter_for_each_range(va, it, req_end) {
+		struct drm_gem_object *obj = va->gem.obj;
+		u64 offset = va->gem.offset;
+		u64 addr = va->va.addr;
+		u64 range = va->va.range;
+		u64 end = addr + range;
+		bool merge = !!va->gem.obj;
+
+		if (addr == req_addr) {
+			merge &= obj == req_obj &&
+				 offset == req_offset;
+
+			if (end == req_end) {
+				ret = op_unmap_cb(ops, &it, priv, va, merge);
+				if (ret)
+					return ret;
+				break;
+			}
+
+			if (end < req_end) {
+				ret = op_unmap_cb(ops, &it, priv, va, merge);
+				if (ret)
+					return ret;
+				goto next;
+			}
+
+			if (end > req_end) {
+				struct drm_gpuva_op_map n = {
+					.va.addr = req_end,
+					.va.range = range - req_range,
+					.gem.obj = obj,
+					.gem.offset = offset + req_range,
+				};
+				struct drm_gpuva_op_unmap u = {
+					.va = va,
+					.keep = merge,
+				};
+
+				ret = op_remap_cb(ops, &it, priv, NULL, &n, &u);
+				if (ret)
+					return ret;
+				break;
+			}
+		} else if (addr < req_addr) {
+			u64 ls_range = req_addr - addr;
+			struct drm_gpuva_op_map p = {
+				.va.addr = addr,
+				.va.range = ls_range,
+				.gem.obj = obj,
+				.gem.offset = offset,
+			};
+			struct drm_gpuva_op_unmap u = { .va = va };
+
+			merge &= obj == req_obj &&
+				 offset + ls_range == req_offset;
+			u.keep = merge;
+
+			if (end == req_end) {
+				ret = op_remap_cb(ops, &it, priv, &p, NULL, &u);
+				if (ret)
+					return ret;
+				break;
+			}
+
+			if (end < req_end) {
+				ret = op_remap_cb(ops, &it, priv, &p, NULL, &u);
+				if (ret)
+					return ret;
+				goto next;
+			}
+
+			if (end > req_end) {
+				struct drm_gpuva_op_map n = {
+					.va.addr = req_end,
+					.va.range = end - req_end,
+					.gem.obj = obj,
+					.gem.offset = offset + ls_range +
+						      req_range,
+				};
+
+				ret = op_remap_cb(ops, &it, priv, &p, &n, &u);
+				if (ret)
+					return ret;
+				break;
+			}
+		} else if (addr > req_addr) {
+			merge &= obj == req_obj &&
+				 offset == req_offset +
+					   (addr - req_addr);
+
+			if (end == req_end) {
+				ret = op_unmap_cb(ops, &it, priv, va, merge);
+				if (ret)
+					return ret;
+				break;
+			}
+
+			if (end < req_end) {
+				ret = op_unmap_cb(ops, &it, priv, va, merge);
+				if (ret)
+					return ret;
+				goto next;
+			}
+
+			if (end > req_end) {
+				struct drm_gpuva_op_map n = {
+					.va.addr = req_end,
+					.va.range = end - req_end,
+					.gem.obj = obj,
+					.gem.offset = offset + req_end - addr,
+				};
+				struct drm_gpuva_op_unmap u = {
+					.va = va,
+					.keep = merge,
+				};
+
+				ret = op_remap_cb(ops, &it, priv, NULL, &n, &u);
+				if (ret)
+					return ret;
+				break;
+			}
+		}
+next:
+		prev = va;
+	}
+
+	return op_map_cb(ops, priv,
+			 req_addr, req_range,
+			 req_obj, req_offset);
+}
+
+static int
+__drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr,
+		     struct drm_gpuva_fn_ops *ops, void *priv,
+		     u64 req_addr, u64 req_range)
+{
+	DRM_GPUVA_ITER(it, mgr, req_addr);
+	struct drm_gpuva *va;
+	u64 req_end = req_addr + req_range;
+	int ret;
+
+	if (unlikely(drm_gpuva_in_kernel_node(mgr, req_addr, req_range)))
+		return -EINVAL;
+
+	drm_gpuva_iter_for_each_range(va, it, req_end) {
+		struct drm_gpuva_op_map prev = {}, next = {};
+		bool prev_split = false, next_split = false;
+		struct drm_gem_object *obj = va->gem.obj;
+		u64 offset = va->gem.offset;
+		u64 addr = va->va.addr;
+		u64 range = va->va.range;
+		u64 end = addr + range;
+
+		if (addr < req_addr) {
+			prev.va.addr = addr;
+			prev.va.range = req_addr - addr;
+			prev.gem.obj = obj;
+			prev.gem.offset = offset;
+
+			prev_split = true;
+		}
+
+		if (end > req_end) {
+			next.va.addr = req_end;
+			next.va.range = end - req_end;
+			next.gem.obj = obj;
+			next.gem.offset = offset + (req_end - addr);
+
+			next_split = true;
+		}
+
+		if (prev_split || next_split) {
+			struct drm_gpuva_op_unmap unmap = { .va = va };
+
+			ret = op_remap_cb(ops, &it, priv,
+					  prev_split ? &prev : NULL,
+					  next_split ? &next : NULL,
+					  &unmap);
+			if (ret)
+				return ret;
+		} else {
+			ret = op_unmap_cb(ops, &it, priv, va, false);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * drm_gpuva_sm_map - creates the &drm_gpuva_op split/merge steps
+ * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * @priv: pointer to a driver private data structure
+ * @req_addr: the start address of the new mapping
+ * @req_range: the range of the new mapping
+ * @req_obj: the &drm_gem_object to map
+ * @req_offset: the offset within the &drm_gem_object
+ *
+ * This function iterates the given range of the GPU VA space. It utilizes the
+ * &drm_gpuva_fn_ops to call back into the driver providing the split and merge
+ * steps.
+ *
+ * Drivers may use these callbacks to update the GPU VA space right away within
+ * the callback. In case the driver decides to copy and store the operations
+ * for later processing, neither this function nor &drm_gpuva_sm_unmap is
+ * allowed to be called before the &drm_gpuva_manager's view of the GPU VA
+ * space was updated with the previous set of operations. To update the
+ * &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(),
+ * drm_gpuva_insert_prealloc() and/or drm_gpuva_remove() should be used.
+ *
+ * A sequence of callbacks can contain map, unmap and remap operations, but
+ * the sequence of callbacks might also be empty if no operation is required,
+ * e.g. if the requested mapping already exists in the exact same way.
+ *
+ * There can be an arbitrary number of unmap operations, a maximum of two remap
+ * operations and a single map operation. The latter represents the original
+ * map operation requested by the caller.
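+ *
+ * A minimal wiring sketch; the mydrv_* callbacks are hypothetical driver code
+ * and the &drm_gpuva_fn_ops instance is the one passed to
+ * drm_gpuva_manager_init(): ::
+ *
+ *	static struct drm_gpuva_fn_ops mydrv_gpuva_fn_ops = {
+ *		.sm_step_map = mydrv_sm_step_map,
+ *		.sm_step_remap = mydrv_sm_step_remap,
+ *		.sm_step_unmap = mydrv_sm_step_unmap,
+ *	};
+ *
+ *	ret = drm_gpuva_sm_map(mgr, &mydrv_priv, addr, range, obj, offset);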
+ *
+ * Returns: 0 on success or a negative error code
+ */
+int
+drm_gpuva_sm_map(struct drm_gpuva_manager *mgr, void *priv,
+		 u64 req_addr, u64 req_range,
+		 struct drm_gem_object *req_obj, u64 req_offset)
+{
+	struct drm_gpuva_fn_ops *ops = mgr->ops;
+
+	if (unlikely(!(ops && ops->sm_step_map &&
+		       ops->sm_step_remap &&
+		       ops->sm_step_unmap)))
+		return -EINVAL;
+
+	return __drm_gpuva_sm_map(mgr, ops, priv,
+				  req_addr, req_range,
+				  req_obj, req_offset);
+}
+EXPORT_SYMBOL(drm_gpuva_sm_map);
+
+/**
+ * drm_gpuva_sm_unmap - creates the &drm_gpuva_ops to split on unmap
+ * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * @priv: pointer to a driver private data structure
+ * @req_addr: the start address of the range to unmap
+ * @req_range: the range of the mappings to unmap
+ *
+ * This function iterates the given range of the GPU VA space. It utilizes the
+ * &drm_gpuva_fn_ops to call back into the driver providing the operations to
+ * unmap and, if required, split existent mappings.
+ *
+ * Drivers may use these callbacks to update the GPU VA space right away within
+ * the callback. In case the driver decides to copy and store the operations
+ * for later processing, neither this function nor &drm_gpuva_sm_map is allowed
+ * to be called before the &drm_gpuva_manager's view of the GPU VA space was
+ * updated with the previous set of operations. To update the
+ * &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(),
+ * drm_gpuva_insert_prealloc() and/or drm_gpuva_remove() should be used.
+ *
+ * A sequence of callbacks can contain unmap and remap operations, depending on
+ * whether there are actual overlapping mappings to split.
+ *
+ * There can be an arbitrary number of unmap operations and a maximum of two
+ * remap operations.
+ *
+ * Returns: 0 on success or a negative error code
+ */
+int
+drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, void *priv,
+		   u64 req_addr, u64 req_range)
+{
+	struct drm_gpuva_fn_ops *ops = mgr->ops;
+
+	if (unlikely(!(ops && ops->sm_step_remap &&
+		       ops->sm_step_unmap)))
+		return -EINVAL;
+
+	return __drm_gpuva_sm_unmap(mgr, ops, priv,
+				    req_addr, req_range);
+}
+EXPORT_SYMBOL(drm_gpuva_sm_unmap);
+
+static struct drm_gpuva_op *
+gpuva_op_alloc(struct drm_gpuva_manager *mgr)
+{
+	struct drm_gpuva_fn_ops *fn = mgr->ops;
+	struct drm_gpuva_op *op;
+
+	if (fn && fn->op_alloc)
+		op = fn->op_alloc();
+	else
+		op = kzalloc(sizeof(*op), GFP_KERNEL);
+
+	if (unlikely(!op))
+		return NULL;
+
+	return op;
+}
+
+static void
+gpuva_op_free(struct drm_gpuva_manager *mgr,
+	      struct drm_gpuva_op *op)
+{
+	struct drm_gpuva_fn_ops *fn = mgr->ops;
+
+	if (fn && fn->op_free)
+		fn->op_free(op);
+	else
+		kfree(op);
+}
+
+static int
+drm_gpuva_sm_step(struct drm_gpuva_op *__op,
+		  drm_gpuva_state_t state,
+		  void *priv)
+{
+	struct {
+		struct drm_gpuva_manager *mgr;
+		struct drm_gpuva_ops *ops;
+	} *args = priv;
+	struct drm_gpuva_manager *mgr = args->mgr;
+	struct drm_gpuva_ops *ops = args->ops;
+	struct drm_gpuva_op *op;
+
+	op = gpuva_op_alloc(mgr);
+	if (unlikely(!op))
+		goto err;
+
+	memcpy(op, __op, sizeof(*op));
+
+	if (op->op == DRM_GPUVA_OP_REMAP) {
+		struct drm_gpuva_op_remap *__r = &__op->remap;
+		struct drm_gpuva_op_remap *r = &op->remap;
+
+		r->unmap = kmemdup(__r->unmap, sizeof(*r->unmap),
+				   GFP_KERNEL);
+		if (unlikely(!r->unmap))
+			goto err_free_op;
+
+		if (__r->prev) {
+			r->prev = kmemdup(__r->prev, sizeof(*r->prev),
+					  GFP_KERNEL);
+			if (unlikely(!r->prev))
+				goto err_free_unmap;
+		}
+
+		if (__r->next) {
+			r->next = kmemdup(__r->next, sizeof(*r->next),
+					  GFP_KERNEL);
+			if (unlikely(!r->next))
+				goto err_free_prev;
+		}
+	}
+
+	list_add_tail(&op->entry, &ops->list);
+
+	return 0;
+
+err_free_prev:
+	kfree(op->remap.prev);
+err_free_unmap:
+	kfree(op->remap.unmap);
+err_free_op:
+	gpuva_op_free(mgr, op);
+err:
+	return -ENOMEM;
+}
+
+static int
+drm_gpuva_sm_step_map(struct drm_gpuva_op *__op, void *priv)
+{
+	return drm_gpuva_sm_step(__op, NULL, priv);
+}
+
+static struct drm_gpuva_fn_ops gpuva_list_ops = {
+	.sm_step_map = drm_gpuva_sm_step_map,
+	.sm_step_remap = drm_gpuva_sm_step,
+	.sm_step_unmap = drm_gpuva_sm_step,
+};
+
+/**
+ * drm_gpuva_sm_map_ops_create - creates the &drm_gpuva_ops to split and merge
+ * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * @req_addr: the start address of the new mapping
+ * @req_range: the range of the new mapping
+ * @req_obj: the &drm_gem_object to map
+ * @req_offset: the offset within the &drm_gem_object
+ *
+ * This function creates a list of operations to perform splitting and merging
+ * of existent mapping(s) with the newly requested one.
+ *
+ * The list can be iterated with &drm_gpuva_for_each_op and must be processed
+ * in the given order. It can contain map, unmap and remap operations, but it
+ * also can be empty if no operation is required, e.g. if the requested mapping
+ * already exists in the exact same way.
+ *
+ * There can be an arbitrary number of unmap operations, a maximum of two remap
+ * operations and a single map operation. The latter represents the original
+ * map operation requested by the caller.
+ *
+ * Note that before calling this function again with another mapping request it
+ * is necessary to update the &drm_gpuva_manager's view of the GPU VA space.
+ * The previously obtained operations must be either processed or abandoned.
+ * To update the &drm_gpuva_manager's view of the GPU VA space
+ * drm_gpuva_insert(), drm_gpuva_insert_prealloc() and/or drm_gpuva_remove()
+ * should be used.
+ *
+ * After the caller finished processing the returned &drm_gpuva_ops, they must
+ * be freed with &drm_gpuva_ops_free.
+ *
+ * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
+ */
+struct drm_gpuva_ops *
+drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr,
+			    u64 req_addr, u64 req_range,
+			    struct drm_gem_object *req_obj, u64 req_offset)
+{
+	struct drm_gpuva_ops *ops;
+	struct {
+		struct drm_gpuva_manager *mgr;
+		struct drm_gpuva_ops *ops;
+	} args;
+	int ret;
+
+	ops = kzalloc(sizeof(*ops), GFP_KERNEL);
+	if (unlikely(!ops))
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&ops->list);
+
+	args.mgr = mgr;
+	args.ops = ops;
+
+	ret = __drm_gpuva_sm_map(mgr, &gpuva_list_ops, &args,
+				 req_addr, req_range,
+				 req_obj, req_offset);
+	if (ret)
+		goto err_free_ops;
+
+	return ops;
+
+err_free_ops:
+	drm_gpuva_ops_free(mgr, ops);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL(drm_gpuva_sm_map_ops_create);
+
+/**
+ * drm_gpuva_sm_unmap_ops_create - creates the &drm_gpuva_ops to split on unmap
+ * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * @req_addr: the start address of the range to unmap
+ * @req_range: the range of the mappings to unmap
+ *
+ * This function creates a list of operations to perform unmapping and, if
+ * required, splitting of the mappings overlapping the unmap range.
+ *
+ * The list can be iterated with &drm_gpuva_for_each_op and must be processed
+ * in the given order. It can contain unmap and remap operations, depending on
+ * whether there are actual overlapping mappings to split.
+ *
+ * There can be an arbitrary number of unmap operations and a maximum of two
+ * remap operations.
+ *
+ * Note that before calling this function again with another range to unmap it
+ * is necessary to update the &drm_gpuva_manager's view of the GPU VA space.
+ * The previously obtained operations must be processed or abandoned. To update
+ * the &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(),
+ * drm_gpuva_insert_prealloc() and/or drm_gpuva_remove() should be used.
+ *
+ * After the caller finished processing the returned &drm_gpuva_ops, they must
+ * be freed with &drm_gpuva_ops_free.
+ *
+ * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
+ */
+struct drm_gpuva_ops *
+drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr,
+			      u64 req_addr, u64 req_range)
+{
+	struct drm_gpuva_ops *ops;
+	struct {
+		struct drm_gpuva_manager *mgr;
+		struct drm_gpuva_ops *ops;
+	} args;
+	int ret;
+
+	ops = kzalloc(sizeof(*ops), GFP_KERNEL);
+	if (unlikely(!ops))
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&ops->list);
+
+	args.mgr = mgr;
+	args.ops = ops;
+
+	ret = __drm_gpuva_sm_unmap(mgr, &gpuva_list_ops, &args,
+				   req_addr, req_range);
+	if (ret)
+		goto err_free_ops;
+
+	return ops;
+
+err_free_ops:
+	drm_gpuva_ops_free(mgr, ops);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL(drm_gpuva_sm_unmap_ops_create);
+
+/**
+ * drm_gpuva_prefetch_ops_create - creates the &drm_gpuva_ops to prefetch
+ * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * @addr: the start address of the range to prefetch
+ * @range: the range of the mappings to prefetch
+ *
+ * This function creates a list of operations to perform prefetching.
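+ *
+ * A processing sketch; the driver-side prefetch handling is illustrative
+ * only: ::
+ *
+ *	struct drm_gpuva_ops *ops;
+ *	struct drm_gpuva_op *op;
+ *
+ *	ops = drm_gpuva_prefetch_ops_create(mgr, addr, range);
+ *	if (IS_ERR(ops))
+ *		return PTR_ERR(ops);
+ *
+ *	drm_gpuva_for_each_op(op, ops) {
+ *		...	(prefetch op->prefetch.va)
+ *	}
+ *
+ *	drm_gpuva_ops_free(mgr, ops);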
+ *
+ * The list can be iterated with &drm_gpuva_for_each_op and must be processed
+ * in the given order. It can contain prefetch operations.
+ *
+ * There can be an arbitrary number of prefetch operations.
+ *
+ * After the caller finished processing the returned &drm_gpuva_ops, they must
+ * be freed with &drm_gpuva_ops_free.
+ *
+ * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
+ */
+struct drm_gpuva_ops *
+drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr,
+			      u64 addr, u64 range)
+{
+	DRM_GPUVA_ITER(it, mgr, addr);
+	struct drm_gpuva_ops *ops;
+	struct drm_gpuva_op *op;
+	struct drm_gpuva *va;
+	int ret;
+
+	ops = kzalloc(sizeof(*ops), GFP_KERNEL);
+	if (!ops)
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&ops->list);
+
+	drm_gpuva_iter_for_each_range(va, it, addr + range) {
+		op = gpuva_op_alloc(mgr);
+		if (!op) {
+			ret = -ENOMEM;
+			goto err_free_ops;
+		}
+
+		op->op = DRM_GPUVA_OP_PREFETCH;
+		op->prefetch.va = va;
+		list_add_tail(&op->entry, &ops->list);
+	}
+
+	return ops;
+
+err_free_ops:
+	drm_gpuva_ops_free(mgr, ops);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL(drm_gpuva_prefetch_ops_create);
+
+/**
+ * drm_gpuva_gem_unmap_ops_create - creates the &drm_gpuva_ops to unmap a GEM
+ * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * @obj: the &drm_gem_object to unmap
+ *
+ * This function creates a list of operations to perform unmapping for every
+ * GPUVA attached to a GEM.
+ *
+ * The list can be iterated with &drm_gpuva_for_each_op and consists of an
+ * arbitrary number of unmap operations.
+ *
+ * After the caller finished processing the returned &drm_gpuva_ops, they must
+ * be freed with &drm_gpuva_ops_free.
+ *
+ * It is the caller's responsibility to protect the GEM's GPUVA list against
+ * concurrent access.
+ *
+ * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
+ */
+struct drm_gpuva_ops *
+drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
+			       struct drm_gem_object *obj)
+{
+	struct drm_gpuva_ops *ops;
+	struct drm_gpuva_op *op;
+	struct drm_gpuva *va;
+	int ret;
+
+	ops = kzalloc(sizeof(*ops), GFP_KERNEL);
+	if (!ops)
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&ops->list);
+
+	drm_gem_for_each_gpuva(va, obj) {
+		op = gpuva_op_alloc(mgr);
+		if (!op) {
+			ret = -ENOMEM;
+			goto err_free_ops;
+		}
+
+		op->op = DRM_GPUVA_OP_UNMAP;
+		op->unmap.va = va;
+		list_add_tail(&op->entry, &ops->list);
+	}
+
+	return ops;
+
+err_free_ops:
+	drm_gpuva_ops_free(mgr, ops);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL(drm_gpuva_gem_unmap_ops_create);
+
+
+/**
+ * drm_gpuva_ops_free - free the given &drm_gpuva_ops
+ * @mgr: the &drm_gpuva_manager the ops were created for
+ * @ops: the &drm_gpuva_ops to free
+ *
+ * Frees the given &drm_gpuva_ops structure including all the ops associated
+ * with it.
+ */
+void
+drm_gpuva_ops_free(struct drm_gpuva_manager *mgr,
+		   struct drm_gpuva_ops *ops)
+{
+	struct drm_gpuva_op *op, *next;
+
+	drm_gpuva_for_each_op_safe(op, next, ops) {
+		list_del(&op->entry);
+
+		if (op->op == DRM_GPUVA_OP_REMAP) {
+			kfree(op->remap.prev);
+			kfree(op->remap.next);
+			kfree(op->remap.unmap);
+		}
+
+		gpuva_op_free(mgr, op);
+	}
+
+	kfree(ops);
+}
+EXPORT_SYMBOL(drm_gpuva_ops_free);
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index 5b86bb7603e7..9b6b4bd8d65a 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -104,6 +104,12 @@ enum drm_driver_feature {
	 * acceleration should be handled by two drivers that are connected using auxiliary bus.
	 */
	DRIVER_COMPUTE_ACCEL		= BIT(7),
+	/**
+	 * @DRIVER_GEM_GPUVA:
+	 *
+	 * Driver supports user defined GPU VA bindings for GEM objects.
+	 */
+	DRIVER_GEM_GPUVA		= BIT(8),

	/* IMPORTANT: Below are all the legacy flags, add new ones above. */
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index c76e651f2d44..c00c4fb73224 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -36,6 +36,8 @@
 #include
 #include
+#include
+#include
 #include
@@ -347,6 +349,17 @@ struct drm_gem_object {
	 */
	struct dma_resv _resv;

+	/**
+	 * @gpuva:
+	 *
+	 * Provides the list and list mutex of GPU VAs attached to this
+	 * GEM object.
+	 */
+	struct {
+		struct list_head list;
+		struct mutex mutex;
+	} gpuva;
+
	/**
	 * @funcs:
	 *
@@ -491,4 +504,66 @@ unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan,

 int drm_gem_evict(struct drm_gem_object *obj);

+/**
+ * drm_gem_gpuva_init - initialize the gpuva list of a GEM object
+ * @obj: the &drm_gem_object
+ *
+ * This initializes the &drm_gem_object's &drm_gpuva list and the mutex
+ * protecting it.
+ *
+ * Calling this function is only necessary for drivers intending to support the
+ * &drm_driver_feature DRIVER_GEM_GPUVA.
+ */
+static inline void drm_gem_gpuva_init(struct drm_gem_object *obj)
+{
+	INIT_LIST_HEAD(&obj->gpuva.list);
+	mutex_init(&obj->gpuva.mutex);
+}
+
+/**
+ * drm_gem_gpuva_lock - lock the GEM's gpuva list mutex
+ * @obj: the &drm_gem_object
+ *
+ * This locks the mutex protecting the &drm_gem_object's &drm_gpuva list.
+ */
+static inline void drm_gem_gpuva_lock(struct drm_gem_object *obj)
+{
+	mutex_lock(&obj->gpuva.mutex);
+}
+
+/**
+ * drm_gem_gpuva_unlock - unlock the GEM's gpuva list mutex
+ * @obj: the &drm_gem_object
+ *
+ * This unlocks the mutex protecting the &drm_gem_object's &drm_gpuva list.
+ */
+static inline void drm_gem_gpuva_unlock(struct drm_gem_object *obj)
+{
+	mutex_unlock(&obj->gpuva.mutex);
+}
+
+/**
+ * drm_gem_for_each_gpuva - iterator to walk over a list of gpuvas
+ * @entry__: &drm_gpuva structure to assign to in each iteration step
+ * @obj__: the &drm_gem_object the &drm_gpuvas to walk are associated with
+ *
+ * This iterator walks over all &drm_gpuva structures associated with the
+ * &drm_gem_object.
+ */
+#define drm_gem_for_each_gpuva(entry__, obj__) \
+	list_for_each_entry(entry__, &(obj__)->gpuva.list, gem.entry)
+
+/**
+ * drm_gem_for_each_gpuva_safe - iterator to safely walk over a list of gpuvas
+ * @entry__: &drm_gpuva structure to assign to in each iteration step
+ * @next__: &drm_gpuva structure to store the next step
+ * @obj__: the &drm_gem_object the &drm_gpuvas to walk are associated with
+ *
+ * This iterator walks over all &drm_gpuva structures associated with the
+ * &drm_gem_object. It is implemented with list_for_each_entry_safe(), hence
+ * it is safe against the removal of elements.
+ */
+#define drm_gem_for_each_gpuva_safe(entry__, next__, obj__) \
+	list_for_each_entry_safe(entry__, next__, &(obj__)->gpuva.list, gem.entry)
+
 #endif /* __DRM_GEM_H__ */
diff --git a/include/drm/drm_gpuva_mgr.h b/include/drm/drm_gpuva_mgr.h
new file mode 100644
index 000000000000..62169d850098
--- /dev/null
+++ b/include/drm/drm_gpuva_mgr.h
@@ -0,0 +1,681 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __DRM_GPUVA_MGR_H__
+#define __DRM_GPUVA_MGR_H__
+
+/*
+ * Copyright (c) 2022 Red Hat.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+struct drm_gpuva_manager;
+struct drm_gpuva_fn_ops;
+struct drm_gpuva_prealloc;
+
+/**
+ * enum drm_gpuva_flags - flags for struct drm_gpuva
+ */
+enum drm_gpuva_flags {
+	/**
+	 * @DRM_GPUVA_EVICTED:
+	 *
+	 * Flag indicating that the &drm_gpuva's backing GEM is evicted.
+	 */
+	DRM_GPUVA_EVICTED = (1 << 0),
+
+	/**
+	 * @DRM_GPUVA_SPARSE:
+	 *
+	 * Flag indicating that the &drm_gpuva is a sparse mapping.
+	 */
+	DRM_GPUVA_SPARSE = (1 << 1),
+
+	/**
+	 * @DRM_GPUVA_USERBITS: user defined bits
+	 */
+	DRM_GPUVA_USERBITS = (1 << 2),
+};
+
+/**
+ * struct drm_gpuva - structure to track a GPU VA mapping
+ *
+ * This structure represents a GPU VA mapping and is associated with a
+ * &drm_gpuva_manager.
+ *
+ * Typically, this structure is embedded in bigger driver structures.
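+ *
+ * For example (a purely hypothetical driver structure): ::
+ *
+ *	struct mydrv_mapping {
+ *		struct drm_gpuva va;
+ *		...	(driver specific per-mapping state)
+ *	};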
+ */
+struct drm_gpuva {
+	/**
+	 * @mgr: the &drm_gpuva_manager this object is associated with
+	 */
+	struct drm_gpuva_manager *mgr;
+
+	/**
+	 * @flags: the &drm_gpuva_flags for this mapping
+	 */
+	enum drm_gpuva_flags flags;
+
+	/**
+	 * @va: structure containing the address and range of the &drm_gpuva
+	 */
+	struct {
+		/**
+		 * @addr: the start address
+		 */
+		u64 addr;
+
+		/**
+		 * @range: the range
+		 */
+		u64 range;
+	} va;
+
+	/**
+	 * @gem: structure containing the &drm_gem_object and its offset
+	 */
+	struct {
+		/**
+		 * @offset: the offset within the &drm_gem_object
+		 */
+		u64 offset;
+
+		/**
+		 * @obj: the mapped &drm_gem_object
+		 */
+		struct drm_gem_object *obj;
+
+		/**
+		 * @entry: the &list_head to attach this object to a &drm_gem_object
+		 */
+		struct list_head entry;
+	} gem;
+};
+
+void drm_gpuva_link(struct drm_gpuva *va);
+void drm_gpuva_unlink(struct drm_gpuva *va);
+
+int drm_gpuva_insert(struct drm_gpuva_manager *mgr,
+		     struct drm_gpuva *va);
+int drm_gpuva_insert_prealloc(struct drm_gpuva_manager *mgr,
+			      struct drm_gpuva_prealloc *pa,
+			      struct drm_gpuva *va);
+void drm_gpuva_remove(struct drm_gpuva *va);
+
+struct drm_gpuva *drm_gpuva_find(struct drm_gpuva_manager *mgr,
+				 u64 addr, u64 range);
+struct drm_gpuva *drm_gpuva_find_first(struct drm_gpuva_manager *mgr,
+				       u64 addr, u64 range);
+struct drm_gpuva *drm_gpuva_find_prev(struct drm_gpuva_manager *mgr, u64 start);
+struct drm_gpuva *drm_gpuva_find_next(struct drm_gpuva_manager *mgr, u64 end);
+
+bool drm_gpuva_interval_empty(struct drm_gpuva_manager *mgr, u64 addr, u64 range);
+
+/**
+ * drm_gpuva_evict - sets whether the backing GEM of this &drm_gpuva is evicted
+ * @va: the &drm_gpuva to set the evict flag for
+ * @evict: indicates whether the &drm_gpuva is evicted
+ */
+static inline void drm_gpuva_evict(struct drm_gpuva *va, bool evict)
+{
+	if (evict)
+		va->flags |= DRM_GPUVA_EVICTED;
+	else
+		va->flags &= ~DRM_GPUVA_EVICTED;
+}
+
+/**
+ * drm_gpuva_evicted - indicates whether the backing BO of this &drm_gpuva
+ * is evicted
+ * @va: the &drm_gpuva to check
+ */
+static inline bool drm_gpuva_evicted(struct drm_gpuva *va)
+{
+	return va->flags & DRM_GPUVA_EVICTED;
+}
+
+/**
+ * struct drm_gpuva_manager - DRM GPU VA Manager
+ *
+ * The DRM GPU VA Manager keeps track of a GPU's virtual address space by using
+ * &maple_tree structures. Typically, this structure is embedded in bigger
+ * driver structures.
+ *
+ * Drivers can pass addresses and ranges in an arbitrary unit, e.g. bytes or
+ * pages.
+ *
+ * There should be one manager instance per GPU virtual address space.
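+ *
+ * For example (again a hypothetical driver structure): ::
+ *
+ *	struct mydrv_vm {
+ *		struct drm_gpuva_manager mgr;
+ *		...	(driver specific per-VM state)
+ *	};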
+ */
+struct drm_gpuva_manager {
+	/**
+	 * @name: the name of the DRM GPU VA space
+	 */
+	const char *name;
+
+	/**
+	 * @mm_start: start of the VA space
+	 */
+	u64 mm_start;
+
+	/**
+	 * @mm_range: length of the VA space
+	 */
+	u64 mm_range;
+
+	/**
+	 * @mtree: the &maple_tree to track GPU VA mappings
+	 */
+	struct maple_tree mtree;
+
+	/**
+	 * @kernel_alloc_node:
+	 *
+	 * &drm_gpuva representing the address space cutout reserved for
+	 * the kernel
+	 */
+	struct drm_gpuva kernel_alloc_node;
+
+	/**
+	 * @ops: &drm_gpuva_fn_ops providing the split/merge steps to drivers
+	 */
+	struct drm_gpuva_fn_ops *ops;
+};
+
+void drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
+			    const char *name,
+			    u64 start_offset, u64 range,
+			    u64 reserve_offset, u64 reserve_range,
+			    struct drm_gpuva_fn_ops *ops);
+void drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr);
+
+/**
+ * struct drm_gpuva_prealloc - holds a preallocated node for the
+ * &drm_gpuva_manager to insert a single new entry
+ */
+struct drm_gpuva_prealloc {
+	/**
+	 * @mas: the maple tree advanced state
+	 */
+	struct ma_state mas;
+};
+
+struct drm_gpuva_prealloc *drm_gpuva_prealloc_create(void);
+void drm_gpuva_prealloc_destroy(struct drm_gpuva_prealloc *pa);
+
+/**
+ * struct drm_gpuva_iterator - iterator for walking the internal (maple) tree
+ */
+struct drm_gpuva_iterator {
+	/**
+	 * @mas: the maple tree advanced state
+	 */
+	struct ma_state mas;
+
+	/**
+	 * @mgr: the &drm_gpuva_manager to iterate
+	 */
+	struct drm_gpuva_manager *mgr;
+};
+typedef struct drm_gpuva_iterator *drm_gpuva_state_t;
+
+void drm_gpuva_iter_remove(struct drm_gpuva_iterator *it);
+int drm_gpuva_iter_va_replace(struct drm_gpuva_iterator *it,
+			      struct drm_gpuva *va);
+
+static inline struct drm_gpuva *
+drm_gpuva_iter_find(struct drm_gpuva_iterator *it, unsigned long max)
+{
+	struct drm_gpuva *va;
+
+	mas_lock(&it->mas);
+	va = mas_find(&it->mas, max);
+	mas_unlock(&it->mas);
+
+	return va;
+}
+
+/**
+ * DRM_GPUVA_ITER - create an iterator structure to iterate the &drm_gpuva tree
+ * @name: the name of the &drm_gpuva_iterator to create
+ * @mgr__: the &drm_gpuva_manager to iterate
+ * @start: starting offset, the first entry will overlap this
+ */
+#define DRM_GPUVA_ITER(name, mgr__, start)			 \
+	struct drm_gpuva_iterator name = {			 \
+		.mas = MA_STATE_INIT(&(mgr__)->mtree, start, 0), \
+		.mgr = mgr__,					 \
+	}
+
+/**
+ * drm_gpuva_iter_for_each_range - iterator to walk over a range of entries
+ * @va__: the &drm_gpuva found for the current iteration
+ * @it__: the &drm_gpuva_iterator to use for walking
+ * @end__: ending offset, the last entry will start before this (but may overlap)
+ *
+ * This macro can be used to iterate &drm_gpuva objects.
+ *
+ * It is safe against the removal of elements using &drm_gpuva_iter_remove,
+ * however it is not safe against the removal of elements using
+ * &drm_gpuva_remove.
+ */
+#define drm_gpuva_iter_for_each_range(va__, it__, end__) \
+	while (((va__) = drm_gpuva_iter_find(&(it__), (end__) - 1)))
+
+/**
+ * drm_gpuva_iter_for_each - iterator to walk over all existing entries
+ * @va__: the &drm_gpuva found for the current iteration
+ * @it__: the &drm_gpuva_iterator to use for walking
+ *
+ * This macro can be used to iterate &drm_gpuva objects.
+ *
+ * In order to walk over all potentially existing entries, the
+ * &drm_gpuva_iterator must be initialized to start at
+ * &drm_gpuva_manager->mm_start or simply 0.
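+ *
+ * A minimal walking sketch: ::
+ *
+ *	DRM_GPUVA_ITER(it, mgr, 0);
+ *	struct drm_gpuva *va;
+ *
+ *	drm_gpuva_iter_for_each(va, it) {
+ *		...	(inspect va)
+ *	}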
+ *
+ * It is safe against the removal of elements using &drm_gpuva_iter_remove,
+ * however it is not safe against the removal of elements using
+ * &drm_gpuva_remove.
+ */
+#define drm_gpuva_iter_for_each(va__, it__) \
+	drm_gpuva_iter_for_each_range(va__, it__, (it__).mgr->mm_start + (it__).mgr->mm_range)
+
+/**
+ * enum drm_gpuva_op_type - GPU VA operation type
+ *
+ * Operations to alter the GPU VA mappings tracked by the &drm_gpuva_manager.
+ */
+enum drm_gpuva_op_type {
+	/**
+	 * @DRM_GPUVA_OP_MAP: the map op type
+	 */
+	DRM_GPUVA_OP_MAP,
+
+	/**
+	 * @DRM_GPUVA_OP_REMAP: the remap op type
+	 */
+	DRM_GPUVA_OP_REMAP,
+
+	/**
+	 * @DRM_GPUVA_OP_UNMAP: the unmap op type
+	 */
+	DRM_GPUVA_OP_UNMAP,
+
+	/**
+	 * @DRM_GPUVA_OP_PREFETCH: the prefetch op type
+	 */
+	DRM_GPUVA_OP_PREFETCH,
+};
+
+/**
+ * struct drm_gpuva_op_map - GPU VA map operation
+ *
+ * This structure represents a single map operation generated by the
+ * DRM GPU VA manager.
+ */
+struct drm_gpuva_op_map {
+	/**
+	 * @va: structure containing address and range of a map
+	 * operation
+	 */
+	struct {
+		/**
+		 * @addr: the base address of the new mapping
+		 */
+		u64 addr;
+
+		/**
+		 * @range: the range of the new mapping
+		 */
+		u64 range;
+	} va;
+
+	/**
+	 * @gem: structure containing the &drm_gem_object and its offset
+	 */
+	struct {
+		/**
+		 * @offset: the offset within the &drm_gem_object
+		 */
+		u64 offset;
+
+		/**
+		 * @obj: the &drm_gem_object to map
+		 */
+		struct drm_gem_object *obj;
+	} gem;
+};
+
+/**
+ * struct drm_gpuva_op_unmap - GPU VA unmap operation
+ *
+ * This structure represents a single unmap operation generated by the
+ * DRM GPU VA manager.
+ */
+struct drm_gpuva_op_unmap {
+	/**
+	 * @va: the &drm_gpuva to unmap
+	 */
+	struct drm_gpuva *va;
+
+	/**
+	 * @keep:
+	 *
+	 * Indicates whether this &drm_gpuva is physically contiguous with the
+	 * original mapping request.
+	 *
+	 * Optionally, if @keep is set, drivers may keep the actual page table
+	 * mappings for this &drm_gpuva, adding the missing page table entries
+	 * only and update the &drm_gpuva_manager accordingly.
+	 */
+	bool keep;
+};
+
+/**
+ * struct drm_gpuva_op_remap - GPU VA remap operation
+ *
+ * This represents a single remap operation generated by the DRM GPU VA manager.
+ *
+ * A remap operation is generated when an existing GPU VA mapping is split up
+ * by inserting a new GPU VA mapping or by partially unmapping existent
+ * mapping(s), hence it consists of a maximum of two map and one unmap
+ * operation.
+ *
+ * The @unmap operation takes care of removing the original existing mapping.
+ * @prev is used to remap the preceding part, @next the subsequent part.
+ *
+ * If either a new mapping's start address is aligned with the start address
+ * of the old mapping or the new mapping's end address is aligned with the
+ * end address of the old mapping, either @prev or @next is NULL.
+ *
+ * Note, the reason for a dedicated remap operation, rather than arbitrary
+ * unmap and map operations, is to give drivers the chance of extracting driver
+ * specific data for creating the new mappings from the unmap operation's
+ * &drm_gpuva structure which typically is embedded in larger driver specific
+ * structures.
+ */
+struct drm_gpuva_op_remap {
+	/**
+	 * @prev: the preceding part of a split mapping
+	 */
+	struct drm_gpuva_op_map *prev;
+
+	/**
+	 * @next: the subsequent part of a split mapping
+	 */
+	struct drm_gpuva_op_map *next;
+
+	/**
+	 * @unmap: the unmap operation for the original existing mapping
+	 */
+	struct drm_gpuva_op_unmap *unmap;
+};
+
+/**
+ * struct drm_gpuva_op_prefetch - GPU VA prefetch operation
+ *
+ * This structure represents a single prefetch operation generated by the
+ * DRM GPU VA manager.
+ */
+struct drm_gpuva_op_prefetch {
+	/**
+	 * @va: the &drm_gpuva to prefetch
+	 */
+	struct drm_gpuva *va;
+};
+
+/**
+ * struct drm_gpuva_op - GPU VA operation
+ *
+ * This structure represents a single generic operation.
+ *
+ * The particular type of the operation is defined by @op.
+ */
+struct drm_gpuva_op {
+	/**
+	 * @entry:
+	 *
+	 * The &list_head used to distribute instances of this struct within
+	 * &drm_gpuva_ops.
+	 */
+	struct list_head entry;
+
+	/**
+	 * @op: the type of the operation
+	 */
+	enum drm_gpuva_op_type op;
+
+	union {
+		/**
+		 * @map: the map operation
+		 */
+		struct drm_gpuva_op_map map;
+
+		/**
+		 * @remap: the remap operation
+		 */
+		struct drm_gpuva_op_remap remap;
+
+		/**
+		 * @unmap: the unmap operation
+		 */
+		struct drm_gpuva_op_unmap unmap;
+
+		/**
+		 * @prefetch: the prefetch operation
+		 */
+		struct drm_gpuva_op_prefetch prefetch;
+	};
+};
+
+/**
+ * struct drm_gpuva_ops - wraps a list of &drm_gpuva_op
+ */
+struct drm_gpuva_ops {
+	/**
+	 * @list: the &list_head
+	 */
+	struct list_head list;
+};
+
+/**
+ * drm_gpuva_for_each_op - iterator to walk over &drm_gpuva_ops
+ * @op: &drm_gpuva_op to assign in each iteration step
+ * @ops: &drm_gpuva_ops to walk
+ *
+ * This iterator walks over all ops within a given list of operations.
+ */
+#define drm_gpuva_for_each_op(op, ops) list_for_each_entry(op, &(ops)->list, entry)
+
+/**
+ * drm_gpuva_for_each_op_safe - iterator to safely walk over &drm_gpuva_ops
+ * @op: &drm_gpuva_op to assign in each iteration step
+ * @next: &drm_gpuva_op to store the next step
+ * @ops: &drm_gpuva_ops to walk
+ *
+ * This iterator walks over all ops within a given list of operations. It is
+ * implemented with list_for_each_entry_safe(), so it is safe against the
+ * removal of elements.
+ */
+#define drm_gpuva_for_each_op_safe(op, next, ops) \
+	list_for_each_entry_safe(op, next, &(ops)->list, entry)
+
+/**
+ * drm_gpuva_for_each_op_from_reverse - iterate backwards from the given point
+ * @op: &drm_gpuva_op to assign in each iteration step
+ * @ops: &drm_gpuva_ops to walk
+ *
+ * This iterator walks over all ops within a given list of operations beginning
+ * from the given operation in reverse order.
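+ *
+ * This can be useful, e.g., to unwind already processed operations when an
+ * error occurs; a sketch: ::
+ *
+ *	drm_gpuva_for_each_op_from_reverse(op, ops) {
+ *		...	(revert the changes made for op)
+ *	}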
+ */
+#define drm_gpuva_for_each_op_from_reverse(op, ops) \
+	list_for_each_entry_from_reverse(op, &(ops)->list, entry)
+
+/**
+ * drm_gpuva_first_op - returns the first &drm_gpuva_op from &drm_gpuva_ops
+ * @ops: the &drm_gpuva_ops to get the first &drm_gpuva_op from
+ */
+#define drm_gpuva_first_op(ops) \
+	list_first_entry(&(ops)->list, struct drm_gpuva_op, entry)
+
+/**
+ * drm_gpuva_last_op - returns the last &drm_gpuva_op from &drm_gpuva_ops
+ * @ops: the &drm_gpuva_ops to get the last &drm_gpuva_op from
+ */
+#define drm_gpuva_last_op(ops) \
+	list_last_entry(&(ops)->list, struct drm_gpuva_op, entry)
+
+/**
+ * drm_gpuva_prev_op - previous &drm_gpuva_op in the list
+ * @op: the current &drm_gpuva_op
+ */
+#define drm_gpuva_prev_op(op) list_prev_entry(op, entry)
+
+/**
+ * drm_gpuva_next_op - next &drm_gpuva_op in the list
+ * @op: the current &drm_gpuva_op
+ */
+#define drm_gpuva_next_op(op) list_next_entry(op, entry)
+
+struct drm_gpuva_ops *
+drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr,
+			    u64 addr, u64 range,
+			    struct drm_gem_object *obj, u64 offset);
+struct drm_gpuva_ops *
+drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr,
+			      u64 addr, u64 range);
+
+struct drm_gpuva_ops *
+drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr,
+			      u64 addr, u64 range);
+
+struct drm_gpuva_ops *
+drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
+			       struct drm_gem_object *obj);
+
+void drm_gpuva_ops_free(struct drm_gpuva_manager *mgr,
+			struct drm_gpuva_ops *ops);
+
+/**
+ * struct drm_gpuva_fn_ops - callbacks for split/merge steps
+ *
+ * This structure defines the callbacks used by &drm_gpuva_sm_map and
+ * &drm_gpuva_sm_unmap to provide the split/merge steps for map and unmap
+ * operations to drivers.
+ */
+struct drm_gpuva_fn_ops {
+	/**
+	 * @op_alloc: called when the &drm_gpuva_manager allocates
+	 * a struct drm_gpuva_op
+	 *
+	 * Some drivers may want to embed struct drm_gpuva_op into driver
+	 * specific structures. By implementing this callback drivers can
+	 * allocate memory accordingly.
+	 *
+	 * This callback is optional.
+	 */
+	struct drm_gpuva_op *(*op_alloc)(void);
+
+	/**
+	 * @op_free: called when the &drm_gpuva_manager frees a
+	 * struct drm_gpuva_op
+	 *
+	 * Some drivers may want to embed struct drm_gpuva_op into driver
+	 * specific structures. By implementing this callback drivers can
+	 * free the previously allocated memory accordingly.
+	 *
+	 * This callback is optional.
+	 */
+	void (*op_free)(struct drm_gpuva_op *op);
+
+	/**
+	 * @sm_step_map: called from &drm_gpuva_sm_map to finally insert the
+	 * mapping once all previous steps were completed
+	 *
+	 * The &priv pointer matches the one the driver passed to
+	 * &drm_gpuva_sm_map or &drm_gpuva_sm_unmap, respectively.
+	 *
+	 * Can be NULL if &drm_gpuva_sm_map is not used.
+	 */
+	int (*sm_step_map)(struct drm_gpuva_op *op, void *priv);
+
+	/**
+	 * @sm_step_remap: called from &drm_gpuva_sm_map and
+	 * &drm_gpuva_sm_unmap to split up an existent mapping
+	 *
+	 * This callback is called when an existent mapping needs to be split
+	 * up. This is the case when either a newly requested mapping overlaps
+	 * or is enclosed by an existent mapping or a partial unmap of an
+	 * existent mapping is requested.
+	 *
+	 * From this callback, drivers must not modify the GPUVA space with
+	 * accessors that do not take a &drm_gpuva_state as argument.
+	 *
+	 * The &priv pointer matches the one the driver passed to
+	 * &drm_gpuva_sm_map or &drm_gpuva_sm_unmap, respectively.
+	 *
+	 * Can be NULL if neither &drm_gpuva_sm_map nor &drm_gpuva_sm_unmap is
+	 * used.
+	 */
+	int (*sm_step_remap)(struct drm_gpuva_op *op,
+			     drm_gpuva_state_t state,
+			     void *priv);
+
+	/**
+	 * @sm_step_unmap: called from &drm_gpuva_sm_map and
+	 * &drm_gpuva_sm_unmap to unmap an existent mapping
+	 *
+	 * This callback is called when an existent mapping needs to be
+	 * unmapped. This is the case when either a newly requested mapping
+	 * encloses an existent mapping or an unmap of an existent mapping is
+	 * requested.
+	 *
+	 * From this callback, drivers must not modify the GPUVA space with
+	 * accessors that do not take a &drm_gpuva_state as argument.
+	 *
+	 * The &priv pointer matches the one the driver passed to
+	 * &drm_gpuva_sm_map or &drm_gpuva_sm_unmap, respectively.
+	 *
+	 * Can be NULL if neither &drm_gpuva_sm_map nor &drm_gpuva_sm_unmap is
+	 * used.
+	 */
+	int (*sm_step_unmap)(struct drm_gpuva_op *op,
+			     drm_gpuva_state_t state,
+			     void *priv);
+};
+
+int drm_gpuva_sm_map(struct drm_gpuva_manager *mgr, void *priv,
+		     u64 addr, u64 range,
+		     struct drm_gem_object *obj, u64 offset);
+
+int drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, void *priv,
+		       u64 addr, u64 range);
+
+int drm_gpuva_map(struct drm_gpuva_manager *mgr,
+		  struct drm_gpuva_prealloc *pa,
+		  struct drm_gpuva *va);
+int drm_gpuva_remap(drm_gpuva_state_t state,
+		    struct drm_gpuva *prev,
+		    struct drm_gpuva *next);
+void drm_gpuva_unmap(drm_gpuva_state_t state);
+
+#endif /* __DRM_GPUVA_MGR_H__ */

From patchwork Tue Apr 4 01:27:31 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Danilo Krummrich
X-Patchwork-Id: 78813
From patchwork Tue Apr 4 01:27:31 2023
X-Patchwork-Submitter: Danilo Krummrich
X-Patchwork-Id: 78813
From: Danilo Krummrich
Subject: [PATCH drm-next v3 05/15] drm: debugfs: provide infrastructure to dump a DRM GPU VA space
Date: Tue, 4 Apr 2023 03:27:31 +0200
Message-Id: <20230404012741.116502-6-dakr@redhat.com>

This commit adds a function to dump a DRM GPU VA space and a macro for
drivers to register the struct drm_info_list 'gpuvas' entry.

Most drivers will likely maintain one DRM GPU VA space per struct
drm_file, but there might also be drivers without a fixed relation
between DRM GPU VA spaces and a DRM core infrastructure; hence, we need
the indirection via the driver iterating its maintained DRM GPU VA
spaces.

Signed-off-by: Danilo Krummrich
---
 drivers/gpu/drm/drm_debugfs.c | 41 +++++++++++++++++++++++++++++++++++
 include/drm/drm_debugfs.h     | 25 +++++++++++++++++++++
 2 files changed, 66 insertions(+)

diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c
index 4855230ba2c6..82180fb1c200 100644
--- a/drivers/gpu/drm/drm_debugfs.c
+++ b/drivers/gpu/drm/drm_debugfs.c
@@ -39,6 +39,7 @@
 #include
 #include
 #include
+#include <drm/drm_gpuva_mgr.h>

 #include "drm_crtc_internal.h"
 #include "drm_internal.h"
@@ -175,6 +176,46 @@ static const struct file_operations drm_debugfs_fops = {
 	.release = single_release,
 };

+/**
+ * drm_debugfs_gpuva_info - dump the given DRM GPU VA space
+ * @m: pointer to the &seq_file to write
+ * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ *
+ * Dumps the GPU VA mappings of a given DRM GPU VA manager.
+ *
+ * For each DRM GPU VA space drivers should call this function from their
+ * &drm_info_list's show callback.
+ *
+ * Returns: 0 on success, -ENODEV if the &mgr is not initialized
+ */
+int drm_debugfs_gpuva_info(struct seq_file *m,
+			   struct drm_gpuva_manager *mgr)
+{
+	DRM_GPUVA_ITER(it, mgr, 0);
+	struct drm_gpuva *va, *kva = &mgr->kernel_alloc_node;
+
+	if (!mgr->name)
+		return -ENODEV;
+
+	seq_printf(m, "DRM GPU VA space (%s) [0x%016llx;0x%016llx]\n",
+		   mgr->name, mgr->mm_start, mgr->mm_start + mgr->mm_range);
+	seq_printf(m, "Kernel reserved node [0x%016llx;0x%016llx]\n",
+		   kva->va.addr, kva->va.addr + kva->va.range);
+	seq_puts(m, "\n");
+	seq_puts(m, " VAs | start              | range              | end                | object             | object offset\n");
+	seq_puts(m, "-------------------------------------------------------------------------------------------------------------\n");
+	drm_gpuva_iter_for_each(va, it) {
+		if (unlikely(va == &mgr->kernel_alloc_node))
+			continue;
+
+		seq_printf(m, "     | 0x%016llx | 0x%016llx | 0x%016llx | 0x%016llx | 0x%016llx\n",
+			   va->va.addr, va->va.range, va->va.addr + va->va.range,
+			   (u64)va->gem.obj, va->gem.offset);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_debugfs_gpuva_info);

 /**
  * drm_debugfs_create_files - Initialize a given set of debugfs files for DRM
diff --git a/include/drm/drm_debugfs.h b/include/drm/drm_debugfs.h
index 7616f457ce70..cb2c1956a214 100644
--- a/include/drm/drm_debugfs.h
+++ b/include/drm/drm_debugfs.h
@@ -34,6 +34,22 @@
 #include
 #include

+
+#include <drm/drm_gpuva_mgr.h>
+
+/**
+ * DRM_DEBUGFS_GPUVA_INFO - &drm_info_list entry to dump a GPU VA space
+ * @show: the &drm_info_list's show callback
+ * @data: driver private data
+ *
+ * Drivers should use this macro to define a &drm_info_list entry to provide a
+ * debugfs file for dumping the GPU VA space regions and mappings.
+ *
+ * For each DRM GPU VA space drivers should call drm_debugfs_gpuva_info() from
+ * their @show callback.
+ */
+#define DRM_DEBUGFS_GPUVA_INFO(show, data) {"gpuvas", show, DRIVER_GEM_GPUVA, data}
+
 /**
  * struct drm_info_list - debugfs info list entry
  *
@@ -134,6 +150,9 @@ void drm_debugfs_add_file(struct drm_device *dev, const char *name,
 void drm_debugfs_add_files(struct drm_device *dev,
 			   const struct drm_debugfs_info *files, int count);
+
+int drm_debugfs_gpuva_info(struct seq_file *m,
+			   struct drm_gpuva_manager *mgr);
 #else
 static inline void drm_debugfs_create_files(const struct drm_info_list *files,
 					    int count, struct dentry *root,
@@ -155,6 +174,12 @@ static inline void drm_debugfs_add_files(struct drm_device *dev,
 					 const struct drm_debugfs_info *files,
 					 int count)
 {}
+
+static inline int drm_debugfs_gpuva_info(struct seq_file *m,
+					 struct drm_gpuva_manager *mgr)
+{
+	return 0;
+}
 #endif

 #endif /* _DRM_DEBUGFS_H_ */
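Tying the two halves together, driver-side registration might look like the
following sketch. my_device, to_my_device() and the single-manager layout are
illustrative assumptions; a driver maintaining several VA spaces would iterate
them in the show callback instead.

#include <drm/drm_debugfs.h>
#include <drm/drm_gpuva_mgr.h>

static int my_gpuvas_show(struct seq_file *m, void *data)
{
	struct drm_info_node *node = m->private;
	struct drm_device *dev = node->minor->dev;
	struct my_device *mdev = to_my_device(dev); /* hypothetical */

	/* One VA space in this sketch; drivers with more iterate here. */
	return drm_debugfs_gpuva_info(m, &mdev->gpuva_mgr);
}

static const struct drm_info_list my_debugfs_list[] = {
	DRM_DEBUGFS_GPUVA_INFO(my_gpuvas_show, NULL),
};

/* Registered from the driver's debugfs_init hook: */
static void my_debugfs_init(struct drm_minor *minor)
{
	drm_debugfs_create_files(my_debugfs_list,
				 ARRAY_SIZE(my_debugfs_list),
				 minor->debugfs_root, minor);
}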
From patchwork Tue Apr 4 01:27:32 2023
X-Patchwork-Submitter: Danilo Krummrich
X-Patchwork-Id: 78811
From: Danilo Krummrich
Subject: [PATCH drm-next v3 06/15] drm/nouveau: new VM_BIND uapi interfaces
Date: Tue, 4 Apr 2023 03:27:32 +0200
Message-Id: <20230404012741.116502-7-dakr@redhat.com>

This commit provides the interfaces for the new UAPI motivated by the
Vulkan API. It allows user mode drivers (UMDs) to:

1) Initialize a GPU virtual address (VA) space via the new
   DRM_IOCTL_NOUVEAU_VM_INIT ioctl. UMDs can provide a kernel reserved
   VA area.

2) Bind and unbind GPU VA space mappings via the new
   DRM_IOCTL_NOUVEAU_VM_BIND ioctl.

3) Execute push buffers with the new DRM_IOCTL_NOUVEAU_EXEC ioctl.

Both DRM_IOCTL_NOUVEAU_VM_BIND and DRM_IOCTL_NOUVEAU_EXEC support
asynchronous processing with DRM syncobjs as synchronization mechanism.
DRM_IOCTL_NOUVEAU_VM_BIND processes synchronously by default, while
DRM_IOCTL_NOUVEAU_EXEC supports asynchronous processing only.

Co-authored-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
 Documentation/gpu/driver-uapi.rst |   8 ++
 include/uapi/drm/nouveau_drm.h    | 209 ++++++++++++++++++++++++++++++
 2 files changed, 217 insertions(+)

diff --git a/Documentation/gpu/driver-uapi.rst b/Documentation/gpu/driver-uapi.rst
index 4411e6919a3d..9c7ca6e33a68 100644
--- a/Documentation/gpu/driver-uapi.rst
+++ b/Documentation/gpu/driver-uapi.rst
@@ -6,3 +6,11 @@ drm/i915 uAPI
 =============

 .. kernel-doc:: include/uapi/drm/i915_drm.h
+
+drm/nouveau uAPI
+================
+
+VM_BIND / EXEC uAPI
+-------------------
+
+.. kernel-doc:: include/uapi/drm/nouveau_drm.h
diff --git a/include/uapi/drm/nouveau_drm.h b/include/uapi/drm/nouveau_drm.h
index 853a327433d3..4d3a70529637 100644
--- a/include/uapi/drm/nouveau_drm.h
+++ b/include/uapi/drm/nouveau_drm.h
@@ -126,6 +126,209 @@ struct drm_nouveau_gem_cpu_fini {
 	__u32 handle;
 };

+/**
+ * struct drm_nouveau_sync - sync object
+ *
+ * This structure serves as synchronization mechanism for (potentially)
+ * asynchronous operations such as EXEC or VM_BIND.
+ */
+struct drm_nouveau_sync {
+	/**
+	 * @flags: the flags for a sync object
+	 *
+	 * The first 8 bits are used to determine the type of the sync object.
+	 */
+	__u32 flags;
+#define DRM_NOUVEAU_SYNC_SYNCOBJ		0x0
+#define DRM_NOUVEAU_SYNC_TIMELINE_SYNCOBJ	0x1
+#define DRM_NOUVEAU_SYNC_TYPE_MASK		0xf
+	/**
+	 * @handle: the handle of the sync object
+	 */
+	__u32 handle;
+	/**
+	 * @timeline_value:
+	 *
+	 * The timeline point of the sync object in case the syncobj is of
+	 * type DRM_NOUVEAU_SYNC_TIMELINE_SYNCOBJ.
+	 */
+	__u64 timeline_value;
+};
+
+/**
+ * struct drm_nouveau_vm_init - GPU VA space init structure
+ *
+ * Used to initialize the GPU's VA space for a user client, telling the kernel
+ * which portion of the VA space is managed by the UMD and kernel respectively.
+ */
+struct drm_nouveau_vm_init {
+	/**
+	 * @unmanaged_addr: start address of the kernel managed VA space region
+	 */
+	__u64 unmanaged_addr;
+	/**
+	 * @unmanaged_size: size of the kernel managed VA space region in bytes
+	 */
+	__u64 unmanaged_size;
+};
+
+/**
+ * struct drm_nouveau_vm_bind_op - VM_BIND operation
+ *
+ * This structure represents a single VM_BIND operation. UMDs should pass
+ * an array of this structure via struct drm_nouveau_vm_bind's &op_ptr field.
+ */
+struct drm_nouveau_vm_bind_op {
+	/**
+	 * @op: the operation type
+	 */
+	__u32 op;
+/**
+ * @DRM_NOUVEAU_VM_BIND_OP_MAP:
+ *
+ * Map a GEM object to the GPU's VA space. Optionally, the
+ * &DRM_NOUVEAU_VM_BIND_SPARSE flag can be passed to instruct the kernel to
+ * create sparse mappings for the given range.
+ */
+#define DRM_NOUVEAU_VM_BIND_OP_MAP 0x0
+/**
+ * @DRM_NOUVEAU_VM_BIND_OP_UNMAP:
+ *
+ * Unmap an existing mapping in the GPU's VA space. If the region the mapping
+ * is located in is a sparse region, new sparse mappings are created where the
+ * unmapped (memory backed) mapping was mapped previously. To remove a sparse
+ * region the &DRM_NOUVEAU_VM_BIND_SPARSE flag must be set.
+ */
+#define DRM_NOUVEAU_VM_BIND_OP_UNMAP 0x1
+	/**
+	 * @flags: the flags for a &drm_nouveau_vm_bind_op
+	 */
+	__u32 flags;
+/**
+ * @DRM_NOUVEAU_VM_BIND_SPARSE:
+ *
+ * Indicates that an allocated VA space region should be sparse.
+ */
+#define DRM_NOUVEAU_VM_BIND_SPARSE (1 << 8)
+	/**
+	 * @handle: the handle of the DRM GEM object to map
+	 */
+	__u32 handle;
+	/**
+	 * @pad: 32 bit padding, should be 0
+	 */
+	__u32 pad;
+	/**
+	 * @addr:
+	 *
+	 * the address the VA space region or (memory backed) mapping should be
+	 * mapped to
+	 */
+	__u64 addr;
+	/**
+	 * @bo_offset: the offset within the BO backing the mapping
+	 */
+	__u64 bo_offset;
+	/**
+	 * @range: the size of the requested mapping in bytes
+	 */
+	__u64 range;
+};
+
+/**
+ * struct drm_nouveau_vm_bind - structure for DRM_IOCTL_NOUVEAU_VM_BIND
+ */
+struct drm_nouveau_vm_bind {
+	/**
+	 * @op_count: the number of &drm_nouveau_vm_bind_op
+	 */
+	__u32 op_count;
+	/**
+	 * @flags: the flags for a &drm_nouveau_vm_bind ioctl
+	 */
+	__u32 flags;
+/**
+ * @DRM_NOUVEAU_VM_BIND_RUN_ASYNC:
+ *
+ * Indicates that the given VM_BIND operation should be executed asynchronously
+ * by the kernel.
+ *
+ * If this flag is not supplied the kernel executes the associated operations
+ * synchronously and doesn't accept any &drm_nouveau_sync objects.
+ */
+#define DRM_NOUVEAU_VM_BIND_RUN_ASYNC 0x1
+	/**
+	 * @wait_count: the number of wait &drm_nouveau_syncs
+	 */
+	__u32 wait_count;
+	/**
+	 * @sig_count: the number of &drm_nouveau_syncs to signal when finished
+	 */
+	__u32 sig_count;
+	/**
+	 * @wait_ptr: pointer to &drm_nouveau_syncs to wait for
+	 */
+	__u64 wait_ptr;
+	/**
+	 * @sig_ptr: pointer to &drm_nouveau_syncs to signal when finished
+	 */
+	__u64 sig_ptr;
+	/**
+	 * @op_ptr: pointer to the &drm_nouveau_vm_bind_ops to execute
+	 */
+	__u64 op_ptr;
+};
+
+/**
+ * struct drm_nouveau_exec_push - EXEC push operation
+ *
+ * This structure represents a single EXEC push operation. UMDs should pass an
+ * array of this structure via struct drm_nouveau_exec's &push_ptr field.
+ */
+struct drm_nouveau_exec_push {
+	/**
+	 * @va: the virtual address of the push buffer mapping
+	 */
+	__u64 va;
+	/**
+	 * @va_len: the length of the push buffer mapping
+	 */
+	__u64 va_len;
+};
+
+/**
+ * struct drm_nouveau_exec - structure for DRM_IOCTL_NOUVEAU_EXEC
+ */
+struct drm_nouveau_exec {
+	/**
+	 * @channel: the channel to execute the push buffer in
+	 */
+	__u32 channel;
+	/**
+	 * @push_count: the number of &drm_nouveau_exec_push ops
+	 */
+	__u32 push_count;
+	/**
+	 * @wait_count: the number of wait &drm_nouveau_syncs
+	 */
+	__u32 wait_count;
+	/**
+	 * @sig_count: the number of &drm_nouveau_syncs to signal when finished
+	 */
+	__u32 sig_count;
+	/**
+	 * @wait_ptr: pointer to &drm_nouveau_syncs to wait for
+	 */
+	__u64 wait_ptr;
+	/**
+	 * @sig_ptr: pointer to &drm_nouveau_syncs to signal when finished
+	 */
+	__u64 sig_ptr;
+	/**
+	 * @push_ptr: pointer to &drm_nouveau_exec_push ops
+	 */
+	__u64 push_ptr;
+};
+
 #define DRM_NOUVEAU_GETPARAM           0x00 /* deprecated */
 #define DRM_NOUVEAU_SETPARAM           0x01 /* deprecated */
 #define DRM_NOUVEAU_CHANNEL_ALLOC      0x02 /* deprecated */
@@ -136,6 +339,9 @@ struct drm_nouveau_gem_cpu_fini {
 #define DRM_NOUVEAU_NVIF               0x07
 #define DRM_NOUVEAU_SVM_INIT           0x08
 #define DRM_NOUVEAU_SVM_BIND           0x09
+#define DRM_NOUVEAU_VM_INIT            0x10
+#define DRM_NOUVEAU_VM_BIND            0x11
+#define DRM_NOUVEAU_EXEC               0x12
 #define DRM_NOUVEAU_GEM_NEW            0x40
 #define DRM_NOUVEAU_GEM_PUSHBUF        0x41
 #define DRM_NOUVEAU_GEM_CPU_PREP       0x42
@@ -197,6 +403,9 @@ struct drm_nouveau_svm_bind {
 #define DRM_IOCTL_NOUVEAU_GEM_CPU_FINI       DRM_IOW (DRM_COMMAND_BASE + DRM_NOUVEAU_GEM_CPU_FINI, struct drm_nouveau_gem_cpu_fini)
 #define DRM_IOCTL_NOUVEAU_GEM_INFO           DRM_IOWR(DRM_COMMAND_BASE + DRM_NOUVEAU_GEM_INFO, struct drm_nouveau_gem_info)

+#define DRM_IOCTL_NOUVEAU_VM_INIT            DRM_IOWR(DRM_COMMAND_BASE + DRM_NOUVEAU_VM_INIT, struct drm_nouveau_vm_init)
+#define DRM_IOCTL_NOUVEAU_VM_BIND            DRM_IOWR(DRM_COMMAND_BASE + DRM_NOUVEAU_VM_BIND, struct drm_nouveau_vm_bind)
+#define DRM_IOCTL_NOUVEAU_EXEC               DRM_IOWR(DRM_COMMAND_BASE + DRM_NOUVEAU_EXEC, struct drm_nouveau_exec)
 #if defined(__cplusplus)
 }
 #endif
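To make the intended calling convention concrete, here is a hedged userspace
sketch of a single synchronous map through DRM_IOCTL_NOUVEAU_VM_BIND. The
helper name is made up and error handling is elided; the asynchronous path
with DRM_NOUVEAU_VM_BIND_RUN_ASYNC plus drm_nouveau_sync arrays works
analogously.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/nouveau_drm.h>

/* Map one GEM object at a fixed GPU VA, synchronously. */
static int nouveau_vm_bind_map(int fd, uint32_t handle,
			       uint64_t gpu_addr, uint64_t bo_offset,
			       uint64_t size)
{
	struct drm_nouveau_vm_bind_op op;
	struct drm_nouveau_vm_bind req;

	memset(&op, 0, sizeof(op));
	op.op = DRM_NOUVEAU_VM_BIND_OP_MAP;
	op.handle = handle;
	op.addr = gpu_addr;
	op.bo_offset = bo_offset;
	op.range = size;

	memset(&req, 0, sizeof(req));
	req.op_count = 1;
	req.op_ptr = (uintptr_t)&op;
	/* No DRM_NOUVEAU_VM_BIND_RUN_ASYNC flag: the kernel processes the
	 * bind synchronously and no drm_nouveau_sync objects are allowed. */

	return ioctl(fd, DRM_IOCTL_NOUVEAU_VM_BIND, &req);
}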
From patchwork Tue Apr 4 01:27:33 2023
X-Patchwork-Submitter: Danilo Krummrich
X-Patchwork-Id: 78818
From: Danilo Krummrich
Subject: [PATCH drm-next v3 07/15] drm/nouveau: get vmm via nouveau_cli_vmm()
Date: Tue, 4 Apr 2023 03:27:33 +0200
Message-Id: <20230404012741.116502-8-dakr@redhat.com>

Provide a getter function for the client's current vmm context. Since
we'll add a new (u)vmm context for UMD bindings in subsequent commits,
this will keep the code clean.
Signed-off-by: Danilo Krummrich
---
 drivers/gpu/drm/nouveau/nouveau_bo.c   | 2 +-
 drivers/gpu/drm/nouveau/nouveau_chan.c | 2 +-
 drivers/gpu/drm/nouveau/nouveau_drv.h  | 9 +++++++++
 drivers/gpu/drm/nouveau/nouveau_gem.c  | 6 +++---
 4 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index c2ec91cc845d..7724fe63067d 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -204,7 +204,7 @@ nouveau_bo_alloc(struct nouveau_cli *cli, u64 *size, int *align, u32 domain,
 	struct nouveau_drm *drm = cli->drm;
 	struct nouveau_bo *nvbo;
 	struct nvif_mmu *mmu = &cli->mmu;
-	struct nvif_vmm *vmm = cli->svm.cli ? &cli->svm.vmm : &cli->vmm.vmm;
+	struct nvif_vmm *vmm = &nouveau_cli_vmm(cli)->vmm;
 	int i, pi = -1;

 	if (!*size) {
diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c b/drivers/gpu/drm/nouveau/nouveau_chan.c
index e648ecd0c1a0..1068abe41024 100644
--- a/drivers/gpu/drm/nouveau/nouveau_chan.c
+++ b/drivers/gpu/drm/nouveau/nouveau_chan.c
@@ -148,7 +148,7 @@ nouveau_channel_prep(struct nouveau_drm *drm, struct nvif_device *device,
 	chan->device = device;
 	chan->drm = drm;
-	chan->vmm = cli->svm.cli ? &cli->svm : &cli->vmm;
+	chan->vmm = nouveau_cli_vmm(cli);
 	atomic_set(&chan->killed, 0);

 	/* allocate memory for dma push buffer */
diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h b/drivers/gpu/drm/nouveau/nouveau_drv.h
index b5de312a523f..81350e685b50 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drv.h
+++ b/drivers/gpu/drm/nouveau/nouveau_drv.h
@@ -112,6 +112,15 @@ struct nouveau_cli_work {
 	struct dma_fence_cb cb;
 };

+static inline struct nouveau_vmm *
+nouveau_cli_vmm(struct nouveau_cli *cli)
+{
+	if (cli->svm.cli)
+		return &cli->svm;
+
+	return &cli->vmm;
+}
+
 void nouveau_cli_work_queue(struct nouveau_cli *, struct dma_fence *,
 			    struct nouveau_cli_work *);

diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index f77e44958037..08689ced4f6a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -103,7 +103,7 @@ nouveau_gem_object_open(struct drm_gem_object *gem, struct drm_file *file_priv)
 	struct nouveau_bo *nvbo = nouveau_gem_object(gem);
 	struct nouveau_drm *drm = nouveau_bdev(nvbo->bo.bdev);
 	struct device *dev = drm->dev->dev;
-	struct nouveau_vmm *vmm = cli->svm.cli ? &cli->svm : &cli->vmm;
+	struct nouveau_vmm *vmm = nouveau_cli_vmm(cli);
 	struct nouveau_vma *vma;
 	int ret;

@@ -180,7 +180,7 @@ nouveau_gem_object_close(struct drm_gem_object *gem, struct drm_file *file_priv)
 	struct nouveau_bo *nvbo = nouveau_gem_object(gem);
 	struct nouveau_drm *drm = nouveau_bdev(nvbo->bo.bdev);
 	struct device *dev = drm->dev->dev;
-	struct nouveau_vmm *vmm = cli->svm.cli ? &cli->svm : & cli->vmm;
+	struct nouveau_vmm *vmm = nouveau_cli_vmm(cli);
 	struct nouveau_vma *vma;
 	int ret;

@@ -269,7 +269,7 @@ nouveau_gem_info(struct drm_file *file_priv, struct drm_gem_object *gem,
 {
 	struct nouveau_cli *cli = nouveau_cli(file_priv);
 	struct nouveau_bo *nvbo = nouveau_gem_object(gem);
-	struct nouveau_vmm *vmm = cli->svm.cli ? &cli->svm : &cli->vmm;
+	struct nouveau_vmm *vmm = nouveau_cli_vmm(cli);
 	struct nouveau_vma *vma;

 	if (is_power_of_2(nvbo->valid_domains))
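The value of the getter is that the planned (u)vmm context only has to be
wired up in one place. A hedged sketch of how it might look once the uvmm
context from the later patches in this series exists; the uvmm field naming
is an assumption, not this patch's code:

static inline struct nouveau_vmm *
nouveau_cli_vmm(struct nouveau_cli *cli)
{
	/* Hypothetical uvmm branch; 'cli->uvmm' naming is an assumption. */
	if (cli->uvmm.vmm.cli)
		return &cli->uvmm.vmm;

	if (cli->svm.cli)
		return &cli->svm;

	return &cli->vmm;
}

Every call site converted by the diff above would then pick up the new
context automatically, which is the stated point of this commit.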
From patchwork Tue Apr 4 01:27:34 2023
X-Patchwork-Submitter: Danilo Krummrich
X-Patchwork-Id: 78810
From: Danilo Krummrich
Subject: [PATCH drm-next v3 08/15] drm/nouveau: bo: initialize GEM GPU VA interface
Date: Tue, 4 Apr 2023 03:27:34 +0200
Message-Id: <20230404012741.116502-9-dakr@redhat.com>

Initialize the GEM's DRM GPU VA manager interface in preparation for the
(u)vmm implementation, provided by subsequent commits, to make use of it.

Signed-off-by: Danilo Krummrich
---
 drivers/gpu/drm/nouveau/nouveau_bo.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 7724fe63067d..057bc995f19b 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -215,11 +215,14 @@ nouveau_bo_alloc(struct nouveau_cli *cli, u64 *size, int *align, u32 domain,
 	nvbo = kzalloc(sizeof(struct nouveau_bo), GFP_KERNEL);
 	if (!nvbo)
 		return ERR_PTR(-ENOMEM);
+
 	INIT_LIST_HEAD(&nvbo->head);
 	INIT_LIST_HEAD(&nvbo->entry);
 	INIT_LIST_HEAD(&nvbo->vma_list);
 	nvbo->bo.bdev = &drm->ttm.bdev;
+	drm_gem_gpuva_init(&nvbo->bo.base);
+
 	/* This is confusing, and doesn't actually mean we want an uncached
 	 * mapping, but is what NOUVEAU_GEM_DOMAIN_COHERENT gets translated
 	 * into in nouveau_gem_new().
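For context, the GEM-side initializer from the GPU VA manager patch earlier
in this series presumably boils down to something like the following sketch;
the obj->gpuva.list field name follows the series' drm_gem_object extension
and should be treated as an assumption, not verbatim code:

static inline void my_gem_gpuva_init(struct drm_gem_object *obj)
{
	/* The per-GEM list presumably walked by helpers such as
	 * drm_gpuva_gem_unmap_ops_create() above. */
	INIT_LIST_HEAD(&obj->gpuva.list);
}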
From patchwork Tue Apr 4 01:27:35 2023
X-Patchwork-Submitter: Danilo Krummrich
X-Patchwork-Id: 78809
From: Danilo Krummrich
Subject: [PATCH drm-next v3 09/15] drm/nouveau: move usercopy helpers to nouveau_drv.h
Date: Tue, 4 Apr 2023 03:27:35 +0200
Message-Id: <20230404012741.116502-10-dakr@redhat.com>

Move the usercopy helpers to a common driver header file to make them
usable for the new API added in subsequent commits.

Signed-off-by: Danilo Krummrich
---
 drivers/gpu/drm/nouveau/nouveau_drv.h | 26 ++++++++++++++++++++++++++
 drivers/gpu/drm/nouveau/nouveau_gem.c | 26 --------------------------
 2 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h b/drivers/gpu/drm/nouveau/nouveau_drv.h
index 81350e685b50..20a7f31b9082 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drv.h
+++ b/drivers/gpu/drm/nouveau/nouveau_drv.h
@@ -130,6 +130,32 @@ nouveau_cli(struct drm_file *fpriv)
 	return fpriv ? fpriv->driver_priv : NULL;
 }

+static inline void
+u_free(void *addr)
+{
+	kvfree(addr);
+}
+
+static inline void *
+u_memcpya(uint64_t user, unsigned nmemb, unsigned size)
+{
+	void *mem;
+	void __user *userptr = (void __force __user *)(uintptr_t)user;
+
+	size *= nmemb;
+
+	mem = kvmalloc(size, GFP_KERNEL);
+	if (!mem)
+		return ERR_PTR(-ENOMEM);
+
+	if (copy_from_user(mem, userptr, size)) {
+		u_free(mem);
+		return ERR_PTR(-EFAULT);
+	}
+
+	return mem;
+}
+
 #include
 #include

diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 08689ced4f6a..4369c8dc8b5b 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -613,32 +613,6 @@ nouveau_gem_pushbuf_validate(struct nouveau_channel *chan,
 	return 0;
 }

-static inline void
-u_free(void *addr)
-{
-	kvfree(addr);
-}
-
-static inline void *
-u_memcpya(uint64_t user, unsigned nmemb, unsigned size)
-{
-	void *mem;
-	void __user *userptr = (void __force __user *)(uintptr_t)user;
-
-	size *= nmemb;
-
-	mem = kvmalloc(size, GFP_KERNEL);
-	if (!mem)
-		return ERR_PTR(-ENOMEM);
-
-	if (copy_from_user(mem, userptr, size)) {
-		u_free(mem);
-		return ERR_PTR(-EFAULT);
-	}
-
-	return mem;
-}
-
 static int
 nouveau_gem_pushbuf_reloc_apply(struct nouveau_cli *cli,
 				struct drm_nouveau_gem_pushbuf *req,
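A hedged sketch of how the relocated helpers might be used by the new uAPI
code. Note that u_memcpya() multiplies nmemb by size without an overflow
check, so callers should bound the element count first; MY_VM_BIND_MAX_OPS
and the wrapper below are illustrative, not part of this patch.

#define MY_VM_BIND_MAX_OPS 4096 /* illustrative bound */

static struct drm_nouveau_vm_bind_op *
my_vm_bind_get_ops(struct drm_nouveau_vm_bind *req)
{
	if (!req->op_count || req->op_count > MY_VM_BIND_MAX_OPS)
		return ERR_PTR(-EINVAL);

	/* Copies op_count * sizeof(op) bytes from userspace or returns
	 * an ERR_PTR; release the result with u_free(). */
	return u_memcpya(req->op_ptr, req->op_count,
			 sizeof(struct drm_nouveau_vm_bind_op));
}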
From patchwork Tue Apr 4 01:27:36 2023
X-Patchwork-Submitter: Danilo Krummrich
X-Patchwork-Id: 78812
From: Danilo Krummrich <dakr@redhat.com>
Subject: [PATCH drm-next v3 10/15] drm/nouveau: fence: separate fence alloc and emit
Date: Tue, 4 Apr 2023 03:27:36 +0200
Message-Id: <20230404012741.116502-11-dakr@redhat.com>
In-Reply-To: <20230404012741.116502-1-dakr@redhat.com>
References: <20230404012741.116502-1-dakr@redhat.com>

The new (VM_BIND) UAPI exports DMA fences through DRM syncobjs. Hence, in
order to emit fences within DMA fence signalling critical sections (e.g. as
typically done in the DRM GPU scheduler's run_job() callback) we need to
separate fence allocation and fence emitting.
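For context, the converted call sites below all follow the same two-step
pattern. A minimal sketch of that pattern (illustrative only, not part of
the patch; chan is assumed to be a valid struct nouveau_channel pointer):

	struct nouveau_fence *fence = NULL;
	int ret;

	/* Allocation can sleep and can fail; keep it outside of any DMA
	 * fence signalling critical section.
	 */
	ret = nouveau_fence_new(&fence);
	if (ret)
		return ret;

	/* Emission is what remains legal inside the signalling critical
	 * path, e.g. the DRM GPU scheduler's run_job() callback. On
	 * failure the caller now owns the cleanup.
	 */
	ret = nouveau_fence_emit(fence, chan);
	if (ret)
		nouveau_fence_unref(&fence);

	return ret;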
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
 drivers/gpu/drm/nouveau/dispnv04/crtc.c |  9 ++++-
 drivers/gpu/drm/nouveau/nouveau_bo.c    | 52 +++++++++++++++----------
 drivers/gpu/drm/nouveau/nouveau_chan.c  |  6 ++-
 drivers/gpu/drm/nouveau/nouveau_dmem.c  |  9 +++--
 drivers/gpu/drm/nouveau/nouveau_fence.c | 16 +++-----
 drivers/gpu/drm/nouveau/nouveau_fence.h |  3 +-
 drivers/gpu/drm/nouveau/nouveau_gem.c   |  5 ++-
 7 files changed, 59 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/dispnv04/crtc.c b/drivers/gpu/drm/nouveau/dispnv04/crtc.c
index a6f2e681bde9..a34924523133 100644
--- a/drivers/gpu/drm/nouveau/dispnv04/crtc.c
+++ b/drivers/gpu/drm/nouveau/dispnv04/crtc.c
@@ -1122,11 +1122,18 @@ nv04_page_flip_emit(struct nouveau_channel *chan,
 	PUSH_NVSQ(push, NV_SW, NV_SW_PAGE_FLIP, 0x00000000);
 	PUSH_KICK(push);
 
-	ret = nouveau_fence_new(chan, false, pfence);
+	ret = nouveau_fence_new(pfence);
 	if (ret)
 		goto fail;
 
+	ret = nouveau_fence_emit(*pfence, chan);
+	if (ret)
+		goto fail_fence_unref;
+
 	return 0;
 
+fail_fence_unref:
+	nouveau_fence_unref(pfence);
 fail:
 	spin_lock_irqsave(&dev->event_lock, flags);
 	list_del(&s->head);

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 057bc995f19b..e9cbbf594e6f 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -820,29 +820,39 @@ nouveau_bo_move_m2mf(struct ttm_buffer_object *bo, int evict,
 		mutex_lock(&cli->mutex);
 	else
 		mutex_lock_nested(&cli->mutex, SINGLE_DEPTH_NESTING);
+
 	ret = nouveau_fence_sync(nouveau_bo(bo), chan, true, ctx->interruptible);
-	if (ret == 0) {
-		ret = drm->ttm.move(chan, bo, bo->resource, new_reg);
-		if (ret == 0) {
-			ret = nouveau_fence_new(chan, false, &fence);
-			if (ret == 0) {
-				/* TODO: figure out a better solution here
-				 *
-				 * wait on the fence here explicitly as going through
-				 * ttm_bo_move_accel_cleanup somehow doesn't seem to do it.
-				 *
-				 * Without this the operation can timeout and we'll fallback to a
-				 * software copy, which might take several minutes to finish.
-				 */
-				nouveau_fence_wait(fence, false, false);
-				ret = ttm_bo_move_accel_cleanup(bo,
-								&fence->base,
-								evict, false,
-								new_reg);
-				nouveau_fence_unref(&fence);
-			}
-		}
+	if (ret)
+		goto out_unlock;
+
+	ret = drm->ttm.move(chan, bo, bo->resource, new_reg);
+	if (ret)
+		goto out_unlock;
+
+	ret = nouveau_fence_new(&fence);
+	if (ret)
+		goto out_unlock;
+
+	ret = nouveau_fence_emit(fence, chan);
+	if (ret) {
+		nouveau_fence_unref(&fence);
+		goto out_unlock;
 	}
+
+	/* TODO: figure out a better solution here
+	 *
+	 * wait on the fence here explicitly as going through
+	 * ttm_bo_move_accel_cleanup somehow doesn't seem to do it.
+	 *
+	 * Without this the operation can timeout and we'll fallback to a
+	 * software copy, which might take several minutes to finish.
+	 */
+	nouveau_fence_wait(fence, false, false);
+	ret = ttm_bo_move_accel_cleanup(bo, &fence->base, evict, false,
+					new_reg);
+	nouveau_fence_unref(&fence);
+
+out_unlock:
 	mutex_unlock(&cli->mutex);
 	return ret;
 }

diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c b/drivers/gpu/drm/nouveau/nouveau_chan.c
index 1068abe41024..f47c0363683c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_chan.c
+++ b/drivers/gpu/drm/nouveau/nouveau_chan.c
@@ -62,9 +62,11 @@ nouveau_channel_idle(struct nouveau_channel *chan)
 		struct nouveau_fence *fence = NULL;
 		int ret;
 
-		ret = nouveau_fence_new(chan, false, &fence);
+		ret = nouveau_fence_new(&fence);
 		if (!ret) {
-			ret = nouveau_fence_wait(fence, false, false);
+			ret = nouveau_fence_emit(fence, chan);
+			if (!ret)
+				ret = nouveau_fence_wait(fence, false, false);
 			nouveau_fence_unref(&fence);
 		}

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 789857faa048..4ad40e42cae1 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -209,7 +209,8 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
 		goto done;
 	}
 
-	nouveau_fence_new(dmem->migrate.chan, false, &fence);
+	if (!nouveau_fence_new(&fence))
+		nouveau_fence_emit(fence, dmem->migrate.chan);
 	migrate_vma_pages(&args);
 	nouveau_dmem_fence_done(&fence);
 	dma_unmap_page(drm->dev->dev, dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
@@ -402,7 +403,8 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
 		}
 	}
 
-	nouveau_fence_new(chunk->drm->dmem->migrate.chan, false, &fence);
+	if (!nouveau_fence_new(&fence))
+		nouveau_fence_emit(fence, chunk->drm->dmem->migrate.chan);
 	migrate_device_pages(src_pfns, dst_pfns, npages);
 	nouveau_dmem_fence_done(&fence);
 	migrate_device_finalize(src_pfns, dst_pfns, npages);
@@ -675,7 +677,8 @@ static void nouveau_dmem_migrate_chunk(struct nouveau_drm *drm,
 		addr += PAGE_SIZE;
 	}
 
-	nouveau_fence_new(drm->dmem->migrate.chan, false, &fence);
+	if (!nouveau_fence_new(&fence))
+		nouveau_fence_emit(fence, drm->dmem->migrate.chan);
 	migrate_vma_pages(args);
 	nouveau_dmem_fence_done(&fence);
 	nouveau_pfns_map(svmm, args->vma->vm_mm, args->start, pfns, i);

diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c
index ee5e9d40c166..e946408f945b 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fence.c
+++ b/drivers/gpu/drm/nouveau/nouveau_fence.c
@@ -210,6 +210,9 @@ nouveau_fence_emit(struct nouveau_fence *fence, struct nouveau_channel *chan)
 	struct nouveau_fence_priv *priv = (void*)chan->drm->fence;
 	int ret;
 
+	if (unlikely(!chan->fence))
+		return -ENODEV;
+
 	fence->channel  = chan;
 	fence->timeout  = jiffies + (15 * HZ);
 
@@ -396,25 +399,16 @@ nouveau_fence_unref(struct nouveau_fence **pfence)
 }
 
 int
-nouveau_fence_new(struct nouveau_channel *chan, bool sysmem,
-		  struct nouveau_fence **pfence)
+nouveau_fence_new(struct nouveau_fence **pfence)
 {
 	struct nouveau_fence *fence;
-	int ret = 0;
-
-	if (unlikely(!chan->fence))
-		return -ENODEV;
 
 	fence = kzalloc(sizeof(*fence), GFP_KERNEL);
 	if (!fence)
 		return -ENOMEM;
 
-	ret = nouveau_fence_emit(fence, chan);
-	if (ret)
-		nouveau_fence_unref(&fence);
-
 	*pfence = fence;
-	return ret;
+	return 0;
 }
 
 static const char *nouveau_fence_get_get_driver_name(struct dma_fence *fence)

diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.h b/drivers/gpu/drm/nouveau/nouveau_fence.h
index 0ca2bc85adf6..7c73c7c9834a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fence.h
+++ b/drivers/gpu/drm/nouveau/nouveau_fence.h
@@ -17,8 +17,7 @@ struct nouveau_fence {
 	unsigned long timeout;
 };
 
-int  nouveau_fence_new(struct nouveau_channel *, bool sysmem,
-		       struct nouveau_fence **);
+int  nouveau_fence_new(struct nouveau_fence **);
 void nouveau_fence_unref(struct nouveau_fence **);
 
 int  nouveau_fence_emit(struct nouveau_fence *, struct nouveau_channel *);

diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 4369c8dc8b5b..061cfd55217a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -867,8 +867,11 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data,
 		}
 	}
 
-	ret = nouveau_fence_new(chan, false, &fence);
+	ret = nouveau_fence_new(&fence);
+	if (!ret)
+		ret = nouveau_fence_emit(fence, chan);
 	if (ret) {
+		nouveau_fence_unref(&fence);
 		NV_PRINTK(err, cli, "error fencing pushbuf: %d\n", ret);
 		WIND_RING(chan);
 		goto out;

From patchwork Tue Apr 4 01:27:37 2023
X-Patchwork-Submitter: Danilo Krummrich
X-Patchwork-Id: 78821
From: Danilo Krummrich <dakr@redhat.com>
Subject: [PATCH drm-next v3 11/15] drm/nouveau: fence: fail to emit when fence context is killed
Date: Tue, 4 Apr 2023 03:27:37 +0200
Message-Id: <20230404012741.116502-12-dakr@redhat.com>
In-Reply-To: <20230404012741.116502-1-dakr@redhat.com>
References: <20230404012741.116502-1-dakr@redhat.com>

The new VM_BIND UAPI implementation introduced in subsequent commits will
allow asynchronous jobs processing push buffers and emitting fences.

If a fence context is killed, e.g. due to a channel fault, jobs which are
already queued for execution might still emit new fences. In such a case a
job would hang forever.

To fix that, fail to emit a new fence on a killed fence context with
-ENODEV to unblock the job.
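From a caller's perspective the resulting contract is that
nouveau_fence_emit() may now fail with -ENODEV after a channel fault. A
sketch of the error handling this enables (mirroring the EXEC job run path
added later in this series; cli and chan are illustrative):

	ret = nouveau_fence_emit(fence, chan);
	if (ret) {
		/* -ENODEV: the fence context was killed, so this pushbuf
		 * can never signal; undo the push and fail the job instead
		 * of blocking forever.
		 */
		NV_PRINTK(err, cli, "error fencing pushbuf: %d\n", ret);
		WIND_RING(chan);
		return ERR_PTR(ret);
	}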
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
 drivers/gpu/drm/nouveau/nouveau_fence.c | 7 +++++++
 drivers/gpu/drm/nouveau/nouveau_fence.h | 2 +-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c
index e946408f945b..77c739a55b19 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fence.c
+++ b/drivers/gpu/drm/nouveau/nouveau_fence.c
@@ -96,6 +96,7 @@ nouveau_fence_context_kill(struct nouveau_fence_chan *fctx, int error)
 		if (nouveau_fence_signal(fence))
 			nvif_event_block(&fctx->event);
 	}
+	fctx->killed = 1;
 	spin_unlock_irqrestore(&fctx->lock, flags);
 }
 
@@ -229,6 +230,12 @@ nouveau_fence_emit(struct nouveau_fence *fence, struct nouveau_channel *chan)
 
 	dma_fence_get(&fence->base);
 	spin_lock_irq(&fctx->lock);
+	if (unlikely(fctx->killed)) {
+		spin_unlock_irq(&fctx->lock);
+		dma_fence_put(&fence->base);
+		return -ENODEV;
+	}
+
 	if (nouveau_fence_update(chan, fctx))
 		nvif_event_block(&fctx->event);

diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.h b/drivers/gpu/drm/nouveau/nouveau_fence.h
index 7c73c7c9834a..2c72d96ef17d 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fence.h
+++ b/drivers/gpu/drm/nouveau/nouveau_fence.h
@@ -44,7 +44,7 @@ struct nouveau_fence_chan {
 	char name[32];
 
 	struct nvif_event event;
-	int notify_ref, dead;
+	int notify_ref, dead, killed;
 };
 
 struct nouveau_fence_priv {

From patchwork Tue Apr 4 01:27:38 2023
X-Patchwork-Submitter: Danilo Krummrich
X-Patchwork-Id: 78806
From: Danilo Krummrich <dakr@redhat.com>
Subject: [PATCH drm-next v3 12/15] drm/nouveau: chan: provide nouveau_channel_kill()
Date: Tue, 4 Apr 2023 03:27:38 +0200
Message-Id: <20230404012741.116502-13-dakr@redhat.com>
In-Reply-To: <20230404012741.116502-1-dakr@redhat.com>
References: <20230404012741.116502-1-dakr@redhat.com>

The new VM_BIND UAPI implementation introduced in subsequent commits will
allow asynchronous jobs processing push buffers and emitting fences.

If a job times out, we need a way to recover from this situation. For now,
simply kill the channel to unblock all hung up jobs and signal userspace
that the device is dead on the next EXEC or VM_BIND ioctl.
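The intended user of the new helper is a job timeout handler; condensed
from nouveau_exec_job_timeout() added in patch 14 of this series (a sketch,
not a complete handler):

	static enum drm_gpu_sched_stat
	job_timeout(struct nouveau_job *job)
	{
		struct nouveau_channel *chan = to_nouveau_exec_job(job)->chan;

		/* Kill the channel once; this also kills its fence context,
		 * so every fence still pending on it signals with -ENODEV
		 * and the hung jobs unblock.
		 */
		if (unlikely(!atomic_read(&chan->killed)))
			nouveau_channel_kill(chan);

		return DRM_GPU_SCHED_STAT_ENODEV;
	}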
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
 drivers/gpu/drm/nouveau/nouveau_chan.c | 14 +++++++++++---
 drivers/gpu/drm/nouveau/nouveau_chan.h |  1 +
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c b/drivers/gpu/drm/nouveau/nouveau_chan.c
index f47c0363683c..a975f8b0e0e5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_chan.c
+++ b/drivers/gpu/drm/nouveau/nouveau_chan.c
@@ -40,6 +40,14 @@ MODULE_PARM_DESC(vram_pushbuf, "Create DMA push buffers in VRAM");
 int nouveau_vram_pushbuf;
 module_param_named(vram_pushbuf, nouveau_vram_pushbuf, int, 0400);
 
+void
+nouveau_channel_kill(struct nouveau_channel *chan)
+{
+	atomic_set(&chan->killed, 1);
+	if (chan->fence)
+		nouveau_fence_context_kill(chan->fence, -ENODEV);
+}
+
 static int
 nouveau_channel_killed(struct nvif_event *event, void *repv, u32 repc)
 {
@@ -47,9 +55,9 @@ nouveau_channel_killed(struct nvif_event *event, void *repv, u32 repc)
 	struct nouveau_cli *cli = (void *)chan->user.client;
 
 	NV_PRINTK(warn, cli, "channel %d killed!\n", chan->chid);
-	atomic_set(&chan->killed, 1);
-	if (chan->fence)
-		nouveau_fence_context_kill(chan->fence, -ENODEV);
+
+	if (unlikely(!atomic_read(&chan->killed)))
+		nouveau_channel_kill(chan);
 
 	return NVIF_EVENT_DROP;
 }

diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.h b/drivers/gpu/drm/nouveau/nouveau_chan.h
index e06a8ffed31a..e483f4a254da 100644
--- a/drivers/gpu/drm/nouveau/nouveau_chan.h
+++ b/drivers/gpu/drm/nouveau/nouveau_chan.h
@@ -65,6 +65,7 @@ int  nouveau_channel_new(struct nouveau_drm *, struct nvif_device *, bool priv, u32 vram,
 			 u32 gart, struct nouveau_channel **);
 void nouveau_channel_del(struct nouveau_channel **);
 int  nouveau_channel_idle(struct nouveau_channel *);
+void nouveau_channel_kill(struct nouveau_channel *);
 
 extern int nouveau_vram_pushbuf;

From patchwork Tue Apr 4 01:27:40 2023
X-Patchwork-Submitter: Danilo Krummrich
X-Patchwork-Id: 78808
From: Danilo Krummrich <dakr@redhat.com>
Subject: [PATCH drm-next v3 14/15] drm/nouveau: implement new VM_BIND uAPI
Date: Tue, 4 Apr 2023 03:27:40 +0200
Message-Id: <20230404012741.116502-15-dakr@redhat.com>
In-Reply-To: <20230404012741.116502-1-dakr@redhat.com>
References: <20230404012741.116502-1-dakr@redhat.com>

This commit provides the implementation for the new uapi motivated by the
Vulkan API. It allows user mode drivers (UMDs) to:

1) Initialize a GPU virtual address (VA) space via the new
   DRM_IOCTL_NOUVEAU_VM_INIT ioctl for UMDs to specify the portion of VA
   space managed by the kernel and userspace, respectively.

2) Allocate and free a VA space region as well as bind and unbind memory
   to the GPU's VA space via the new DRM_IOCTL_NOUVEAU_VM_BIND ioctl.
   UMDs can request the named operations to be processed either
   synchronously or asynchronously. It supports DRM syncobjs (incl.
   timelines) as synchronization mechanism. The management of the GPU VA
   mappings is implemented with the DRM GPU VA manager.

3) Execute push buffers with the new DRM_IOCTL_NOUVEAU_EXEC ioctl. The
   execution happens asynchronously. It supports DRM syncobjs (incl.
   timelines) as synchronization mechanism. DRM GEM object locking is
   handled with drm_exec.

Both DRM_IOCTL_NOUVEAU_VM_BIND and DRM_IOCTL_NOUVEAU_EXEC use the DRM GPU
scheduler for the asynchronous paths.
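As a rough illustration from the userspace side, an EXEC submission could
look like the sketch below. The field names follow the ioctl handler in
nouveau_exec.c further down; the exact uapi struct layout lives in
include/uapi/drm/nouveau_drm.h of this series, and the channel id and push
buffer address are hypothetical:

	/* assumes <xf86drm.h> (libdrm) and <err.h> */
	struct drm_nouveau_exec_push push = {
		.va     = pushbuf_va,	/* GPU VA of the push buffer */
		.va_len = pushbuf_len,	/* length in bytes */
	};
	struct drm_nouveau_exec exec = {
		.channel    = chid,	/* chid from channel allocation */
		.push_count = 1,
		.push_ptr   = (__u64)(uintptr_t)&push,
		.wait_count = 0,	/* no syncobjs to wait for */
		.sig_count  = 0,	/* no syncobjs to signal */
	};

	if (drmIoctl(fd, DRM_IOCTL_NOUVEAU_EXEC, &exec))
		/* -ENODEV here means the channel was killed, see the
		 * timeout handling in patches 11 and 12.
		 */
		err(1, "DRM_IOCTL_NOUVEAU_EXEC");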
Signed-off-by: Danilo Krummrich --- Documentation/gpu/driver-uapi.rst | 3 + drivers/gpu/drm/nouveau/Kbuild | 3 + drivers/gpu/drm/nouveau/Kconfig | 2 + drivers/gpu/drm/nouveau/nouveau_abi16.c | 24 + drivers/gpu/drm/nouveau/nouveau_abi16.h | 1 + drivers/gpu/drm/nouveau/nouveau_bo.c | 147 +- drivers/gpu/drm/nouveau/nouveau_bo.h | 2 +- drivers/gpu/drm/nouveau/nouveau_drm.c | 27 +- drivers/gpu/drm/nouveau/nouveau_drv.h | 59 +- drivers/gpu/drm/nouveau/nouveau_exec.c | 363 +++++ drivers/gpu/drm/nouveau/nouveau_exec.h | 42 + drivers/gpu/drm/nouveau/nouveau_gem.c | 25 +- drivers/gpu/drm/nouveau/nouveau_mem.h | 5 + drivers/gpu/drm/nouveau/nouveau_prime.c | 2 +- drivers/gpu/drm/nouveau/nouveau_sched.c | 494 ++++++ drivers/gpu/drm/nouveau/nouveau_sched.h | 116 ++ drivers/gpu/drm/nouveau/nouveau_uvmm.c | 1836 +++++++++++++++++++++++ drivers/gpu/drm/nouveau/nouveau_uvmm.h | 98 ++ 18 files changed, 3184 insertions(+), 65 deletions(-) create mode 100644 drivers/gpu/drm/nouveau/nouveau_exec.c create mode 100644 drivers/gpu/drm/nouveau/nouveau_exec.h create mode 100644 drivers/gpu/drm/nouveau/nouveau_sched.c create mode 100644 drivers/gpu/drm/nouveau/nouveau_sched.h create mode 100644 drivers/gpu/drm/nouveau/nouveau_uvmm.c create mode 100644 drivers/gpu/drm/nouveau/nouveau_uvmm.h diff --git a/Documentation/gpu/driver-uapi.rst b/Documentation/gpu/driver-uapi.rst index 9c7ca6e33a68..c08bcbb95fb3 100644 --- a/Documentation/gpu/driver-uapi.rst +++ b/Documentation/gpu/driver-uapi.rst @@ -13,4 +13,7 @@ drm/nouveau uAPI VM_BIND / EXEC uAPI ------------------- +.. kernel-doc:: drivers/gpu/drm/nouveau/nouveau_exec.c + :doc: Overview + .. kernel-doc:: include/uapi/drm/nouveau_drm.h diff --git a/drivers/gpu/drm/nouveau/Kbuild b/drivers/gpu/drm/nouveau/Kbuild index 5e5617006da5..cf6b3a80c0c8 100644 --- a/drivers/gpu/drm/nouveau/Kbuild +++ b/drivers/gpu/drm/nouveau/Kbuild @@ -47,6 +47,9 @@ nouveau-y += nouveau_prime.o nouveau-y += nouveau_sgdma.o nouveau-y += nouveau_ttm.o nouveau-y += nouveau_vmm.o +nouveau-y += nouveau_exec.o +nouveau-y += nouveau_sched.o +nouveau-y += nouveau_uvmm.o # DRM - modesetting nouveau-$(CONFIG_DRM_NOUVEAU_BACKLIGHT) += nouveau_backlight.o diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig index a70bd65e1400..c52e8096cca4 100644 --- a/drivers/gpu/drm/nouveau/Kconfig +++ b/drivers/gpu/drm/nouveau/Kconfig @@ -10,6 +10,8 @@ config DRM_NOUVEAU select DRM_KMS_HELPER select DRM_TTM select DRM_TTM_HELPER + select DRM_EXEC + select DRM_SCHED select I2C select I2C_ALGOBIT select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT diff --git a/drivers/gpu/drm/nouveau/nouveau_abi16.c b/drivers/gpu/drm/nouveau/nouveau_abi16.c index 82dab51d8aeb..a112f28681d3 100644 --- a/drivers/gpu/drm/nouveau/nouveau_abi16.c +++ b/drivers/gpu/drm/nouveau/nouveau_abi16.c @@ -35,6 +35,7 @@ #include "nouveau_chan.h" #include "nouveau_abi16.h" #include "nouveau_vmm.h" +#include "nouveau_sched.h" static struct nouveau_abi16 * nouveau_abi16(struct drm_file *file_priv) @@ -125,6 +126,17 @@ nouveau_abi16_chan_fini(struct nouveau_abi16 *abi16, { struct nouveau_abi16_ntfy *ntfy, *temp; + /* When a client exits without waiting for it's queued up jobs to + * finish it might happen that we fault the channel. This is due to + * drm_file_free() calling drm_gem_release() before the postclose() + * callback. Hence, we can't tear down this scheduler entity before + * uvmm mappings are unmapped. Currently, we can't detect this case. 
+ * + * However, this should be rare and harmless, since the channel isn't + * needed anymore. + */ + nouveau_sched_entity_fini(&chan->sched_entity); + /* wait for all activity to stop before cleaning up */ if (chan->chan) nouveau_channel_idle(chan->chan); @@ -261,6 +273,13 @@ nouveau_abi16_ioctl_channel_alloc(ABI16_IOCTL_ARGS) if (!drm->channel) return nouveau_abi16_put(abi16, -ENODEV); + /* If uvmm wasn't initialized until now disable it completely to prevent + * userspace from mixing up UAPIs. + * + * The client lock is already acquired by nouveau_abi16_get(). + */ + __nouveau_cli_uvmm_disable(cli); + device = &abi16->device; engine = NV_DEVICE_HOST_RUNLIST_ENGINES_GR; @@ -304,6 +323,11 @@ nouveau_abi16_ioctl_channel_alloc(ABI16_IOCTL_ARGS) if (ret) goto done; + ret = nouveau_sched_entity_init(&chan->sched_entity, &drm->sched, + drm->sched_wq); + if (ret) + goto done; + init->channel = chan->chan->chid; if (device->info.family >= NV_DEVICE_INFO_V0_TESLA) diff --git a/drivers/gpu/drm/nouveau/nouveau_abi16.h b/drivers/gpu/drm/nouveau/nouveau_abi16.h index 27eae85f33e6..8209eb28feaf 100644 --- a/drivers/gpu/drm/nouveau/nouveau_abi16.h +++ b/drivers/gpu/drm/nouveau/nouveau_abi16.h @@ -26,6 +26,7 @@ struct nouveau_abi16_chan { struct nouveau_bo *ntfy; struct nouveau_vma *ntfy_vma; struct nvkm_mm heap; + struct nouveau_sched_entity sched_entity; }; struct nouveau_abi16 { diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c index e9cbbf594e6f..6487185f2d11 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.c +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c @@ -199,7 +199,7 @@ nouveau_bo_fixup_align(struct nouveau_bo *nvbo, int *align, u64 *size) struct nouveau_bo * nouveau_bo_alloc(struct nouveau_cli *cli, u64 *size, int *align, u32 domain, - u32 tile_mode, u32 tile_flags) + u32 tile_mode, u32 tile_flags, bool internal) { struct nouveau_drm *drm = cli->drm; struct nouveau_bo *nvbo; @@ -235,68 +235,103 @@ nouveau_bo_alloc(struct nouveau_cli *cli, u64 *size, int *align, u32 domain, nvbo->force_coherent = true; } - if (cli->device.info.family >= NV_DEVICE_INFO_V0_FERMI) { - nvbo->kind = (tile_flags & 0x0000ff00) >> 8; - if (!nvif_mmu_kind_valid(mmu, nvbo->kind)) { - kfree(nvbo); - return ERR_PTR(-EINVAL); + nvbo->contig = !(tile_flags & NOUVEAU_GEM_TILE_NONCONTIG); + if (!nouveau_cli_uvmm(cli) || internal) { + /* for BO noVM allocs, don't assign kinds */ + if (cli->device.info.family >= NV_DEVICE_INFO_V0_FERMI) { + nvbo->kind = (tile_flags & 0x0000ff00) >> 8; + if (!nvif_mmu_kind_valid(mmu, nvbo->kind)) { + kfree(nvbo); + return ERR_PTR(-EINVAL); + } + + nvbo->comp = mmu->kind[nvbo->kind] != nvbo->kind; + } else if (cli->device.info.family >= NV_DEVICE_INFO_V0_TESLA) { + nvbo->kind = (tile_flags & 0x00007f00) >> 8; + nvbo->comp = (tile_flags & 0x00030000) >> 16; + if (!nvif_mmu_kind_valid(mmu, nvbo->kind)) { + kfree(nvbo); + return ERR_PTR(-EINVAL); + } + } else { + nvbo->zeta = (tile_flags & 0x00000007); } + nvbo->mode = tile_mode; + + /* Determine the desirable target GPU page size for the buffer. */ + for (i = 0; i < vmm->page_nr; i++) { + /* Because we cannot currently allow VMM maps to fail + * during buffer migration, we need to determine page + * size for the buffer up-front, and pre-allocate its + * page tables. + * + * Skip page sizes that can't support needed domains. 
+ */ + if (cli->device.info.family > NV_DEVICE_INFO_V0_CURIE && + (domain & NOUVEAU_GEM_DOMAIN_VRAM) && !vmm->page[i].vram) + continue; + if ((domain & NOUVEAU_GEM_DOMAIN_GART) && + (!vmm->page[i].host || vmm->page[i].shift > PAGE_SHIFT)) + continue; - nvbo->comp = mmu->kind[nvbo->kind] != nvbo->kind; - } else - if (cli->device.info.family >= NV_DEVICE_INFO_V0_TESLA) { - nvbo->kind = (tile_flags & 0x00007f00) >> 8; - nvbo->comp = (tile_flags & 0x00030000) >> 16; - if (!nvif_mmu_kind_valid(mmu, nvbo->kind)) { + /* Select this page size if it's the first that supports + * the potential memory domains, or when it's compatible + * with the requested compression settings. + */ + if (pi < 0 || !nvbo->comp || vmm->page[i].comp) + pi = i; + + /* Stop once the buffer is larger than the current page size. */ + if (*size >= 1ULL << vmm->page[i].shift) + break; + } + + if (WARN_ON(pi < 0)) { kfree(nvbo); return ERR_PTR(-EINVAL); } - } else { - nvbo->zeta = (tile_flags & 0x00000007); - } - nvbo->mode = tile_mode; - nvbo->contig = !(tile_flags & NOUVEAU_GEM_TILE_NONCONTIG); - - /* Determine the desirable target GPU page size for the buffer. */ - for (i = 0; i < vmm->page_nr; i++) { - /* Because we cannot currently allow VMM maps to fail - * during buffer migration, we need to determine page - * size for the buffer up-front, and pre-allocate its - * page tables. - * - * Skip page sizes that can't support needed domains. - */ - if (cli->device.info.family > NV_DEVICE_INFO_V0_CURIE && - (domain & NOUVEAU_GEM_DOMAIN_VRAM) && !vmm->page[i].vram) - continue; - if ((domain & NOUVEAU_GEM_DOMAIN_GART) && - (!vmm->page[i].host || vmm->page[i].shift > PAGE_SHIFT)) - continue; - /* Select this page size if it's the first that supports - * the potential memory domains, or when it's compatible - * with the requested compression settings. - */ - if (pi < 0 || !nvbo->comp || vmm->page[i].comp) - pi = i; - - /* Stop once the buffer is larger than the current page size. */ - if (*size >= 1ULL << vmm->page[i].shift) - break; - } + /* Disable compression if suitable settings couldn't be found. */ + if (nvbo->comp && !vmm->page[pi].comp) { + if (mmu->object.oclass >= NVIF_CLASS_MMU_GF100) + nvbo->kind = mmu->kind[nvbo->kind]; + nvbo->comp = 0; + } + nvbo->page = vmm->page[pi].shift; + } else { + /* reject other tile flags when in VM mode. */ + if (tile_mode) + return ERR_PTR(-EINVAL); + if (tile_flags & ~NOUVEAU_GEM_TILE_NONCONTIG) + return ERR_PTR(-EINVAL); - if (WARN_ON(pi < 0)) { - kfree(nvbo); - return ERR_PTR(-EINVAL); - } + /* Determine the desirable target GPU page size for the buffer. */ + for (i = 0; i < vmm->page_nr; i++) { + /* Because we cannot currently allow VMM maps to fail + * during buffer migration, we need to determine page + * size for the buffer up-front, and pre-allocate its + * page tables. + * + * Skip page sizes that can't support needed domains. + */ + if ((domain & NOUVEAU_GEM_DOMAIN_VRAM) && !vmm->page[i].vram) + continue; + if ((domain & NOUVEAU_GEM_DOMAIN_GART) && + (!vmm->page[i].host || vmm->page[i].shift > PAGE_SHIFT)) + continue; - /* Disable compression if suitable settings couldn't be found. */ - if (nvbo->comp && !vmm->page[pi].comp) { - if (mmu->object.oclass >= NVIF_CLASS_MMU_GF100) - nvbo->kind = mmu->kind[nvbo->kind]; - nvbo->comp = 0; + if (pi < 0) + pi = i; + /* Stop once the buffer is larger than the current page size. 
*/ + if (*size >= 1ULL << vmm->page[i].shift) + break; + } + if (WARN_ON(pi < 0)) { + kfree(nvbo); + return ERR_PTR(-EINVAL); + } + nvbo->page = vmm->page[pi].shift; } - nvbo->page = vmm->page[pi].shift; nouveau_bo_fixup_align(nvbo, align, size); @@ -334,7 +369,7 @@ nouveau_bo_new(struct nouveau_cli *cli, u64 size, int align, int ret; nvbo = nouveau_bo_alloc(cli, &size, &align, domain, tile_mode, - tile_flags); + tile_flags, true); if (IS_ERR(nvbo)) return PTR_ERR(nvbo); @@ -948,6 +983,7 @@ static void nouveau_bo_move_ntfy(struct ttm_buffer_object *bo, list_for_each_entry(vma, &nvbo->vma_list, head) { nouveau_vma_map(vma, mem); } + nouveau_uvmm_bo_map_all(nvbo, mem); } else { list_for_each_entry(vma, &nvbo->vma_list, head) { ret = dma_resv_wait_timeout(bo->base.resv, @@ -956,6 +992,7 @@ static void nouveau_bo_move_ntfy(struct ttm_buffer_object *bo, WARN_ON(ret <= 0); nouveau_vma_unmap(vma); } + nouveau_uvmm_bo_unmap_all(nvbo); } if (new_reg) diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h index 774dd93ca76b..cb85207d9e8f 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.h +++ b/drivers/gpu/drm/nouveau/nouveau_bo.h @@ -73,7 +73,7 @@ extern struct ttm_device_funcs nouveau_bo_driver; void nouveau_bo_move_init(struct nouveau_drm *); struct nouveau_bo *nouveau_bo_alloc(struct nouveau_cli *, u64 *size, int *align, - u32 domain, u32 tile_mode, u32 tile_flags); + u32 domain, u32 tile_mode, u32 tile_flags, bool internal); int nouveau_bo_init(struct nouveau_bo *, u64 size, int align, u32 domain, struct sg_table *sg, struct dma_resv *robj); int nouveau_bo_new(struct nouveau_cli *, u64 size, int align, u32 domain, diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c index cc7c5b4a05fd..a06f8ad227ad 100644 --- a/drivers/gpu/drm/nouveau/nouveau_drm.c +++ b/drivers/gpu/drm/nouveau/nouveau_drm.c @@ -68,6 +68,9 @@ #include "nouveau_platform.h" #include "nouveau_svm.h" #include "nouveau_dmem.h" +#include "nouveau_exec.h" +#include "nouveau_uvmm.h" +#include "nouveau_sched.h" DECLARE_DYNDBG_CLASSMAP(drm_debug_classes, DD_CLASS_TYPE_DISJOINT_BITS, 0, "DRM_UT_CORE", @@ -190,6 +193,8 @@ nouveau_cli_fini(struct nouveau_cli *cli) WARN_ON(!list_empty(&cli->worker)); usif_client_fini(cli); + nouveau_uvmm_fini(&cli->uvmm); + nouveau_sched_entity_fini(&cli->sched_entity); nouveau_vmm_fini(&cli->svm); nouveau_vmm_fini(&cli->vmm); nvif_mmu_dtor(&cli->mmu); @@ -295,6 +300,12 @@ nouveau_cli_init(struct nouveau_drm *drm, const char *sname, } cli->mem = &mems[ret]; + + ret = nouveau_sched_entity_init(&cli->sched_entity, &drm->sched, + drm->sched_wq); + if (ret) + goto done; + return 0; done: if (ret) @@ -548,10 +559,14 @@ nouveau_drm_device_init(struct drm_device *dev) nvif_parent_ctor(&nouveau_parent, &drm->parent); drm->master.base.object.parent = &drm->parent; - ret = nouveau_cli_init(drm, "DRM-master", &drm->master); + ret = nouveau_sched_init(drm); if (ret) goto fail_alloc; + ret = nouveau_cli_init(drm, "DRM-master", &drm->master); + if (ret) + goto fail_sched; + ret = nouveau_cli_init(drm, "DRM", &drm->client); if (ret) goto fail_master; @@ -608,7 +623,6 @@ nouveau_drm_device_init(struct drm_device *dev) } return 0; - fail_dispinit: nouveau_display_destroy(dev); fail_dispctor: @@ -621,6 +635,8 @@ nouveau_drm_device_init(struct drm_device *dev) nouveau_cli_fini(&drm->client); fail_master: nouveau_cli_fini(&drm->master); +fail_sched: + nouveau_sched_fini(drm); fail_alloc: nvif_parent_dtor(&drm->parent); kfree(drm); @@ -672,6 +688,8 @@ 
nouveau_drm_device_fini(struct drm_device *dev) } mutex_unlock(&drm->clients_lock); + nouveau_sched_fini(drm); + nouveau_cli_fini(&drm->client); nouveau_cli_fini(&drm->master); nvif_parent_dtor(&drm->parent); @@ -1173,6 +1191,9 @@ nouveau_ioctls[] = { DRM_IOCTL_DEF_DRV(NOUVEAU_GEM_CPU_PREP, nouveau_gem_ioctl_cpu_prep, DRM_RENDER_ALLOW), DRM_IOCTL_DEF_DRV(NOUVEAU_GEM_CPU_FINI, nouveau_gem_ioctl_cpu_fini, DRM_RENDER_ALLOW), DRM_IOCTL_DEF_DRV(NOUVEAU_GEM_INFO, nouveau_gem_ioctl_info, DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(NOUVEAU_VM_INIT, nouveau_uvmm_ioctl_vm_init, DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(NOUVEAU_VM_BIND, nouveau_uvmm_ioctl_vm_bind, DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(NOUVEAU_EXEC, nouveau_exec_ioctl_exec, DRM_RENDER_ALLOW), }; long @@ -1220,6 +1241,8 @@ nouveau_driver_fops = { static struct drm_driver driver_stub = { .driver_features = DRIVER_GEM | + DRIVER_SYNCOBJ | DRIVER_SYNCOBJ_TIMELINE | + DRIVER_GEM_GPUVA | DRIVER_MODESET | DRIVER_RENDER, .open = nouveau_drm_open, diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h b/drivers/gpu/drm/nouveau/nouveau_drv.h index 20a7f31b9082..ab810b4e028b 100644 --- a/drivers/gpu/drm/nouveau/nouveau_drv.h +++ b/drivers/gpu/drm/nouveau/nouveau_drv.h @@ -10,8 +10,8 @@ #define DRIVER_DATE "20120801" #define DRIVER_MAJOR 1 -#define DRIVER_MINOR 3 -#define DRIVER_PATCHLEVEL 1 +#define DRIVER_MINOR 4 +#define DRIVER_PATCHLEVEL 0 /* * 1.1.1: @@ -63,7 +63,9 @@ struct platform_device; #include "nouveau_fence.h" #include "nouveau_bios.h" +#include "nouveau_sched.h" #include "nouveau_vmm.h" +#include "nouveau_uvmm.h" struct nouveau_drm_tile { struct nouveau_fence *fence; @@ -91,6 +93,10 @@ struct nouveau_cli { struct nvif_mmu mmu; struct nouveau_vmm vmm; struct nouveau_vmm svm; + struct nouveau_uvmm uvmm; + + struct nouveau_sched_entity sched_entity; + const struct nvif_mclass *mem; struct list_head head; @@ -112,15 +118,60 @@ struct nouveau_cli_work { struct dma_fence_cb cb; }; +static inline struct nouveau_uvmm * +nouveau_cli_uvmm(struct nouveau_cli *cli) +{ + if (!cli || !cli->uvmm.vmm.cli) + return NULL; + + return &cli->uvmm; +} + +static inline struct nouveau_uvmm * +nouveau_cli_uvmm_locked(struct nouveau_cli *cli) +{ + struct nouveau_uvmm *uvmm; + + mutex_lock(&cli->mutex); + uvmm = nouveau_cli_uvmm(cli); + mutex_unlock(&cli->mutex); + + return uvmm; +} + static inline struct nouveau_vmm * nouveau_cli_vmm(struct nouveau_cli *cli) { + struct nouveau_uvmm *uvmm; + + uvmm = nouveau_cli_uvmm(cli); + if (uvmm) + return &uvmm->vmm; + if (cli->svm.cli) return &cli->svm; return &cli->vmm; } +static inline void +__nouveau_cli_uvmm_disable(struct nouveau_cli *cli) +{ + struct nouveau_uvmm *uvmm; + + uvmm = nouveau_cli_uvmm(cli); + if (!uvmm) + cli->uvmm.disabled = true; +} + +static inline void +nouveau_cli_uvmm_disable(struct nouveau_cli *cli) +{ + mutex_lock(&cli->mutex); + __nouveau_cli_uvmm_disable(cli); + mutex_unlock(&cli->mutex); +} + void nouveau_cli_work_queue(struct nouveau_cli *, struct dma_fence *, struct nouveau_cli_work *); @@ -257,6 +308,10 @@ struct nouveau_drm { struct mutex lock; bool component_registered; } audio; + + struct drm_gpu_scheduler sched; + struct workqueue_struct *sched_wq; + }; static inline struct nouveau_drm * diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.c b/drivers/gpu/drm/nouveau/nouveau_exec.c new file mode 100644 index 000000000000..c511b8b6fa2f --- /dev/null +++ b/drivers/gpu/drm/nouveau/nouveau_exec.c @@ -0,0 +1,363 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright (c) 2022 Red Hat. 
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ *     Danilo Krummrich <dakr@redhat.com>
+ *
+ */
+
+#include
+
+#include "nouveau_drv.h"
+#include "nouveau_gem.h"
+#include "nouveau_mem.h"
+#include "nouveau_dma.h"
+#include "nouveau_exec.h"
+#include "nouveau_abi16.h"
+#include "nouveau_chan.h"
+#include "nouveau_sched.h"
+#include "nouveau_uvmm.h"
+
+/**
+ * DOC: Overview
+ *
+ * Nouveau's VM_BIND / EXEC UAPI consists of three ioctls: DRM_NOUVEAU_VM_INIT,
+ * DRM_NOUVEAU_VM_BIND and DRM_NOUVEAU_EXEC.
+ *
+ * In order to use the UAPI firstly a user client must initialize the VA space
+ * using the DRM_NOUVEAU_VM_INIT ioctl specifying which region of the VA space
+ * should be managed by the kernel and which by the UMD.
+ *
+ * The DRM_NOUVEAU_VM_BIND ioctl provides clients an interface to manage the
+ * userspace-manageable portion of the VA space. It provides operations to map
+ * and unmap memory. Mappings may be flagged as sparse. Sparse mappings are not
+ * backed by a GEM object and the kernel will ignore GEM handles provided
+ * alongside a sparse mapping.
+ *
+ * Userspace may request memory backed mappings either within or outside of the
+ * bounds (but not crossing those bounds) of a previously mapped sparse
+ * mapping. Subsequently requested memory backed mappings within a sparse
+ * mapping will take precedence over the corresponding range of the sparse
+ * mapping. If such memory backed mappings are unmapped the kernel will make
+ * sure that the corresponding sparse mapping will take their place again.
+ * Requests to unmap a sparse mapping that still contains memory backed
+ * mappings will result in those memory backed mappings being unmapped first.
+ *
+ * Unmap requests are not bound to the range of existing mappings and can even
+ * overlap the bounds of sparse mappings. For such a request the kernel will
+ * make sure to unmap all memory backed mappings within the given range,
+ * splitting up memory backed mappings which are only partially contained
+ * within the given range. Unmap requests with the sparse flag set must match
+ * the range of a previously mapped sparse mapping exactly though.
+ *
+ * While the kernel generally permits arbitrary sequences and ranges of memory
+ * backed mappings being mapped and unmapped, either within a single or
+ * multiple VM_BIND ioctl calls, there are some restrictions for sparse
+ * mappings.
+ *
+ * The kernel does not permit to:
+ *   - unmap non-existent sparse mappings
+ *   - unmap a sparse mapping and map a new sparse mapping overlapping the
+ *     range of the previously unmapped sparse mapping within the same
+ *     VM_BIND ioctl
+ *   - unmap a sparse mapping and map new memory backed mappings overlapping
+ *     the range of the previously unmapped sparse mapping within the same
+ *     VM_BIND ioctl
+ *
+ * When using the VM_BIND ioctl to request the kernel to map memory to a given
+ * virtual address in the GPU's VA space there is no guarantee that the actual
+ * mappings are created in the GPU's MMU. If the given memory is swapped out
+ * at the time the bind operation is executed the kernel will stash the mapping
+ * details into its internal allocator and create the actual MMU mappings once
+ * the memory is swapped back in. While this is transparent for userspace, it is
+ * guaranteed that all the backing memory is swapped back in and all the memory
+ * mappings, as requested by userspace previously, are actually mapped once the
+ * DRM_NOUVEAU_EXEC ioctl is called to submit an exec job.
+ *
+ * A VM_BIND job can be executed either synchronously or asynchronously. If
+ * executed asynchronously, userspace may provide a list of syncobjs this job
+ * will wait for and/or a list of syncobjs the kernel will signal once the
+ * VM_BIND job finished execution. If executed synchronously the ioctl will
+ * block until the bind job is finished. For synchronous jobs the kernel will
+ * not permit any syncobjs submitted to the kernel.
+ *
+ * To execute a push buffer the UAPI provides the DRM_NOUVEAU_EXEC ioctl. EXEC
+ * jobs are always executed asynchronously, and, equal to VM_BIND jobs, provide
+ * the option to synchronize them with syncobjs.
+ *
+ * Besides that, EXEC jobs can be scheduled for a specified channel to execute
+ * on.
+ *
+ * Since VM_BIND jobs update the GPU's VA space on job submit, EXEC jobs do
+ * have an up to date view of the VA space. However, the actual mappings might
+ * still be pending. Hence, EXEC jobs require to have the particular fences -
+ * of the corresponding VM_BIND jobs they depend on - attached to them.
+ */ + +static int +nouveau_exec_job_submit(struct nouveau_job *job) +{ + struct nouveau_exec_job *exec_job = to_nouveau_exec_job(job); + struct nouveau_cli *cli = exec_job->base.cli; + struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(cli); + struct drm_exec *exec = &job->exec; + struct drm_gem_object *obj; + unsigned long index; + int ret; + + ret = nouveau_fence_new(&exec_job->fence); + if (ret) + return ret; + + nouveau_uvmm_lock(uvmm); + drm_exec_while_not_all_locked(exec) { + DRM_GPUVA_ITER(it, &uvmm->umgr, 0); + struct drm_gpuva *va; + + drm_gpuva_iter_for_each(va, it) { + + if (unlikely(va == &uvmm->umgr.kernel_alloc_node)) + continue; + + ret = drm_exec_prepare_obj(exec, va->gem.obj, 1); + drm_exec_break_on_contention(exec); + if (ret == -EALREADY) { + continue; + } else if (ret) { + nouveau_uvmm_unlock(uvmm); + return ret; + } + } + } + nouveau_uvmm_unlock(uvmm); + + drm_exec_for_each_locked_object(exec, index, obj) { + struct nouveau_bo *nvbo = nouveau_gem_object(obj); + + ret = nouveau_bo_validate(nvbo, true, false); + if (ret) + return ret; + } + + return 0; +} + +static struct dma_fence * +nouveau_exec_job_run(struct nouveau_job *job) +{ + struct nouveau_exec_job *exec_job = to_nouveau_exec_job(job); + struct nouveau_channel *chan = exec_job->chan; + struct nouveau_fence *fence = exec_job->fence; + int i, ret; + + ret = nouveau_dma_wait(chan, exec_job->push.count + 1, 16); + if (ret) { + NV_PRINTK(err, job->cli, "nv50cal_space: %d\n", ret); + return ERR_PTR(ret); + } + + for (i = 0; i < exec_job->push.count; i++) { + nv50_dma_push(chan, exec_job->push.s[i].va, + exec_job->push.s[i].va_len); + } + + ret = nouveau_fence_emit(fence, chan); + if (ret) { + NV_PRINTK(err, job->cli, "error fencing pushbuf: %d\n", ret); + WIND_RING(chan); + return ERR_PTR(ret); + } + + exec_job->fence = NULL; + + return &fence->base; +} + +static void +nouveau_exec_job_free(struct nouveau_job *job) +{ + struct nouveau_exec_job *exec_job = to_nouveau_exec_job(job); + + nouveau_base_job_free(job); + + nouveau_fence_unref(&exec_job->fence); + kfree(exec_job->push.s); + kfree(exec_job); +} + +static enum drm_gpu_sched_stat +nouveau_exec_job_timeout(struct nouveau_job *job) +{ + struct nouveau_exec_job *exec_job = to_nouveau_exec_job(job); + struct nouveau_channel *chan = exec_job->chan; + + if (unlikely(!atomic_read(&chan->killed))) + nouveau_channel_kill(chan); + + NV_PRINTK(warn, job->cli, "job timeout, channel %d killed!\n", + chan->chid); + + nouveau_sched_entity_fini(job->entity); + + return DRM_GPU_SCHED_STAT_ENODEV; +} + +static struct nouveau_job_ops nouveau_exec_job_ops = { + .submit = nouveau_exec_job_submit, + .run = nouveau_exec_job_run, + .free = nouveau_exec_job_free, + .timeout = nouveau_exec_job_timeout, +}; + +int +nouveau_exec_job_init(struct nouveau_exec_job **pjob, + struct nouveau_exec_job_args *args) +{ + struct nouveau_exec_job *job; + int ret; + + job = *pjob = kzalloc(sizeof(*job), GFP_KERNEL); + if (!job) + return -ENOMEM; + + job->push.count = args->push.count; + job->push.s = kmemdup(args->push.s, + sizeof(*args->push.s) * + args->push.count, + GFP_KERNEL); + if (!job->push.s) { + ret = -ENOMEM; + goto err_free_job; + } + + job->base.ops = &nouveau_exec_job_ops; + job->base.resv_usage = DMA_RESV_USAGE_WRITE; + job->chan = args->chan; + + ret = nouveau_base_job_init(&job->base, &args->base); + if (ret) + goto err_free_pushs; + + return 0; + +err_free_pushs: + kfree(job->push.s); +err_free_job: + kfree(job); + *pjob = NULL; + + return ret; +} + +static int +nouveau_exec(struct 
nouveau_exec_job_args *args) +{ + struct nouveau_exec_job *job; + int ret; + + ret = nouveau_exec_job_init(&job, args); + if (ret) + return ret; + + ret = nouveau_job_submit(&job->base); + if (ret) + goto err_job_fini; + + return 0; + +err_job_fini: + nouveau_job_fini(&job->base); + return ret; +} + +int +nouveau_exec_ioctl_exec(struct drm_device *dev, + void *data, + struct drm_file *file_priv) +{ + struct nouveau_abi16 *abi16 = nouveau_abi16_get(file_priv); + struct nouveau_cli *cli = nouveau_cli(file_priv); + struct nouveau_abi16_chan *chan16; + struct nouveau_channel *chan = NULL; + struct nouveau_exec_job_args args = {}; + struct drm_nouveau_exec *req = data; + int ret = 0; + + if (unlikely(!abi16)) + return -ENOMEM; + + /* abi16 locks already */ + if (unlikely(!nouveau_cli_uvmm(cli))) + return nouveau_abi16_put(abi16, -ENOSYS); + + list_for_each_entry(chan16, &abi16->channels, head) { + if (chan16->chan->chid == req->channel) { + chan = chan16->chan; + break; + } + } + + if (!chan) + return nouveau_abi16_put(abi16, -ENOENT); + + if (unlikely(atomic_read(&chan->killed))) + return nouveau_abi16_put(abi16, -ENODEV); + + if (!chan->dma.ib_max) + return nouveau_abi16_put(abi16, -ENOSYS); + + if (unlikely(req->push_count == 0)) + goto out; + + if (unlikely(req->push_count > NOUVEAU_GEM_MAX_PUSH)) { + NV_PRINTK(err, cli, "pushbuf push count exceeds limit: %d max %d\n", + req->push_count, NOUVEAU_GEM_MAX_PUSH); + return nouveau_abi16_put(abi16, -EINVAL); + } + + args.push.count = req->push_count; + args.push.s = u_memcpya(req->push_ptr, req->push_count, + sizeof(*args.push.s)); + if (IS_ERR(args.push.s)) { + ret = PTR_ERR(args.push.s); + goto out; + } + + ret = nouveau_job_ucopy_syncs(&args.base, + req->wait_count, req->wait_ptr, + req->sig_count, req->sig_ptr); + if (ret) + goto out_free_pushs; + + args.base.sched_entity = &chan16->sched_entity; + args.base.file_priv = file_priv; + args.chan = chan; + + ret = nouveau_exec(&args); + if (ret) + goto out_free_syncs; + +out_free_syncs: + u_free(args.base.out_sync.s); + u_free(args.base.in_sync.s); +out_free_pushs: + u_free(args.push.s); +out: + return nouveau_abi16_put(abi16, ret); +} diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.h b/drivers/gpu/drm/nouveau/nouveau_exec.h new file mode 100644 index 000000000000..4eb7e4cae2f8 --- /dev/null +++ b/drivers/gpu/drm/nouveau/nouveau_exec.h @@ -0,0 +1,42 @@ +/* SPDX-License-Identifier: MIT */ + +#ifndef __NOUVEAU_EXEC_H__ +#define __NOUVEAU_EXEC_H__ + +#include + +#include "nouveau_drv.h" +#include "nouveau_sched.h" + +struct nouveau_exec_job_args { + struct nouveau_job_args base; + struct drm_exec exec; + struct nouveau_channel *chan; + + struct { + struct drm_nouveau_exec_push *s; + u32 count; + } push; +}; + +struct nouveau_exec_job { + struct nouveau_job base; + struct nouveau_fence *fence; + struct nouveau_channel *chan; + + struct { + struct drm_nouveau_exec_push *s; + u32 count; + } push; +}; + +#define to_nouveau_exec_job(job) \ + container_of((job), struct nouveau_exec_job, base) + +int nouveau_exec_job_init(struct nouveau_exec_job **job, + struct nouveau_exec_job_args *args); + +int nouveau_exec_ioctl_exec(struct drm_device *dev, void *data, + struct drm_file *file_priv); + +#endif diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c index 061cfd55217a..c58349ae762c 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.c +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c @@ -120,7 +120,11 @@ nouveau_gem_object_open(struct drm_gem_object *gem, struct drm_file 
*file_priv) goto out; } - ret = nouveau_vma_new(nvbo, vmm, &vma); + /* only create a VMA on binding */ + if (!nouveau_cli_uvmm(cli)) + ret = nouveau_vma_new(nvbo, vmm, &vma); + else + ret = 0; pm_runtime_mark_last_busy(dev); pm_runtime_put_autosuspend(dev); out: @@ -187,6 +191,9 @@ nouveau_gem_object_close(struct drm_gem_object *gem, struct drm_file *file_priv) if (vmm->vmm.object.oclass < NVIF_CLASS_VMM_NV50) return; + if (nouveau_cli_uvmm(cli)) + return; + ret = ttm_bo_reserve(&nvbo->bo, false, false, NULL); if (ret) return; @@ -231,7 +238,7 @@ nouveau_gem_new(struct nouveau_cli *cli, u64 size, int align, uint32_t domain, domain |= NOUVEAU_GEM_DOMAIN_CPU; nvbo = nouveau_bo_alloc(cli, &size, &align, domain, tile_mode, - tile_flags); + tile_flags, false); if (IS_ERR(nvbo)) return PTR_ERR(nvbo); @@ -279,13 +286,15 @@ nouveau_gem_info(struct drm_file *file_priv, struct drm_gem_object *gem, else rep->domain = NOUVEAU_GEM_DOMAIN_VRAM; rep->offset = nvbo->offset; - if (vmm->vmm.object.oclass >= NVIF_CLASS_VMM_NV50) { + if (vmm->vmm.object.oclass >= NVIF_CLASS_VMM_NV50 && + !nouveau_cli_uvmm(cli)) { vma = nouveau_vma_find(nvbo, vmm); if (!vma) return -EINVAL; rep->offset = vma->addr; - } + } else + rep->offset = 0; rep->size = nvbo->bo.base.size; rep->map_handle = drm_vma_node_offset_addr(&nvbo->bo.base.vma_node); @@ -310,6 +319,11 @@ nouveau_gem_ioctl_new(struct drm_device *dev, void *data, struct nouveau_bo *nvbo = NULL; int ret = 0; + /* If uvmm wasn't initialized until now disable it completely to prevent + * userspace from mixing up UAPIs. + */ + nouveau_cli_uvmm_disable(cli); + ret = nouveau_gem_new(cli, req->info.size, req->align, req->info.domain, req->info.tile_mode, req->info.tile_flags, &nvbo); @@ -715,6 +729,9 @@ nouveau_gem_ioctl_pushbuf(struct drm_device *dev, void *data, if (unlikely(!abi16)) return -ENOMEM; + if (unlikely(nouveau_cli_uvmm(cli))) + return -ENOSYS; + list_for_each_entry(temp, &abi16->channels, head) { if (temp->chan->chid == req->channel) { chan = temp->chan; diff --git a/drivers/gpu/drm/nouveau/nouveau_mem.h b/drivers/gpu/drm/nouveau/nouveau_mem.h index 76c86d8bb01e..5365a3d3a17f 100644 --- a/drivers/gpu/drm/nouveau/nouveau_mem.h +++ b/drivers/gpu/drm/nouveau/nouveau_mem.h @@ -35,4 +35,9 @@ int nouveau_mem_vram(struct ttm_resource *, bool contig, u8 page); int nouveau_mem_host(struct ttm_resource *, struct ttm_tt *); void nouveau_mem_fini(struct nouveau_mem *); int nouveau_mem_map(struct nouveau_mem *, struct nvif_vmm *, struct nvif_vma *); +int +nouveau_mem_map_fixed(struct nouveau_mem *mem, + struct nvif_vmm *vmm, + u8 kind, u64 addr, + u64 offset, u64 range); #endif diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c index f42c2b1b0363..6a883b9a799a 100644 --- a/drivers/gpu/drm/nouveau/nouveau_prime.c +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c @@ -50,7 +50,7 @@ struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev, dma_resv_lock(robj, NULL); nvbo = nouveau_bo_alloc(&drm->client, &size, &align, - NOUVEAU_GEM_DOMAIN_GART, 0, 0); + NOUVEAU_GEM_DOMAIN_GART, 0, 0, true); if (IS_ERR(nvbo)) { obj = ERR_CAST(nvbo); goto unlock; diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.c b/drivers/gpu/drm/nouveau/nouveau_sched.c new file mode 100644 index 000000000000..a27590d53eee --- /dev/null +++ b/drivers/gpu/drm/nouveau/nouveau_sched.c @@ -0,0 +1,494 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright (c) 2022 Red Hat. 
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ *     Danilo Krummrich
+ *
+ */
+
+#include
+#include
+#include
+
+#include "nouveau_drv.h"
+#include "nouveau_gem.h"
+#include "nouveau_mem.h"
+#include "nouveau_dma.h"
+#include "nouveau_exec.h"
+#include "nouveau_abi16.h"
+#include "nouveau_sched.h"
+
+/* FIXME
+ *
+ * We want to make sure that jobs currently executing can't be deferred by
+ * other jobs competing for the hardware. Otherwise we might end up with job
+ * timeouts just because of too many clients submitting too many jobs. Jobs
+ * shouldn't time out because of system load, but only because a job itself is
+ * too bulky.
+ *
+ * For now allow for up to 16 concurrent jobs in flight until we know how many
+ * rings the hardware can process in parallel.
+ */ +#define NOUVEAU_SCHED_HW_SUBMISSIONS 16 +#define NOUVEAU_SCHED_JOB_TIMEOUT_MS 10000 + +int +nouveau_job_ucopy_syncs(struct nouveau_job_args *args, + u32 inc, u64 ins, + u32 outc, u64 outs) +{ + struct drm_nouveau_sync **s; + int ret; + + if (inc) { + s = &args->in_sync.s; + + args->in_sync.count = inc; + *s = u_memcpya(ins, inc, sizeof(**s)); + if (IS_ERR(*s)) { + ret = PTR_ERR(*s); + goto err_out; + } + } + + if (outc) { + s = &args->out_sync.s; + + args->out_sync.count = outc; + *s = u_memcpya(outs, outc, sizeof(**s)); + if (IS_ERR(*s)) { + ret = PTR_ERR(*s); + goto err_free_ins; + } + } + + return 0; + +err_free_ins: + u_free(args->in_sync.s); +err_out: + return ret; +} + +int +nouveau_base_job_init(struct nouveau_job *job, + struct nouveau_job_args *args) +{ + struct nouveau_sched_entity *entity = args->sched_entity; + int ret; + + job->file_priv = args->file_priv; + job->cli = nouveau_cli(args->file_priv); + job->entity = entity; + + job->in_sync.count = args->in_sync.count; + if (job->in_sync.count) { + if (job->sync) + return -EINVAL; + + job->in_sync.data = kmemdup(args->in_sync.s, + sizeof(*args->in_sync.s) * + args->in_sync.count, + GFP_KERNEL); + if (!job->in_sync.data) + return -ENOMEM; + } + + job->out_sync.count = args->out_sync.count; + if (job->out_sync.count) { + if (job->sync) { + ret = -EINVAL; + goto err_free_in_sync; + } + + job->out_sync.data = kmemdup(args->out_sync.s, + sizeof(*args->out_sync.s) * + args->out_sync.count, + GFP_KERNEL); + if (!job->out_sync.data) { + ret = -ENOMEM; + goto err_free_in_sync; + } + + job->out_sync.objs = kcalloc(job->out_sync.count, + sizeof(*job->out_sync.objs), + GFP_KERNEL); + if (!job->out_sync.objs) { + ret = -ENOMEM; + goto err_free_out_sync; + } + + job->out_sync.chains = kcalloc(job->out_sync.count, + sizeof(*job->out_sync.chains), + GFP_KERNEL); + if (!job->out_sync.chains) { + ret = -ENOMEM; + goto err_free_objs; + } + + } + + ret = drm_sched_job_init(&job->base, &entity->base, NULL); + if (ret) + goto err_free_chains; + + job->state = NOUVEAU_JOB_INITIALIZED; + + return 0; + +err_free_chains: + kfree(job->out_sync.chains); +err_free_objs: + kfree(job->out_sync.objs); +err_free_out_sync: + kfree(job->out_sync.data); +err_free_in_sync: + kfree(job->in_sync.data); +return ret; +} + +void +nouveau_base_job_free(struct nouveau_job *job) +{ + kfree(job->in_sync.data); + kfree(job->out_sync.data); + kfree(job->out_sync.objs); + kfree(job->out_sync.chains); +} + +void nouveau_job_fini(struct nouveau_job *job) +{ + dma_fence_put(job->done_fence); + drm_sched_job_cleanup(&job->base); + job->ops->free(job); +} + +static int +sync_find_fence(struct nouveau_job *job, + struct drm_nouveau_sync *sync, + struct dma_fence **fence) +{ + u32 stype = sync->flags & DRM_NOUVEAU_SYNC_TYPE_MASK; + u64 point = 0; + int ret; + + if (stype != DRM_NOUVEAU_SYNC_SYNCOBJ && + stype != DRM_NOUVEAU_SYNC_TIMELINE_SYNCOBJ) + return -EOPNOTSUPP; + + if (stype == DRM_NOUVEAU_SYNC_TIMELINE_SYNCOBJ) + point = sync->timeline_value; + + ret = drm_syncobj_find_fence(job->file_priv, + sync->handle, point, + sync->flags, fence); + if (ret) + return ret; + + return 0; +} + +static int +nouveau_job_add_deps(struct nouveau_job *job) +{ + struct dma_fence *in_fence = NULL; + int ret, i; + + for (i = 0; i < job->in_sync.count; i++) { + struct drm_nouveau_sync *sync = &job->in_sync.data[i]; + + ret = sync_find_fence(job, sync, &in_fence); + if (ret) { + NV_PRINTK(warn, job->cli, + "Failed to find syncobj (-> in): handle=%d\n", + sync->handle); + return ret; + } + + ret 
= drm_sched_job_add_dependency(&job->base, in_fence); + if (ret) + return ret; + } + + return 0; +} + +static void +nouveau_job_fence_attach_cleanup(struct nouveau_job *job) +{ + int i; + + for (i = 0; i < job->out_sync.count; i++) { + struct drm_syncobj *obj = job->out_sync.objs[i]; + struct dma_fence_chain *chain = job->out_sync.chains[i]; + + if (obj) + drm_syncobj_put(obj); + + if (chain) + dma_fence_chain_free(chain); + } +} + +static int +nouveau_job_fence_attach_prepare(struct nouveau_job *job) +{ + int i, ret; + + for (i = 0; i < job->out_sync.count; i++) { + struct drm_nouveau_sync *sync = &job->out_sync.data[i]; + struct drm_syncobj **pobj = &job->out_sync.objs[i]; + struct dma_fence_chain **pchain = &job->out_sync.chains[i]; + u32 stype = sync->flags & DRM_NOUVEAU_SYNC_TYPE_MASK; + + if (stype != DRM_NOUVEAU_SYNC_SYNCOBJ && + stype != DRM_NOUVEAU_SYNC_TIMELINE_SYNCOBJ) { + ret = -EINVAL; + goto err_sync_cleanup; + } + + *pobj = drm_syncobj_find(job->file_priv, sync->handle); + if (!*pobj) { + NV_PRINTK(warn, job->cli, + "Failed to find syncobj (-> out): handle=%d\n", + sync->handle); + ret = -ENOENT; + goto err_sync_cleanup; + } + + if (stype == DRM_NOUVEAU_SYNC_TIMELINE_SYNCOBJ) { + *pchain = dma_fence_chain_alloc(); + if (!*pchain) { + ret = -ENOMEM; + goto err_sync_cleanup; + } + } + } + + return 0; + +err_sync_cleanup: + nouveau_job_fence_attach_cleanup(job); + return ret; +} + +static void +nouveau_job_fence_attach(struct nouveau_job *job) +{ + struct dma_fence *fence = job->done_fence; + int i; + + for (i = 0; i < job->out_sync.count; i++) { + struct drm_nouveau_sync *sync = &job->out_sync.data[i]; + struct drm_syncobj **pobj = &job->out_sync.objs[i]; + struct dma_fence_chain **pchain = &job->out_sync.chains[i]; + u32 stype = sync->flags & DRM_NOUVEAU_SYNC_TYPE_MASK; + + if (stype == DRM_NOUVEAU_SYNC_TIMELINE_SYNCOBJ) { + drm_syncobj_add_point(*pobj, *pchain, fence, + sync->timeline_value); + } else { + drm_syncobj_replace_fence(*pobj, fence); + } + + drm_syncobj_put(*pobj); + *pobj = NULL; + *pchain = NULL; + } +} + +static void +nouveau_job_resv_add_fence(struct nouveau_job *job) +{ + struct drm_exec *exec = &job->exec; + struct drm_gem_object *obj; + unsigned long index; + + drm_exec_for_each_locked_object(exec, index, obj) { + struct dma_resv *resv = obj->resv; + + dma_resv_add_fence(resv, job->done_fence, job->resv_usage); + } +} + +int +nouveau_job_submit(struct nouveau_job *job) +{ + struct nouveau_sched_entity *entity = to_nouveau_sched_entity(job->base.entity); + struct dma_fence *done_fence = NULL; + int ret; + + ret = nouveau_job_add_deps(job); + if (ret) + goto err; + + ret = nouveau_job_fence_attach_prepare(job); + if (ret) + goto err; + + /* Make sure the job appears on the sched_entity's queue in the same + * order as it was submitted. + */ + mutex_lock(&entity->mutex); + + drm_exec_init(&job->exec, true); + + /* Guarantee jobs we won't fail after the submit() callback + * returned successfully. + */ + if (job->ops->submit) { + ret = job->ops->submit(job); + if (ret) + goto err_cleanup; + } + + drm_sched_job_arm(&job->base); + job->done_fence = dma_fence_get(&job->base.s_fence->finished); + if (job->sync) + done_fence = dma_fence_get(job->done_fence); + + nouveau_job_fence_attach(job); + nouveau_job_resv_add_fence(job); + + drm_exec_fini(&job->exec); + + /* Set job state before pushing the job to the scheduler, + * such that we do not overwrite the job state set in run(). 
+ */
+	job->state = NOUVEAU_JOB_SUBMIT_SUCCESS;
+
+	drm_sched_entity_push_job(&job->base);
+
+	mutex_unlock(&entity->mutex);
+
+	if (done_fence) {
+		dma_fence_wait(done_fence, true);
+		dma_fence_put(done_fence);
+	}
+
+	return 0;
+
+err_cleanup:
+	drm_exec_fini(&job->exec);
+	mutex_unlock(&entity->mutex);
+	nouveau_job_fence_attach_cleanup(job);
+err:
+	job->state = NOUVEAU_JOB_SUBMIT_FAILED;
+	return ret;
+}
+
+bool
+nouveau_sched_entity_qwork(struct nouveau_sched_entity *entity,
+			   struct work_struct *work)
+{
+	return queue_work(entity->sched_wq, work);
+}
+
+static struct dma_fence *
+nouveau_job_run(struct nouveau_job *job)
+{
+	struct dma_fence *fence;
+
+	fence = job->ops->run(job);
+	if (unlikely(IS_ERR(fence)))
+		job->state = NOUVEAU_JOB_RUN_FAILED;
+	else
+		job->state = NOUVEAU_JOB_RUN_SUCCESS;
+
+	return fence;
+}
+
+static struct dma_fence *
+nouveau_sched_run_job(struct drm_sched_job *sched_job)
+{
+	struct nouveau_job *job = to_nouveau_job(sched_job);
+
+	return nouveau_job_run(job);
+}
+
+static enum drm_gpu_sched_stat
+nouveau_sched_timedout_job(struct drm_sched_job *sched_job)
+{
+	struct nouveau_job *job = to_nouveau_job(sched_job);
+
+	NV_PRINTK(warn, job->cli, "Job timed out.\n");
+
+	if (job->ops->timeout)
+		return job->ops->timeout(job);
+
+	return DRM_GPU_SCHED_STAT_ENODEV;
+}
+
+static void
+nouveau_sched_free_job(struct drm_sched_job *sched_job)
+{
+	struct nouveau_job *job = to_nouveau_job(sched_job);
+
+	nouveau_job_fini(job);
+}
+
+int nouveau_sched_entity_init(struct nouveau_sched_entity *entity,
+			      struct drm_gpu_scheduler *sched,
+			      struct workqueue_struct *sched_wq)
+{
+	mutex_init(&entity->mutex);
+	spin_lock_init(&entity->job.list.lock);
+	INIT_LIST_HEAD(&entity->job.list.head);
+	init_waitqueue_head(&entity->job.wq);
+
+	entity->sched_wq = sched_wq;
+	return drm_sched_entity_init(&entity->base,
+				     DRM_SCHED_PRIORITY_NORMAL,
+				     &sched, 1, NULL);
+}
+
+void
+nouveau_sched_entity_fini(struct nouveau_sched_entity *entity)
+{
+	drm_sched_entity_destroy(&entity->base);
+}
+
+static const struct drm_sched_backend_ops nouveau_sched_ops = {
+	.run_job = nouveau_sched_run_job,
+	.timedout_job = nouveau_sched_timedout_job,
+	.free_job = nouveau_sched_free_job,
+};
+
+int nouveau_sched_init(struct nouveau_drm *drm)
+{
+	struct drm_gpu_scheduler *sched = &drm->sched;
+	long job_hang_limit = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
+
+	drm->sched_wq = create_singlethread_workqueue("nouveau_sched_wq");
+	if (!drm->sched_wq)
+		return -ENOMEM;
+
+	return drm_sched_init(sched, &nouveau_sched_ops,
+			      NOUVEAU_SCHED_HW_SUBMISSIONS, 0, job_hang_limit,
+			      NULL, NULL, "nouveau_sched", drm->dev->dev);
+}
+
+void nouveau_sched_fini(struct nouveau_drm *drm)
+{
+	destroy_workqueue(drm->sched_wq);
+	drm_sched_fini(&drm->sched);
+}
diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.h b/drivers/gpu/drm/nouveau/nouveau_sched.h
new file mode 100644
index 000000000000..662b420c5791
--- /dev/null
+++ b/drivers/gpu/drm/nouveau/nouveau_sched.h
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: MIT */
+
+#ifndef NOUVEAU_SCHED_H
+#define NOUVEAU_SCHED_H
+
+#include
+
+#include
+#include
+
+#include "nouveau_drv.h"
+
+#define to_nouveau_job(sched_job)		\
+	container_of((sched_job), struct nouveau_job, base)
+
+enum nouveau_job_state {
+	NOUVEAU_JOB_UNINITIALIZED = 0,
+	NOUVEAU_JOB_INITIALIZED,
+	NOUVEAU_JOB_SUBMIT_SUCCESS,
+	NOUVEAU_JOB_SUBMIT_FAILED,
+	NOUVEAU_JOB_RUN_SUCCESS,
+	NOUVEAU_JOB_RUN_FAILED,
+};
+
+struct nouveau_job_args {
+	struct drm_file *file_priv;
+	struct nouveau_sched_entity
*sched_entity; + + struct { + struct drm_nouveau_sync *s; + u32 count; + } in_sync; + + struct { + struct drm_nouveau_sync *s; + u32 count; + } out_sync; +}; + +struct nouveau_job { + struct drm_sched_job base; + + enum nouveau_job_state state; + + struct nouveau_sched_entity *entity; + + struct drm_file *file_priv; + struct nouveau_cli *cli; + + struct drm_exec exec; + enum dma_resv_usage resv_usage; + struct dma_fence *done_fence; + + bool sync; + + struct { + struct drm_nouveau_sync *data; + u32 count; + } in_sync; + + struct { + struct drm_nouveau_sync *data; + struct drm_syncobj **objs; + struct dma_fence_chain **chains; + u32 count; + } out_sync; + + struct nouveau_job_ops { + int (*submit)(struct nouveau_job *); + struct dma_fence *(*run)(struct nouveau_job *); + void (*free)(struct nouveau_job *); + enum drm_gpu_sched_stat (*timeout)(struct nouveau_job *); + } *ops; +}; + +int nouveau_job_ucopy_syncs(struct nouveau_job_args *args, + u32 inc, u64 ins, + u32 outc, u64 outs); + +int nouveau_base_job_init(struct nouveau_job *job, + struct nouveau_job_args *args); +void nouveau_base_job_free(struct nouveau_job *job); + +int nouveau_job_submit(struct nouveau_job *job); +void nouveau_job_fini(struct nouveau_job *job); + +#define to_nouveau_sched_entity(entity) \ + container_of((entity), struct nouveau_sched_entity, base) + +struct nouveau_sched_entity { + struct drm_sched_entity base; + struct mutex mutex; + + struct workqueue_struct *sched_wq; + + struct { + struct { + struct list_head head; + spinlock_t lock; + } list; + struct wait_queue_head wq; + } job; +}; + +int nouveau_sched_entity_init(struct nouveau_sched_entity *entity, + struct drm_gpu_scheduler *sched, + struct workqueue_struct *sched_wq); +void nouveau_sched_entity_fini(struct nouveau_sched_entity *entity); + +bool nouveau_sched_entity_qwork(struct nouveau_sched_entity *entity, + struct work_struct *work); + +int nouveau_sched_init(struct nouveau_drm *drm); +void nouveau_sched_fini(struct nouveau_drm *drm); + +#endif diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c new file mode 100644 index 000000000000..ef0effc59f41 --- /dev/null +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c @@ -0,0 +1,1836 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright (c) 2022 Red Hat. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ *
+ * Authors:
+ *     Danilo Krummrich
+ *
+ */
+
+/*
+ * Locking:
+ *
+ * The uvmm mutex protects any operations on the GPU VA space provided by the
+ * DRM GPU VA manager.
+ *
+ * The DRM GEM GPUVA lock protects a GEM's GPUVA list. It also protects single
+ * map/unmap operations against a BO move, which itself walks the GEM's GPUVA
+ * list in order to map/unmap its entries.
+ *
+ * We'd also need to protect the DRM_GPUVA_EVICTED flag for each individual
+ * GPUVA, however this isn't necessary since any read or write to this flag
+ * happens when we already took the DRM GEM GPUVA lock of the backing GEM of
+ * the particular GPUVA.
+ */
+
+#include "nouveau_drv.h"
+#include "nouveau_gem.h"
+#include "nouveau_mem.h"
+#include "nouveau_uvmm.h"
+
+#include
+#include
+
+#include
+#include
+#include
+
+#define NOUVEAU_VA_SPACE_BITS	47 /* FIXME */
+#define NOUVEAU_VA_SPACE_START	0x0
+#define NOUVEAU_VA_SPACE_END	(1ULL << NOUVEAU_VA_SPACE_BITS)
+
+#define list_last_op(_ops) list_last_entry(_ops, struct bind_job_op, entry)
+#define list_prev_op(_op) list_prev_entry(_op, entry)
+#define list_for_each_op(_op, _ops) list_for_each_entry(_op, _ops, entry)
+#define list_for_each_op_from_reverse(_op, _ops) \
+	list_for_each_entry_from_reverse(_op, _ops, entry)
+#define list_for_each_op_safe(_op, _n, _ops) list_for_each_entry_safe(_op, _n, _ops, entry)
+
+enum vm_bind_op {
+	OP_MAP = DRM_NOUVEAU_VM_BIND_OP_MAP,
+	OP_UNMAP = DRM_NOUVEAU_VM_BIND_OP_UNMAP,
+	OP_MAP_SPARSE,
+	OP_UNMAP_SPARSE,
+};
+
+struct nouveau_uvma_prealloc {
+	struct nouveau_uvma *map;
+	struct nouveau_uvma *prev;
+	struct nouveau_uvma *next;
+};
+
+struct bind_job_op {
+	struct list_head entry;
+
+	enum vm_bind_op op;
+	u32 flags;
+
+	struct {
+		u64 addr;
+		u64 range;
+	} va;
+
+	struct {
+		u32 handle;
+		u64 offset;
+		struct drm_gem_object *obj;
+	} gem;
+
+	struct nouveau_uvma_region *reg;
+	struct nouveau_uvma_prealloc new;
+	struct drm_gpuva_ops *ops;
+};
+
+struct uvmm_map_args {
+	struct nouveau_uvma_region *region;
+	u64 addr;
+	u64 range;
+	u8 kind;
+};
+
+static int
+nouveau_uvmm_vmm_sparse_ref(struct nouveau_uvmm *uvmm,
+			    u64 addr, u64 range)
+{
+	struct nvif_vmm *vmm = &uvmm->vmm.vmm;
+
+	return nvif_vmm_raw_sparse(vmm, addr, range, true);
+}
+
+static int
+nouveau_uvmm_vmm_sparse_unref(struct nouveau_uvmm *uvmm,
+			      u64 addr, u64 range)
+{
+	struct nvif_vmm *vmm = &uvmm->vmm.vmm;
+
+	return nvif_vmm_raw_sparse(vmm, addr, range, false);
+}
+
+static int
+nouveau_uvmm_vmm_get(struct nouveau_uvmm *uvmm,
+		     u64 addr, u64 range)
+{
+	struct nvif_vmm *vmm = &uvmm->vmm.vmm;
+
+	return nvif_vmm_raw_get(vmm, addr, range, PAGE_SHIFT);
+}
+
+static int
+nouveau_uvmm_vmm_put(struct nouveau_uvmm *uvmm,
+		     u64 addr, u64 range)
+{
+	struct nvif_vmm *vmm = &uvmm->vmm.vmm;
+
+	return nvif_vmm_raw_put(vmm, addr, range, PAGE_SHIFT);
+}
+
+static int
+nouveau_uvmm_vmm_unmap(struct nouveau_uvmm *uvmm,
+		       u64 addr, u64 range, bool sparse)
+{
+	struct nvif_vmm *vmm = &uvmm->vmm.vmm;
+
+	return nvif_vmm_raw_unmap(vmm, addr, range, PAGE_SHIFT, sparse);
+}
+
+static int
+nouveau_uvmm_vmm_map(struct nouveau_uvmm *uvmm,
+		     u64 addr, u64 range,
+		     u64 bo_offset, u8 kind,
+		     struct nouveau_mem *mem)
+{
+	struct nvif_vmm *vmm = &uvmm->vmm.vmm;
+	union {
+		struct gf100_vmm_map_v0 gf100;
+	} args;
+	u32 argc = 0;
+
+	switch (vmm->object.oclass) {
+	case NVIF_CLASS_VMM_GF100:
+	case NVIF_CLASS_VMM_GM200:
+	case NVIF_CLASS_VMM_GP100:
+		args.gf100.version = 0;
+		if (mem->mem.type & NVIF_MEM_VRAM)
+			args.gf100.vol = 0;
+		else
+			args.gf100.vol = 1;
+		args.gf100.ro =
0; + args.gf100.priv = 0; + args.gf100.kind = kind; + argc = sizeof(args.gf100); + break; + default: + WARN_ON(1); + return -ENOSYS; + } + + return nvif_vmm_raw_map(vmm, addr, range, PAGE_SHIFT, + &args, argc, + &mem->mem, bo_offset); +} + +static int +nouveau_uvma_region_sparse_unref(struct nouveau_uvma_region *reg) +{ + u64 addr = reg->va.addr; + u64 range = reg->va.range; + + return nouveau_uvmm_vmm_sparse_unref(reg->uvmm, addr, range); +} + +static int +nouveau_uvma_vmm_put(struct nouveau_uvma *uvma) +{ + u64 addr = uvma->va.va.addr; + u64 range = uvma->va.va.range; + + return nouveau_uvmm_vmm_put(uvma->uvmm, addr, range); +} + +static int +nouveau_uvma_map(struct nouveau_uvma *uvma, + struct nouveau_mem *mem) +{ + u64 addr = uvma->va.va.addr; + u64 offset = uvma->va.gem.offset; + u64 range = uvma->va.va.range; + + return nouveau_uvmm_vmm_map(uvma->uvmm, addr, range, + offset, uvma->kind, mem); +} + +static int +nouveau_uvma_unmap(struct nouveau_uvma *uvma) +{ + u64 addr = uvma->va.va.addr; + u64 range = uvma->va.va.range; + bool sparse = !!uvma->region; + + if (drm_gpuva_evicted(&uvma->va)) + return 0; + + return nouveau_uvmm_vmm_unmap(uvma->uvmm, addr, range, sparse); +} + +static int +nouveau_uvma_alloc(struct nouveau_uvma **puvma) +{ + *puvma = kzalloc(sizeof(**puvma), GFP_KERNEL); + if (!*puvma) + return -ENOMEM; + + return 0; +} + +static void +nouveau_uvma_free(struct nouveau_uvma *uvma) +{ + kfree(uvma); +} + +static int +__nouveau_uvma_insert(struct nouveau_uvmm *uvmm, + struct nouveau_uvma *uvma) +{ + return drm_gpuva_insert(&uvmm->umgr, &uvma->va); +} + +static int +nouveau_uvma_insert(struct nouveau_uvmm *uvmm, + struct nouveau_uvma *uvma, + struct nouveau_uvma_region *region, + struct drm_gem_object *obj, + u64 bo_offset, u64 addr, + u64 range, u8 kind) +{ + int ret; + + uvma->uvmm = uvmm; + uvma->region = region; + uvma->kind = kind; + uvma->va.va.addr = addr; + uvma->va.va.range = range; + uvma->va.gem.offset = bo_offset; + uvma->va.gem.obj = obj; + + ret = __nouveau_uvma_insert(uvmm, uvma); + if (ret) + return ret; + + return 0; +} + +static void +nouveau_uvma_remove(struct nouveau_uvma *uvma) +{ + drm_gpuva_remove(&uvma->va); +} + +static void +nouveau_uvma_gem_get(struct nouveau_uvma *uvma) +{ + drm_gem_object_get(uvma->va.gem.obj); +} + +static void +nouveau_uvma_gem_put(struct nouveau_uvma *uvma) +{ + drm_gem_object_put(uvma->va.gem.obj); +} + +static int +nouveau_uvma_region_alloc(struct nouveau_uvma_region **preg) +{ + *preg = kzalloc(sizeof(**preg), GFP_KERNEL); + if (!*preg) + return -ENOMEM; + + kref_init(&(*preg)->kref); + + return 0; +} + +static void +nouveau_uvma_region_free(struct kref *kref) +{ + struct nouveau_uvma_region *reg = + container_of(kref, struct nouveau_uvma_region, kref); + + kfree(reg); +} + +static void +nouveau_uvma_region_get(struct nouveau_uvma_region *reg) +{ + kref_get(®->kref); +} + +static void +nouveau_uvma_region_put(struct nouveau_uvma_region *reg) +{ + kref_put(®->kref, nouveau_uvma_region_free); +} + +static int +__nouveau_uvma_region_insert(struct nouveau_uvmm *uvmm, + struct nouveau_uvma_region *reg) +{ + u64 addr = reg->va.addr; + u64 range = reg->va.range; + u64 last = addr + range - 1; + MA_STATE(mas, &uvmm->region_mt, addr, addr); + + if (unlikely(mas_walk(&mas))) { + mas_unlock(&mas); + return -EEXIST; + } + + if (unlikely(mas.last < last)) { + mas_unlock(&mas); + return -EEXIST; + } + + mas.index = addr; + mas.last = last; + + mas_store_gfp(&mas, reg, GFP_KERNEL); + + reg->uvmm = uvmm; + + return 0; +} + +static int 
+nouveau_uvma_region_insert(struct nouveau_uvmm *uvmm, + struct nouveau_uvma_region *reg, + u64 addr, u64 range) +{ + int ret; + + reg->uvmm = uvmm; + reg->va.addr = addr; + reg->va.range = range; + + ret = __nouveau_uvma_region_insert(uvmm, reg); + if (ret) + return ret; + + return 0; +} + +static void +nouveau_uvma_region_remove(struct nouveau_uvma_region *reg) +{ + struct nouveau_uvmm *uvmm = reg->uvmm; + MA_STATE(mas, &uvmm->region_mt, reg->va.addr, 0); + + mas_erase(&mas); +} + +static int +nouveau_uvma_region_create(struct nouveau_uvmm *uvmm, + u64 addr, u64 range) +{ + struct nouveau_uvma_region *reg; + int ret; + + if (!drm_gpuva_interval_empty(&uvmm->umgr, addr, range)) + return -ENOSPC; + + ret = nouveau_uvma_region_alloc(®); + if (ret) + return ret; + + ret = nouveau_uvma_region_insert(uvmm, reg, addr, range); + if (ret) + goto err_free_region; + + ret = nouveau_uvmm_vmm_sparse_ref(uvmm, addr, range); + if (ret) + goto err_region_remove; + + return 0; + +err_region_remove: + nouveau_uvma_region_remove(reg); +err_free_region: + nouveau_uvma_region_put(reg); + return ret; +} + +static struct nouveau_uvma_region * +nouveau_uvma_region_find_first(struct nouveau_uvmm *uvmm, + u64 addr, u64 range) +{ + MA_STATE(mas, &uvmm->region_mt, addr, 0); + + return mas_find(&mas, addr + range - 1); +} + +static struct nouveau_uvma_region * +nouveau_uvma_region_find(struct nouveau_uvmm *uvmm, + u64 addr, u64 range) +{ + struct nouveau_uvma_region *reg; + + reg = nouveau_uvma_region_find_first(uvmm, addr, range); + if (!reg) + return NULL; + + if (reg->va.addr != addr || + reg->va.range != range) + return NULL; + + return reg; +} + +static bool +nouveau_uvma_region_empty(struct nouveau_uvma_region *reg) +{ + struct nouveau_uvmm *uvmm = reg->uvmm; + + return drm_gpuva_interval_empty(&uvmm->umgr, + reg->va.addr, + reg->va.range); +} + +static int +__nouveau_uvma_region_destroy(struct nouveau_uvma_region *reg) +{ + struct nouveau_uvmm *uvmm = reg->uvmm; + u64 addr = reg->va.addr; + u64 range = reg->va.range; + + if (!nouveau_uvma_region_empty(reg)) + return -EBUSY; + + nouveau_uvma_region_remove(reg); + nouveau_uvmm_vmm_sparse_unref(uvmm, addr, range); + nouveau_uvma_region_put(reg); + + return 0; +} + +static int +nouveau_uvma_region_destroy(struct nouveau_uvmm *uvmm, + u64 addr, u64 range) +{ + struct nouveau_uvma_region *reg; + + reg = nouveau_uvma_region_find(uvmm, addr, range); + if (!reg) + return -ENOENT; + + return __nouveau_uvma_region_destroy(reg); +} + +static void +nouveau_uvma_region_dirty(struct nouveau_uvma_region *reg) +{ + + init_completion(®->complete); + reg->dirty = true; +} + +static void +nouveau_uvma_region_complete(struct nouveau_uvma_region *reg) +{ + complete_all(®->complete); +} + +static void +op_map_prepare_unwind(struct nouveau_uvma *uvma) +{ + nouveau_uvma_gem_put(uvma); + nouveau_uvma_remove(uvma); + nouveau_uvma_free(uvma); +} + +static void +op_unmap_prepare_unwind(struct drm_gpuva *va) +{ + drm_gpuva_insert(va->mgr, va); +} + +static void +uvmm_sm_prepare_unwind(struct nouveau_uvmm *uvmm, + struct nouveau_uvma_prealloc *new, + struct drm_gpuva_ops *ops, + struct drm_gpuva_op *last, + struct uvmm_map_args *args) +{ + struct drm_gpuva_op *op = last; + u64 vmm_get_start = args ? args->addr : 0; + u64 vmm_get_end = args ? args->addr + args->range : 0; + + /* Unwind GPUVA space. 
*/ + drm_gpuva_for_each_op_from_reverse(op, ops) { + switch (op->op) { + case DRM_GPUVA_OP_MAP: + op_map_prepare_unwind(new->map); + break; + case DRM_GPUVA_OP_REMAP: { + struct drm_gpuva_op_remap *r = &op->remap; + + if (r->next) + op_map_prepare_unwind(new->next); + + if (r->prev) + op_map_prepare_unwind(new->prev); + + op_unmap_prepare_unwind(r->unmap->va); + break; + } + case DRM_GPUVA_OP_UNMAP: + op_unmap_prepare_unwind(op->unmap.va); + break; + default: + break; + } + } + + /* Unmap operation don't allocate page tables, hence skip the following + * page table unwind. + */ + if (!args) + return; + + drm_gpuva_for_each_op(op, ops) { + switch (op->op) { + case DRM_GPUVA_OP_MAP: { + u64 vmm_get_range = vmm_get_end - vmm_get_start; + + if (vmm_get_range) + nouveau_uvmm_vmm_put(uvmm, vmm_get_start, + vmm_get_range); + break; + } + case DRM_GPUVA_OP_REMAP: { + struct drm_gpuva_op_remap *r = &op->remap; + struct drm_gpuva *va = r->unmap->va; + u64 ustart = va->va.addr; + u64 urange = va->va.range; + u64 uend = ustart + urange; + + if (r->prev) + vmm_get_start = uend; + + if (r->next) + vmm_get_end = ustart; + + if (r->prev && r->next) + vmm_get_start = vmm_get_end = 0; + + break; + } + case DRM_GPUVA_OP_UNMAP: { + struct drm_gpuva_op_unmap *u = &op->unmap; + struct drm_gpuva *va = u->va; + u64 ustart = va->va.addr; + u64 urange = va->va.range; + u64 uend = ustart + urange; + + /* Nothing to do for mappings we merge with. */ + if (uend == vmm_get_start || + ustart == vmm_get_end) + break; + + if (ustart > vmm_get_start) { + u64 vmm_get_range = ustart - vmm_get_start; + + nouveau_uvmm_vmm_put(uvmm, vmm_get_start, + vmm_get_range); + } + vmm_get_start = uend; + break; + } + default: + break; + } + + if (op == last) + break; + } +} + +static void +nouveau_uvmm_sm_map_prepare_unwind(struct nouveau_uvmm *uvmm, + struct nouveau_uvma_prealloc *new, + struct drm_gpuva_ops *ops, + u64 addr, u64 range) +{ + struct drm_gpuva_op *last = drm_gpuva_last_op(ops); + struct uvmm_map_args args = { + .addr = addr, + .range = range, + }; + + uvmm_sm_prepare_unwind(uvmm, new, ops, last, &args); +} + +static void +nouveau_uvmm_sm_unmap_prepare_unwind(struct nouveau_uvmm *uvmm, + struct nouveau_uvma_prealloc *new, + struct drm_gpuva_ops *ops) +{ + struct drm_gpuva_op *last = drm_gpuva_last_op(ops); + + uvmm_sm_prepare_unwind(uvmm, new, ops, last, NULL); +} + +static int +op_map_prepare(struct nouveau_uvmm *uvmm, + struct nouveau_uvma **puvma, + struct drm_gpuva_op_map *m, + struct uvmm_map_args *args) +{ + struct nouveau_uvma *uvma; + int ret; + + ret = nouveau_uvma_alloc(&uvma); + if (ret) + goto err; + + ret = nouveau_uvma_insert(uvmm, uvma, args->region, + m->gem.obj, m->gem.offset, + m->va.addr, m->va.range, + args->kind); + if (ret) + goto err_free_uvma; + + /* Keep a reference until this uvma is destroyed. */ + nouveau_uvma_gem_get(uvma); + + *puvma = uvma; + return 0; + +err_free_uvma: + nouveau_uvma_free(uvma); +err: + *puvma = NULL; + return ret; +} + +static void +op_unmap_prepare(struct drm_gpuva_op_unmap *u) +{ + struct nouveau_uvma *uvma = uvma_from_va(u->va); + + nouveau_uvma_remove(uvma); +} + +static int +uvmm_sm_prepare(struct nouveau_uvmm *uvmm, + struct nouveau_uvma_prealloc *new, + struct drm_gpuva_ops *ops, + struct uvmm_map_args *args) +{ + struct drm_gpuva_op *op; + u64 vmm_get_start = args ? args->addr : 0; + u64 vmm_get_end = args ? 
args->addr + args->range : 0; + int ret; + + drm_gpuva_for_each_op(op, ops) { + switch (op->op) { + case DRM_GPUVA_OP_MAP: { + u64 vmm_get_range = vmm_get_end - vmm_get_start; + + ret = op_map_prepare(uvmm, &new->map, &op->map, args); + if (ret) + goto unwind; + + if (args && vmm_get_range) { + ret = nouveau_uvmm_vmm_get(uvmm, vmm_get_start, + vmm_get_range); + if (ret) { + op_map_prepare_unwind(new->map); + goto unwind; + } + } + break; + } + case DRM_GPUVA_OP_REMAP: { + struct drm_gpuva_op_remap *r = &op->remap; + struct drm_gpuva *va = r->unmap->va; + struct uvmm_map_args remap_args = { + .kind = uvma_from_va(va)->kind, + }; + u64 ustart = va->va.addr; + u64 urange = va->va.range; + u64 uend = ustart + urange; + + op_unmap_prepare(r->unmap); + + if (r->prev) { + ret = op_map_prepare(uvmm, &new->prev, r->prev, + &remap_args); + if (ret) + goto unwind; + + if (args) + vmm_get_start = uend; + } + + if (r->next) { + ret = op_map_prepare(uvmm, &new->next, r->next, + &remap_args); + if (ret) { + if (r->prev) + op_map_prepare_unwind(new->prev); + goto unwind; + } + + if (args) + vmm_get_end = ustart; + } + + if (args && (r->prev && r->next)) + vmm_get_start = vmm_get_end = 0; + + break; + } + case DRM_GPUVA_OP_UNMAP: { + struct drm_gpuva_op_unmap *u = &op->unmap; + struct drm_gpuva *va = u->va; + u64 ustart = va->va.addr; + u64 urange = va->va.range; + u64 uend = ustart + urange; + + op_unmap_prepare(u); + + if (!args) + break; + + /* Nothing to do for mappings we merge with. */ + if (uend == vmm_get_start || + ustart == vmm_get_end) + break; + + if (ustart > vmm_get_start) { + u64 vmm_get_range = ustart - vmm_get_start; + + ret = nouveau_uvmm_vmm_get(uvmm, vmm_get_start, + vmm_get_range); + if (ret) { + op_unmap_prepare_unwind(va); + goto unwind; + } + } + vmm_get_start = uend; + + break; + } + default: + ret = -EINVAL; + goto unwind; + } + } + + return 0; + +unwind: + if (op != drm_gpuva_first_op(ops)) + uvmm_sm_prepare_unwind(uvmm, new, ops, + drm_gpuva_prev_op(op), + args); + return ret; +} + +static int +nouveau_uvmm_sm_map_prepare(struct nouveau_uvmm *uvmm, + struct nouveau_uvma_prealloc *new, + struct nouveau_uvma_region *region, + struct drm_gpuva_ops *ops, + u64 addr, u64 range, u8 kind) +{ + struct uvmm_map_args args = { + .region = region, + .addr = addr, + .range = range, + .kind = kind, + }; + + return uvmm_sm_prepare(uvmm, new, ops, &args); +} + +static int +nouveau_uvmm_sm_unmap_prepare(struct nouveau_uvmm *uvmm, + struct nouveau_uvma_prealloc *new, + struct drm_gpuva_ops *ops) +{ + return uvmm_sm_prepare(uvmm, new, ops, NULL); +} + +static struct drm_gem_object * +op_gem_obj(struct drm_gpuva_op *op) +{ + switch (op->op) { + case DRM_GPUVA_OP_MAP: + return op->map.gem.obj; + case DRM_GPUVA_OP_REMAP: + return op->remap.unmap->va->gem.obj; + case DRM_GPUVA_OP_UNMAP: + return op->unmap.va->gem.obj; + default: + WARN(1, "Unknown operation.\n"); + return NULL; + } +} + +static void +op_map(struct nouveau_uvma *uvma) +{ + struct nouveau_bo *nvbo = nouveau_gem_object(uvma->va.gem.obj); + + nouveau_uvma_map(uvma, nouveau_mem(nvbo->bo.resource)); + drm_gpuva_link(&uvma->va); +} + +static void +op_unmap(struct drm_gpuva_op_unmap *u) +{ + struct drm_gpuva *va = u->va; + struct nouveau_uvma *uvma = uvma_from_va(va); + + /* nouveau_uvma_unmap() does not unmap if backing BO is evicted. 
*/ + if (!u->keep) + nouveau_uvma_unmap(uvma); + drm_gpuva_unlink(va); +} + +static void +op_unmap_range(struct drm_gpuva_op_unmap *u, + u64 addr, u64 range) +{ + struct nouveau_uvma *uvma = uvma_from_va(u->va); + bool sparse = !!uvma->region; + + if (!drm_gpuva_evicted(u->va)) + nouveau_uvmm_vmm_unmap(uvma->uvmm, addr, range, sparse); + + drm_gpuva_unlink(u->va); +} + +static void +op_remap(struct drm_gpuva_op_remap *r, + struct nouveau_uvma_prealloc *new) +{ + struct drm_gpuva_op_unmap *u = r->unmap; + struct nouveau_uvma *uvma = uvma_from_va(u->va); + u64 addr = uvma->va.va.addr; + u64 range = uvma->va.va.range; + + if (r->prev) { + addr = r->prev->va.addr + r->prev->va.range; + drm_gpuva_link(&new->prev->va); + } + + if (r->next) { + range = r->next->va.addr - addr; + drm_gpuva_link(&new->next->va); + } + + op_unmap_range(u, addr, range); +} + +static int +uvmm_sm(struct nouveau_uvmm *uvmm, + struct nouveau_uvma_prealloc *new, + struct drm_gpuva_ops *ops) +{ + struct drm_gpuva_op *op; + + drm_gpuva_for_each_op(op, ops) { + struct drm_gem_object *obj = op_gem_obj(op); + + if (!obj) + continue; + + drm_gem_gpuva_lock(obj); + switch (op->op) { + case DRM_GPUVA_OP_MAP: + op_map(new->map); + break; + case DRM_GPUVA_OP_REMAP: + op_remap(&op->remap, new); + break; + case DRM_GPUVA_OP_UNMAP: + op_unmap(&op->unmap); + break; + default: + break; + } + drm_gem_gpuva_unlock(obj); + } + + return 0; +} + +static int +nouveau_uvmm_sm_map(struct nouveau_uvmm *uvmm, + struct nouveau_uvma_prealloc *new, + struct drm_gpuva_ops *ops) +{ + return uvmm_sm(uvmm, new, ops); +} + +static int +nouveau_uvmm_sm_unmap(struct nouveau_uvmm *uvmm, + struct nouveau_uvma_prealloc *new, + struct drm_gpuva_ops *ops) +{ + return uvmm_sm(uvmm, new, ops); +} + +static void +uvmm_sm_cleanup(struct nouveau_uvmm *uvmm, + struct nouveau_uvma_prealloc *new, + struct drm_gpuva_ops *ops, bool unmap) +{ + struct drm_gpuva_op *op; + + drm_gpuva_for_each_op(op, ops) { + switch (op->op) { + case DRM_GPUVA_OP_MAP: + break; + case DRM_GPUVA_OP_REMAP: { + struct drm_gpuva_op_remap *r = &op->remap; + struct drm_gpuva_op_map *p = r->prev; + struct drm_gpuva_op_map *n = r->next; + struct drm_gpuva *va = r->unmap->va; + struct nouveau_uvma *uvma = uvma_from_va(va); + + if (unmap) { + u64 addr = va->va.addr; + u64 end = addr + va->va.range; + + if (p) + addr = p->va.addr + p->va.range; + + if (n) + end = n->va.addr; + + nouveau_uvmm_vmm_put(uvmm, addr, end - addr); + } + + nouveau_uvma_gem_put(uvma); + nouveau_uvma_free(uvma); + break; + } + case DRM_GPUVA_OP_UNMAP: { + struct drm_gpuva_op_unmap *u = &op->unmap; + struct drm_gpuva *va = u->va; + struct nouveau_uvma *uvma = uvma_from_va(va); + + if (unmap) + nouveau_uvma_vmm_put(uvma); + + nouveau_uvma_gem_put(uvma); + nouveau_uvma_free(uvma); + break; + } + default: + break; + } + } +} + +static void +nouveau_uvmm_sm_map_cleanup(struct nouveau_uvmm *uvmm, + struct nouveau_uvma_prealloc *new, + struct drm_gpuva_ops *ops) +{ + uvmm_sm_cleanup(uvmm, new, ops, false); +} + +static void +nouveau_uvmm_sm_unmap_cleanup(struct nouveau_uvmm *uvmm, + struct nouveau_uvma_prealloc *new, + struct drm_gpuva_ops *ops) +{ + uvmm_sm_cleanup(uvmm, new, ops, true); +} + +static int +nouveau_uvmm_validate_range(struct nouveau_uvmm *uvmm, u64 addr, u64 range) +{ + u64 end = addr + range; + u64 unmanaged_end = uvmm->unmanaged_addr + + uvmm->unmanaged_size; + + if (addr & ~PAGE_MASK) + return -EINVAL; + + if (range & ~PAGE_MASK) + return -EINVAL; + + if (end <= addr) + return -EINVAL; + + if (addr < 
NOUVEAU_VA_SPACE_START || + end > NOUVEAU_VA_SPACE_END) + return -EINVAL; + + if (addr < unmanaged_end && + end > uvmm->unmanaged_addr) + return -EINVAL; + + return 0; +} + +static int +nouveau_uvmm_bind_job_alloc(struct nouveau_uvmm_bind_job **pjob) +{ + *pjob = kzalloc(sizeof(**pjob), GFP_KERNEL); + if (!*pjob) + return -ENOMEM; + + kref_init(&(*pjob)->kref); + + return 0; +} + +static void +nouveau_uvmm_bind_job_free(struct kref *kref) +{ + struct nouveau_uvmm_bind_job *job = + container_of(kref, struct nouveau_uvmm_bind_job, kref); + + kfree(job); +} + +static void +nouveau_uvmm_bind_job_get(struct nouveau_uvmm_bind_job *job) +{ + kref_get(&job->kref); +} + +static void +nouveau_uvmm_bind_job_put(struct nouveau_uvmm_bind_job *job) +{ + kref_put(&job->kref, nouveau_uvmm_bind_job_free); +} + +static int +bind_validate_op(struct nouveau_job *job, + struct bind_job_op *op) +{ + struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(job->cli); + struct drm_gem_object *obj = op->gem.obj; + + if (op->op == OP_MAP) { + if (op->gem.offset & ~PAGE_MASK) + return -EINVAL; + + if (obj->size <= op->gem.offset) + return -EINVAL; + + if (op->va.range > (obj->size - op->gem.offset)) + return -EINVAL; + } + + return nouveau_uvmm_validate_range(uvmm, op->va.addr, op->va.range); +} + +static void +bind_validate_map_sparse(struct nouveau_job *job, u64 addr, u64 range) +{ + struct nouveau_uvmm_bind_job *bind_job; + struct nouveau_sched_entity *entity = job->entity; + struct bind_job_op *op; + u64 end = addr + range; + +again: + spin_lock(&entity->job.list.lock); + list_for_each_entry(bind_job, &entity->job.list.head, entry) { + list_for_each_op(op, &bind_job->ops) { + if (op->op == OP_UNMAP) { + u64 op_addr = op->va.addr; + u64 op_end = op_addr + op->va.range; + + if (!(end <= op_addr || addr >= op_end)) { + nouveau_uvmm_bind_job_get(bind_job); + spin_unlock(&entity->job.list.lock); + wait_for_completion(&bind_job->complete); + nouveau_uvmm_bind_job_put(bind_job); + goto again; + } + } + } + } + spin_unlock(&entity->job.list.lock); +} + +static int +bind_validate_map_common(struct nouveau_job *job, u64 addr, u64 range, + bool sparse) +{ + struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(job->cli); + struct nouveau_uvma_region *reg; + u64 reg_addr, reg_end; + u64 end = addr + range; + +again: + nouveau_uvmm_lock(uvmm); + reg = nouveau_uvma_region_find_first(uvmm, addr, range); + if (!reg) { + nouveau_uvmm_unlock(uvmm); + return 0; + } + + /* Generally, job submits are serialized, hence only + * dirty regions can be modified concurrently. */ + if (reg->dirty) { + nouveau_uvma_region_get(reg); + nouveau_uvmm_unlock(uvmm); + wait_for_completion(®->complete); + nouveau_uvma_region_put(reg); + goto again; + } + nouveau_uvmm_unlock(uvmm); + + if (sparse) + return -ENOSPC; + + reg_addr = reg->va.addr; + reg_end = reg_addr + reg->va.range; + + /* Make sure the mapping is either outside of a + * region or fully enclosed by a region. 
+ */ + if (reg_addr > addr || reg_end < end) + return -ENOSPC; + + return 0; +} + +static int +bind_validate_region(struct nouveau_job *job) +{ + struct nouveau_uvmm_bind_job *bind_job = to_uvmm_bind_job(job); + struct bind_job_op *op; + int ret; + + list_for_each_op(op, &bind_job->ops) { + u64 op_addr = op->va.addr; + u64 op_range = op->va.range; + bool sparse = false; + + switch (op->op) { + case OP_MAP_SPARSE: + sparse = true; + bind_validate_map_sparse(job, op_addr, op_range); + fallthrough; + case OP_MAP: + ret = bind_validate_map_common(job, op_addr, op_range, + sparse); + if (ret) + return ret; + break; + default: + break; + } + } + + return 0; +} + +static int +uvmm_bind_job_submit(struct nouveau_job *job) +{ + struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(job->cli); + struct nouveau_uvmm_bind_job *bind_job = to_uvmm_bind_job(job); + struct nouveau_sched_entity *entity = job->entity; + struct drm_exec *exec = &job->exec; + struct drm_gem_object *obj; + struct bind_job_op *op; + unsigned long index; + int ret; + + list_for_each_op(op, &bind_job->ops) { + if (op->op == OP_MAP) { + op->gem.obj = drm_gem_object_lookup(job->file_priv, + op->gem.handle); + if (!op->gem.obj) + return -ENOENT; + } + + ret = bind_validate_op(job, op); + if (ret) + return ret; + } + + /* If a sparse region or mapping overlaps a dirty region, we need to + * wait for the region to complete the unbind process. This is due to + * how page table management is currently implemented. A future + * implementation might change this. + */ + ret = bind_validate_region(job); + if (ret) + return ret; + + /* Once we start modifying the GPU VA space we need to keep holding the + * uvmm lock until we can't fail anymore. This is due to the set of GPU + * VA space changes must appear atomically and we need to be able to + * unwind all GPU VA space changes on failure. + */ + nouveau_uvmm_lock(uvmm); + list_for_each_op(op, &bind_job->ops) { + switch (op->op) { + case OP_MAP_SPARSE: + ret = nouveau_uvma_region_create(uvmm, + op->va.addr, + op->va.range); + if (ret) + goto unwind_continue; + + break; + case OP_UNMAP_SPARSE: + op->reg = nouveau_uvma_region_find(uvmm, op->va.addr, + op->va.range); + if (!op->reg || op->reg->dirty) { + ret = -ENOENT; + goto unwind_continue; + } + + op->ops = drm_gpuva_sm_unmap_ops_create(&uvmm->umgr, + op->va.addr, + op->va.range); + if (IS_ERR(op->ops)) { + ret = PTR_ERR(op->ops); + goto unwind_continue; + } + + ret = nouveau_uvmm_sm_unmap_prepare(uvmm, &op->new, + op->ops); + if (ret) { + drm_gpuva_ops_free(&uvmm->umgr, op->ops); + op->ops = NULL; + op->reg = NULL; + goto unwind_continue; + } + + nouveau_uvma_region_dirty(op->reg); + + break; + case OP_MAP: { + struct nouveau_uvma_region *reg; + + reg = nouveau_uvma_region_find_first(uvmm, + op->va.addr, + op->va.range); + if (reg) { + u64 reg_addr = reg->va.addr; + u64 reg_end = reg_addr + reg->va.range; + u64 op_addr = op->va.addr; + u64 op_end = op_addr + op->va.range; + + if (unlikely(reg->dirty)) { + ret = -EINVAL; + goto unwind_continue; + } + + /* Make sure the mapping is either outside of a + * region or fully enclosed by a region. 
+ */ + if (reg_addr > op_addr || reg_end < op_end) { + ret = -ENOSPC; + goto unwind_continue; + } + } + + op->ops = drm_gpuva_sm_map_ops_create(&uvmm->umgr, + op->va.addr, + op->va.range, + op->gem.obj, + op->gem.offset); + if (IS_ERR(op->ops)) { + ret = PTR_ERR(op->ops); + goto unwind_continue; + } + + ret = nouveau_uvmm_sm_map_prepare(uvmm, &op->new, + reg, op->ops, + op->va.addr, + op->va.range, + op->flags & 0xff); + if (ret) { + drm_gpuva_ops_free(&uvmm->umgr, op->ops); + op->ops = NULL; + goto unwind_continue; + } + + break; + } + case OP_UNMAP: + op->ops = drm_gpuva_sm_unmap_ops_create(&uvmm->umgr, + op->va.addr, + op->va.range); + if (IS_ERR(op->ops)) { + ret = PTR_ERR(op->ops); + goto unwind_continue; + } + + ret = nouveau_uvmm_sm_unmap_prepare(uvmm, &op->new, + op->ops); + if (ret) { + drm_gpuva_ops_free(&uvmm->umgr, op->ops); + op->ops = NULL; + goto unwind_continue; + } + + break; + default: + ret = -EINVAL; + goto unwind_continue; + } + } + + drm_exec_while_not_all_locked(exec) { + list_for_each_op(op, &bind_job->ops) { + if (op->op != OP_MAP) + continue; + + ret = drm_exec_prepare_obj(exec, op->gem.obj, 1); + drm_exec_break_on_contention(exec); + if (ret == -EALREADY) + continue; + else if (ret) { + op = list_last_op(&bind_job->ops); + goto unwind; + } + } + } + + drm_exec_for_each_locked_object(exec, index, obj) { + struct nouveau_bo *nvbo = nouveau_gem_object(obj); + + ret = nouveau_bo_validate(nvbo, true, false); + if (ret) { + op = list_last_op(&bind_job->ops); + goto unwind; + } + } + nouveau_uvmm_unlock(uvmm); + + spin_lock(&entity->job.list.lock); + list_add(&bind_job->entry, &entity->job.list.head); + spin_unlock(&entity->job.list.lock); + + return 0; + +unwind_continue: + op = list_prev_op(op); +unwind: + list_for_each_op_from_reverse(op, &bind_job->ops) { + switch (op->op) { + case OP_MAP_SPARSE: + nouveau_uvma_region_destroy(uvmm, op->va.addr, + op->va.range); + break; + case OP_UNMAP_SPARSE: + __nouveau_uvma_region_insert(uvmm, op->reg); + nouveau_uvmm_sm_unmap_prepare_unwind(uvmm, &op->new, + op->ops); + break; + case OP_MAP: + nouveau_uvmm_sm_map_prepare_unwind(uvmm, &op->new, + op->ops, + op->va.addr, + op->va.range); + break; + case OP_UNMAP: + nouveau_uvmm_sm_unmap_prepare_unwind(uvmm, &op->new, + op->ops); + break; + } + + drm_gpuva_ops_free(&uvmm->umgr, op->ops); + op->ops = NULL; + op->reg = NULL; + } + + nouveau_uvmm_unlock(uvmm); + return ret; +} + +static struct dma_fence * +uvmm_bind_job_run(struct nouveau_job *job) +{ + struct nouveau_uvmm_bind_job *bind_job = to_uvmm_bind_job(job); + struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(job->cli); + struct bind_job_op *op; + int ret = 0; + + list_for_each_op(op, &bind_job->ops) { + switch (op->op) { + case OP_MAP_SPARSE: + /* noop */ + break; + case OP_MAP: + ret = nouveau_uvmm_sm_map(uvmm, &op->new, op->ops); + if (ret) + goto out; + break; + case OP_UNMAP_SPARSE: + fallthrough; + case OP_UNMAP: + ret = nouveau_uvmm_sm_unmap(uvmm, &op->new, op->ops); + if (ret) + goto out; + break; + } + } + +out: + if (ret) + NV_PRINTK(err, job->cli, "bind job failed: %d\n", ret); + return ERR_PTR(ret); +} + +static void +uvmm_bind_job_free_work_fn(struct work_struct *work) +{ + struct nouveau_uvmm_bind_job *bind_job = + container_of(work, struct nouveau_uvmm_bind_job, work); + struct nouveau_job *job = &bind_job->base; + struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(job->cli); + struct nouveau_sched_entity *entity = job->entity; + struct bind_job_op *op, *next; + + list_for_each_op_safe(op, next, &bind_job->ops) { + struct 
drm_gem_object *obj = op->gem.obj; + + /* When uvmm_bind_job_submit() failed op->ops and op->reg will + * be NULL, hence skip the cleanup. + */ + switch (op->op) { + case OP_MAP_SPARSE: + /* noop */ + break; + case OP_UNMAP_SPARSE: + if (!IS_ERR_OR_NULL(op->ops)) + nouveau_uvmm_sm_unmap_cleanup(uvmm, &op->new, + op->ops); + + if (op->reg) { + nouveau_uvma_region_sparse_unref(op->reg); + nouveau_uvmm_lock(uvmm); + nouveau_uvma_region_remove(op->reg); + nouveau_uvmm_unlock(uvmm); + nouveau_uvma_region_complete(op->reg); + nouveau_uvma_region_put(op->reg); + } + + break; + case OP_MAP: + if (!IS_ERR_OR_NULL(op->ops)) + nouveau_uvmm_sm_map_cleanup(uvmm, &op->new, + op->ops); + break; + case OP_UNMAP: + if (!IS_ERR_OR_NULL(op->ops)) + nouveau_uvmm_sm_unmap_cleanup(uvmm, &op->new, + op->ops); + break; + } + + if (!IS_ERR_OR_NULL(op->ops)) + drm_gpuva_ops_free(&uvmm->umgr, op->ops); + + if (obj) + drm_gem_object_put(obj); + + list_del(&op->entry); + kfree(op); + } + + spin_lock(&entity->job.list.lock); + list_del(&bind_job->entry); + spin_unlock(&entity->job.list.lock); + + complete_all(&bind_job->complete); + wake_up(&entity->job.wq); + + nouveau_base_job_free(job); + nouveau_uvmm_bind_job_put(bind_job); +} + +static void +uvmm_bind_job_free(struct nouveau_job *job) +{ + struct nouveau_uvmm_bind_job *bind_job = to_uvmm_bind_job(job); + struct nouveau_sched_entity *entity = job->entity; + + nouveau_sched_entity_qwork(entity, &bind_job->work); +} + +static struct nouveau_job_ops nouveau_bind_job_ops = { + .submit = uvmm_bind_job_submit, + .run = uvmm_bind_job_run, + .free = uvmm_bind_job_free, +}; + +static int +bind_job_op_from_uop(struct bind_job_op **pop, + struct drm_nouveau_vm_bind_op *uop) +{ + struct bind_job_op *op; + + op = *pop = kzalloc(sizeof(*op), GFP_KERNEL); + if (!op) + return -ENOMEM; + + switch (uop->op) { + case OP_MAP: + op->op = uop->flags & DRM_NOUVEAU_VM_BIND_SPARSE ? + OP_MAP_SPARSE : OP_MAP; + break; + case OP_UNMAP: + op->op = uop->flags & DRM_NOUVEAU_VM_BIND_SPARSE ? 
+
+static int
+bind_job_op_from_uop(struct bind_job_op **pop,
+		     struct drm_nouveau_vm_bind_op *uop)
+{
+	struct bind_job_op *op;
+
+	op = *pop = kzalloc(sizeof(*op), GFP_KERNEL);
+	if (!op)
+		return -ENOMEM;
+
+	switch (uop->op) {
+	case OP_MAP:
+		op->op = uop->flags & DRM_NOUVEAU_VM_BIND_SPARSE ?
+			 OP_MAP_SPARSE : OP_MAP;
+		break;
+	case OP_UNMAP:
+		op->op = uop->flags & DRM_NOUVEAU_VM_BIND_SPARSE ?
+			 OP_UNMAP_SPARSE : OP_UNMAP;
+		break;
+	default:
+		op->op = uop->op;
+		break;
+	}
+
+	op->flags = uop->flags;
+	op->va.addr = uop->addr;
+	op->va.range = uop->range;
+	op->gem.handle = uop->handle;
+	op->gem.offset = uop->bo_offset;
+
+	return 0;
+}
+
+static void
+bind_job_ops_free(struct list_head *ops)
+{
+	struct bind_job_op *op, *next;
+
+	list_for_each_op_safe(op, next, ops) {
+		list_del(&op->entry);
+		kfree(op);
+	}
+}
+
+int
+nouveau_uvmm_bind_job_init(struct nouveau_uvmm_bind_job **pjob,
+			   struct nouveau_uvmm_bind_job_args *args)
+{
+	struct nouveau_uvmm_bind_job *job;
+	struct bind_job_op *op;
+	int i, ret;
+
+	ret = nouveau_uvmm_bind_job_alloc(&job);
+	if (ret)
+		return ret;
+
+	INIT_LIST_HEAD(&job->ops);
+	INIT_LIST_HEAD(&job->entry);
+
+	for (i = 0; i < args->op.count; i++) {
+		ret = bind_job_op_from_uop(&op, &args->op.s[i]);
+		if (ret)
+			goto err_free;
+
+		list_add_tail(&op->entry, &job->ops);
+	}
+
+	init_completion(&job->complete);
+	INIT_WORK(&job->work, uvmm_bind_job_free_work_fn);
+
+	job->base.sync = !(args->flags & DRM_NOUVEAU_VM_BIND_RUN_ASYNC);
+	job->base.ops = &nouveau_bind_job_ops;
+	job->base.resv_usage = DMA_RESV_USAGE_BOOKKEEP;
+
+	ret = nouveau_base_job_init(&job->base, &args->base);
+	if (ret)
+		goto err_free;
+
+	*pjob = job;
+	return 0;
+
+err_free:
+	bind_job_ops_free(&job->ops);
+	kfree(job);
+	*pjob = NULL;
+
+	return ret;
+}
+
+int
+nouveau_uvmm_ioctl_vm_init(struct drm_device *dev,
+			   void *data,
+			   struct drm_file *file_priv)
+{
+	struct nouveau_cli *cli = nouveau_cli(file_priv);
+	struct drm_nouveau_vm_init *init = data;
+
+	return nouveau_uvmm_init(&cli->uvmm, cli, init->unmanaged_addr,
+				 init->unmanaged_size);
+}
+
+static int
+nouveau_uvmm_vm_bind(struct nouveau_uvmm_bind_job_args *args)
+{
+	struct nouveau_uvmm_bind_job *job;
+	int ret;
+
+	ret = nouveau_uvmm_bind_job_init(&job, args);
+	if (ret)
+		return ret;
+
+	ret = nouveau_job_submit(&job->base);
+	if (ret)
+		goto err_job_fini;
+
+	return 0;
+
+err_job_fini:
+	nouveau_job_fini(&job->base);
+	return ret;
+}
+
+int
+nouveau_uvmm_ioctl_vm_bind(struct drm_device *dev,
+			   void *data,
+			   struct drm_file *file_priv)
+{
+	struct nouveau_cli *cli = nouveau_cli(file_priv);
+	struct nouveau_uvmm_bind_job_args args = {};
+	struct drm_nouveau_vm_bind *req = data;
+	int ret = 0;
+
+	if (unlikely(!nouveau_cli_uvmm_locked(cli)))
+		return -ENOSYS;
+
+	args.flags = req->flags;
+
+	args.op.count = req->op_count;
+	args.op.s = u_memcpya(req->op_ptr, req->op_count,
+			      sizeof(*args.op.s));
+	if (IS_ERR(args.op.s))
+		return PTR_ERR(args.op.s);
+
+	ret = nouveau_job_ucopy_syncs(&args.base,
+				      req->wait_count, req->wait_ptr,
+				      req->sig_count, req->sig_ptr);
+	if (ret)
+		goto out_free_ops;
+
+	args.base.sched_entity = &cli->sched_entity;
+	args.base.file_priv = file_priv;
+
+	ret = nouveau_uvmm_vm_bind(&args);
+	if (ret)
+		goto out_free_syncs;
+
+out_free_syncs:
+	u_free(args.base.out_sync.s);
+	u_free(args.base.in_sync.s);
+out_free_ops:
+	u_free(args.op.s);
+	return ret;
+}
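Seen from userspace, the handler above consumes an array of bind ops plus optional in/out syncs. A hedged sketch of issuing a single asynchronous map follows; the field names mirror struct drm_nouveau_vm_bind{,_op} as dereferenced above, while DRM_IOCTL_NOUVEAU_VM_BIND and the DRM_NOUVEAU_VM_BIND_OP_MAP encoding are assumptions about the uapi header added earlier in this series.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/nouveau_drm.h>	/* uapi header from this series */

/* The ioctl number and op encoding below are assumptions. */
static int vm_bind_map(int fd, uint32_t handle, uint64_t addr,
		       uint64_t range, uint64_t bo_offset)
{
	struct drm_nouveau_vm_bind_op op;
	struct drm_nouveau_vm_bind req;

	memset(&op, 0, sizeof(op));
	op.op = DRM_NOUVEAU_VM_BIND_OP_MAP;	/* assumed constant */
	op.handle = handle;
	op.addr = addr;
	op.range = range;
	op.bo_offset = bo_offset;

	memset(&req, 0, sizeof(req));
	req.flags = DRM_NOUVEAU_VM_BIND_RUN_ASYNC;
	req.op_count = 1;
	req.op_ptr = (uint64_t)(uintptr_t)&op;

	return ioctl(fd, DRM_IOCTL_NOUVEAU_VM_BIND, &req);
}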
+
+void
+nouveau_uvmm_bo_map_all(struct nouveau_bo *nvbo, struct nouveau_mem *mem)
+{
+	struct drm_gem_object *obj = &nvbo->bo.base;
+	struct drm_gpuva *va;
+
+	drm_gem_gpuva_lock(obj);
+	drm_gem_for_each_gpuva(va, obj) {
+		struct nouveau_uvma *uvma = uvma_from_va(va);
+
+		nouveau_uvma_map(uvma, mem);
+		drm_gpuva_evict(va, false);
+	}
+	drm_gem_gpuva_unlock(obj);
+}
+
+void
+nouveau_uvmm_bo_unmap_all(struct nouveau_bo *nvbo)
+{
+	struct drm_gem_object *obj = &nvbo->bo.base;
+	struct drm_gpuva *va;
+
+	drm_gem_gpuva_lock(obj);
+	drm_gem_for_each_gpuva(va, obj) {
+		struct nouveau_uvma *uvma = uvma_from_va(va);
+
+		nouveau_uvma_unmap(uvma);
+		drm_gpuva_evict(va, true);
+	}
+	drm_gem_gpuva_unlock(obj);
+}
+
+int
+nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
+		  u64 unmanaged_addr, u64 unmanaged_size)
+{
+	int ret;
+	u64 unmanaged_end = unmanaged_addr + unmanaged_size;
+
+	mutex_init(&uvmm->mutex);
+	mt_init_flags(&uvmm->region_mt, MT_FLAGS_LOCK_EXTERN);
+	mt_set_external_lock(&uvmm->region_mt, &uvmm->mutex);
+
+	mutex_lock(&cli->mutex);
+
+	if (unlikely(cli->uvmm.disabled)) {
+		ret = -ENOSYS;
+		goto out_unlock;
+	}
+
+	if (unmanaged_end <= unmanaged_addr) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
+	if (unmanaged_end > NOUVEAU_VA_SPACE_END) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+
+	uvmm->unmanaged_addr = unmanaged_addr;
+	uvmm->unmanaged_size = unmanaged_size;
+
+	drm_gpuva_manager_init(&uvmm->umgr, cli->name,
+			       NOUVEAU_VA_SPACE_START,
+			       NOUVEAU_VA_SPACE_END,
+			       unmanaged_addr, unmanaged_size,
+			       NULL);
+
+	ret = nvif_vmm_ctor(&cli->mmu, "uvmm",
+			    cli->vmm.vmm.object.oclass, RAW,
+			    unmanaged_addr, unmanaged_size,
+			    NULL, 0, &cli->uvmm.vmm.vmm);
+	if (ret)
+		goto out_free_gpuva_mgr;
+
+	cli->uvmm.vmm.cli = cli;
+	mutex_unlock(&cli->mutex);
+
+	return 0;
+
+out_free_gpuva_mgr:
+	drm_gpuva_manager_destroy(&uvmm->umgr);
+out_unlock:
+	mutex_unlock(&cli->mutex);
+	return ret;
+}
+
+void
+nouveau_uvmm_fini(struct nouveau_uvmm *uvmm)
+{
+	DRM_GPUVA_ITER(it, &uvmm->umgr, 0);
+	MA_STATE(mas, &uvmm->region_mt, 0, 0);
+	struct nouveau_uvma_region *reg;
+	struct nouveau_cli *cli = uvmm->vmm.cli;
+	struct nouveau_sched_entity *entity = &cli->sched_entity;
+	struct drm_gpuva *va;
+
+	if (!cli)
+		return;
+
+	rmb(); /* for list_empty to work without lock */
+	wait_event(entity->job.wq, list_empty(&entity->job.list.head));
+
+	nouveau_uvmm_lock(uvmm);
+	drm_gpuva_iter_for_each(va, it) {
+		struct nouveau_uvma *uvma = uvma_from_va(va);
+		struct drm_gem_object *obj = va->gem.obj;
+
+		if (unlikely(va == &uvmm->umgr.kernel_alloc_node))
+			continue;
+
+		drm_gpuva_iter_remove(&it);
+
+		drm_gem_gpuva_lock(obj);
+		nouveau_uvma_unmap(uvma);
+		drm_gpuva_unlink(va);
+		drm_gem_gpuva_unlock(obj);
+
+		nouveau_uvma_vmm_put(uvma);
+
+		nouveau_uvma_gem_put(uvma);
+		nouveau_uvma_free(uvma);
+	}
+
+	mas_for_each(&mas, reg, ULONG_MAX) {
+		mas_erase(&mas);
+		nouveau_uvma_region_sparse_unref(reg);
+		nouveau_uvma_region_put(reg);
+	}
+
+	WARN(!mtree_empty(&uvmm->region_mt),
+	     "nouveau_uvma_region tree not empty, potentially leaking memory.");
+	__mt_destroy(&uvmm->region_mt);
+	nouveau_uvmm_unlock(uvmm);
+
+	mutex_lock(&cli->mutex);
+	nouveau_vmm_fini(&uvmm->vmm);
+	drm_gpuva_manager_destroy(&uvmm->umgr);
+	mutex_unlock(&cli->mutex);
+}
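nouveau_uvmm_init() reserves the kernel-managed window inside the overall VA space and rejects degenerate ranges before touching any hardware state. The overflow-safe check it relies on, in isolation (VA_SPACE_END is an illustrative stand-in for NOUVEAU_VA_SPACE_END, which is defined elsewhere in this series):

#include <linux/types.h>

/* Illustrative value only; the real constant lives in the driver. */
#define VA_SPACE_END	(1ULL << 47)

static bool unmanaged_range_valid(u64 addr, u64 size)
{
	u64 end = addr + size;

	/* Catches both a zero size and an overflowing addr + size,
	 * since a wrapped end compares <= addr. */
	if (end <= addr)
		return false;

	/* The kernel-reserved window must fit the overall VA space. */
	return end <= VA_SPACE_END;
}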
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.h b/drivers/gpu/drm/nouveau/nouveau_uvmm.h
new file mode 100644
index 000000000000..2b789c908a04
--- /dev/null
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.h
@@ -0,0 +1,98 @@
+/* SPDX-License-Identifier: MIT */
+
+#ifndef __NOUVEAU_UVMM_H__
+#define __NOUVEAU_UVMM_H__
+
+#include <drm/drm_gpuva_mgr.h>
+
+#include "nouveau_drv.h"
+
+struct nouveau_uvmm {
+	struct nouveau_vmm vmm;
+	struct drm_gpuva_manager umgr;
+	struct maple_tree region_mt;
+	struct mutex mutex;
+
+	u64 unmanaged_addr;
+	u64 unmanaged_size;
+
+	bool disabled;
+};
+
+struct nouveau_uvma_region {
+	struct nouveau_uvmm *uvmm;
+
+	struct {
+		u64 addr;
+		u64 range;
+	} va;
+
+	struct kref kref;
+
+	struct completion complete;
+	bool dirty;
+};
+
+struct nouveau_uvma {
+	struct drm_gpuva va;
+
+	struct nouveau_uvmm *uvmm;
+	struct nouveau_uvma_region *region;
+
+	u8 kind;
+};
+
+struct nouveau_uvmm_bind_job {
+	struct nouveau_job base;
+
+	struct kref kref;
+	struct list_head entry;
+	struct work_struct work;
+	struct completion complete;
+
+	/* struct bind_job_op */
+	struct list_head ops;
+};
+
+struct nouveau_uvmm_bind_job_args {
+	struct nouveau_job_args base;
+	unsigned int flags;
+
+	struct {
+		struct drm_nouveau_vm_bind_op *s;
+		u32 count;
+	} op;
+};
+
+#define to_uvmm_bind_job(job) container_of((job), struct nouveau_uvmm_bind_job, base)
+
+#define uvmm_from_mgr(x) container_of((x), struct nouveau_uvmm, umgr)
+#define uvma_from_va(x) container_of((x), struct nouveau_uvma, va)
+
+int nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
+		      u64 unmanaged_addr, u64 unmanaged_size);
+void nouveau_uvmm_fini(struct nouveau_uvmm *uvmm);
+
+void nouveau_uvmm_bo_map_all(struct nouveau_bo *nvbo, struct nouveau_mem *mem);
+void nouveau_uvmm_bo_unmap_all(struct nouveau_bo *nvbo);
+
+int nouveau_uvmm_bind_job_init(struct nouveau_uvmm_bind_job **pjob,
+			       struct nouveau_uvmm_bind_job_args *args);
+
+int nouveau_uvmm_ioctl_vm_init(struct drm_device *dev, void *data,
+			       struct drm_file *file_priv);
+
+int nouveau_uvmm_ioctl_vm_bind(struct drm_device *dev, void *data,
+			       struct drm_file *file_priv);
+
+static inline void nouveau_uvmm_lock(struct nouveau_uvmm *uvmm)
+{
+	mutex_lock(&uvmm->mutex);
+}
+
+static inline void nouveau_uvmm_unlock(struct nouveau_uvmm *uvmm)
+{
+	mutex_unlock(&uvmm->mutex);
+}
+
+#endif
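The inline helpers and container_of() wrappers at the end of the header are the intended pattern for walking the manager: take uvmm->mutex, iterate, and upcast each drm_gpuva to its nouveau_uvma. A condensed sketch mirroring nouveau_uvmm_fini() above:

/* Sketch of the locking pattern around the GPU VA manager; the
 * iterator macros are those used by nouveau_uvmm_fini() above. */
static void uvmm_for_each_uvma_example(struct nouveau_uvmm *uvmm)
{
	DRM_GPUVA_ITER(it, &uvmm->umgr, 0);
	struct drm_gpuva *va;

	nouveau_uvmm_lock(uvmm);
	drm_gpuva_iter_for_each(va, it) {
		struct nouveau_uvma *uvma = uvma_from_va(va);

		/* operate on uvma under uvmm->mutex */
		(void)uvma;
	}
	nouveau_uvmm_unlock(uvmm);
}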
From patchwork Tue Apr 4 01:27:41 2023
X-Patchwork-Submitter: Danilo Krummrich
X-Patchwork-Id: 78819
From: Danilo Krummrich <dakr@redhat.com>
To: airlied@gmail.com, daniel@ffwll.ch, tzimmermann@suse.de, mripard@kernel.org,
    corbet@lwn.net, christian.koenig@amd.com, bskeggs@redhat.com,
    Liam.Howlett@oracle.com, matthew.brost@intel.com, boris.brezillon@collabora.com,
    alexdeucher@gmail.com, ogabbay@kernel.org, bagasdotme@gmail.com,
    willy@infradead.org, jason@jlekstrand.net
Cc: dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org,
    linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Danilo Krummrich <dakr@redhat.com>
Subject: [PATCH drm-next v3 15/15] drm/nouveau: debugfs: implement DRM GPU VA debugfs
Date: Tue, 4 Apr 2023 03:27:41 +0200
Message-Id: <20230404012741.116502-16-dakr@redhat.com>
In-Reply-To: <20230404012741.116502-1-dakr@redhat.com>
References: <20230404012741.116502-1-dakr@redhat.com>

Provide the driver-side indirection that iterates over all DRM GPU VA
spaces, so the common 'gpuvas' debugfs file can dump them.

Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
 drivers/gpu/drm/nouveau/nouveau_debugfs.c | 39 +++++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
index 2a36d1ca8fda..d5487e655b0c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
@@ -202,6 +202,44 @@ nouveau_debugfs_pstate_open(struct inode *inode, struct file *file)
 	return single_open(file, nouveau_debugfs_pstate_get, inode->i_private);
 }
 
+static void
+nouveau_debugfs_gpuva_regions(struct seq_file *m, struct nouveau_uvmm *uvmm)
+{
+	MA_STATE(mas, &uvmm->region_mt, 0, 0);
+	struct nouveau_uvma_region *reg;
+
+	seq_puts(m, " VA regions  | start              | range              | end                \n");
+	seq_puts(m, "----------------------------------------------------------------------------\n");
+	mas_for_each(&mas, reg, ULONG_MAX)
+		seq_printf(m, " | 0x%016llx | 0x%016llx | 0x%016llx\n",
+			   reg->va.addr, reg->va.range, reg->va.addr + reg->va.range);
+}
+
+static int
+nouveau_debugfs_gpuva(struct seq_file *m, void *data)
+{
+	struct drm_info_node *node = (struct drm_info_node *) m->private;
+	struct nouveau_drm *drm = nouveau_drm(node->minor->dev);
+	struct nouveau_cli *cli;
+
+	mutex_lock(&drm->clients_lock);
+	list_for_each_entry(cli, &drm->clients, head) {
+		struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(cli);
+
+		if (!uvmm)
+			continue;
+
+		nouveau_uvmm_lock(uvmm);
+		drm_debugfs_gpuva_info(m, &uvmm->umgr);
+		seq_puts(m, "\n");
+		nouveau_debugfs_gpuva_regions(m, uvmm);
+		nouveau_uvmm_unlock(uvmm);
+	}
+	mutex_unlock(&drm->clients_lock);
+
+	return 0;
+}
+
 static const struct file_operations nouveau_pstate_fops = {
 	.owner = THIS_MODULE,
 	.open = nouveau_debugfs_pstate_open,
@@ -213,6 +251,7 @@ static const struct file_operations nouveau_pstate_fops = {
 static struct drm_info_list nouveau_debugfs_list[] = {
 	{ "vbios.rom", nouveau_debugfs_vbios_image, 0, NULL },
 	{ "strap_peek", nouveau_debugfs_strap_peek, 0, NULL },
+	DRM_DEBUGFS_GPUVA_INFO(nouveau_debugfs_gpuva, NULL),
 };
 
 #define NOUVEAU_DEBUGFS_ENTRIES ARRAY_SIZE(nouveau_debugfs_list)
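Hooking the common file in another driver follows the same two steps as this patch: a show callback that calls drm_debugfs_gpuva_info() under whatever lock serializes the driver's VA space, plus a DRM_DEBUGFS_GPUVA_INFO() entry in the driver's drm_info_list. A hedged sketch with hypothetical mydrv_* names:

/* Hypothetical driver; only the two drm_debugfs_gpuva helpers used
 * in the patch above are taken as given. */
static int mydrv_debugfs_gpuva(struct seq_file *m, void *data)
{
	struct drm_info_node *node = m->private;
	struct mydrv_device *mdev = to_mydrv(node->minor->dev);	/* hypothetical */

	mutex_lock(&mdev->vm_lock);	/* whatever guards the manager */
	drm_debugfs_gpuva_info(m, &mdev->va_mgr);
	mutex_unlock(&mdev->vm_lock);

	return 0;
}

static struct drm_info_list mydrv_debugfs_list[] = {
	DRM_DEBUGFS_GPUVA_INFO(mydrv_debugfs_gpuva, NULL),
};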