Message ID: 20231023201659.25332-2-dakr@redhat.com
State: New
Headers:
From: Danilo Krummrich <dakr@redhat.com>
To: airlied@gmail.com, daniel@ffwll.ch, matthew.brost@intel.com, thomas.hellstrom@linux.intel.com, sarah.walker@imgtec.com, donald.robson@imgtec.com, boris.brezillon@collabora.com, christian.koenig@amd.com, faith@gfxstrand.net
Cc: dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org, linux-kernel@vger.kernel.org, Danilo Krummrich <dakr@redhat.com>
Subject: [PATCH drm-misc-next v7 1/7] drm/gpuvm: convert WARN() to drm_WARN() variants
Date: Mon, 23 Oct 2023 22:16:47 +0200
Message-ID: <20231023201659.25332-2-dakr@redhat.com>
In-Reply-To: <20231023201659.25332-1-dakr@redhat.com>
References: <20231023201659.25332-1-dakr@redhat.com>
Series: DRM GPUVM features
Commit Message
Danilo Krummrich
Oct. 23, 2023, 8:16 p.m. UTC
Use drm_WARN() and drm_WARN_ON() variants to indicate to drivers the
context the failing VM resides in.
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
drivers/gpu/drm/drm_gpuvm.c | 32 ++++++++++++++------------
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 3 ++-
include/drm/drm_gpuvm.h | 7 ++++++
3 files changed, 26 insertions(+), 16 deletions(-)
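The conversion pattern can be sketched in plain userspace C (the `fake_*` names below are illustrative stand-ins, not the kernel API): threading a device pointer into the check lets the warning identify which device's VM triggered it, which a bare WARN() cannot do.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for struct drm_device; only the identity matters. */
struct fake_drm_device { const char *unique; };

/* Mimics drm_WARN(): like WARN(), but prefixes the message with the
 * device identity so the log shows which device misbehaved. */
#define fake_drm_WARN(drm, cond, fmt, ...)                          \
	({                                                          \
		bool __cond = (cond);                               \
		if (__cond)                                         \
			fprintf(stderr, "%s: " fmt, (drm)->unique,  \
				##__VA_ARGS__);                     \
		__cond;                                             \
	})

/* Counterpart of drm_gpuvm_check_overflow() after the patch: the VM's
 * device is passed in so the warning carries context. */
static bool fake_check_overflow(struct fake_drm_device *drm,
				uint64_t addr, uint64_t range)
{
	uint64_t end;

	/* __builtin_add_overflow() stands in for check_add_overflow() */
	return fake_drm_WARN(drm, __builtin_add_overflow(addr, range, &end),
			     "GPUVA address limited to %zu bytes.\n",
			     sizeof(end));
}
```

With this shape, an overflowing request warns with the device name and returns true, while an in-range request returns false silently.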
Comments
Am 23.10.23 um 22:16 schrieb Danilo Krummrich: > Use drm_WARN() and drm_WARN_ON() variants to indicate drivers the > context the failing VM resides in. > > Signed-off-by: Danilo Krummrich <dakr@redhat.com> > --- > drivers/gpu/drm/drm_gpuvm.c | 32 ++++++++++++++------------ > drivers/gpu/drm/nouveau/nouveau_uvmm.c | 3 ++- > include/drm/drm_gpuvm.h | 7 ++++++ > 3 files changed, 26 insertions(+), 16 deletions(-) > > diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c > index 08c088319652..d7367a202fee 100644 > --- a/drivers/gpu/drm/drm_gpuvm.c > +++ b/drivers/gpu/drm/drm_gpuvm.c > @@ -614,12 +614,12 @@ static int __drm_gpuva_insert(struct drm_gpuvm *gpuvm, > static void __drm_gpuva_remove(struct drm_gpuva *va); > > static bool > -drm_gpuvm_check_overflow(u64 addr, u64 range) > +drm_gpuvm_check_overflow(struct drm_gpuvm *gpuvm, u64 addr, u64 range) > { > u64 end; > > - return WARN(check_add_overflow(addr, range, &end), > - "GPUVA address limited to %zu bytes.\n", sizeof(end)); > + return drm_WARN(gpuvm->drm, check_add_overflow(addr, range, &end), > + "GPUVA address limited to %zu bytes.\n", sizeof(end)); > } > > static bool > @@ -647,7 +647,7 @@ static bool > drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm, > u64 addr, u64 range) > { > - return !drm_gpuvm_check_overflow(addr, range) && > + return !drm_gpuvm_check_overflow(gpuvm, addr, range) && > drm_gpuvm_in_mm_range(gpuvm, addr, range) && > !drm_gpuvm_in_kernel_node(gpuvm, addr, range); When those parameters come from userspace you don't really want a warning in the system log in the first place. Otherwise userspace can trivially spam the system log with warnings. The usual approach is to make this debug level severity instead. Regards, Christian. 
> } > @@ -656,6 +656,7 @@ drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm, > * drm_gpuvm_init() - initialize a &drm_gpuvm > * @gpuvm: pointer to the &drm_gpuvm to initialize > * @name: the name of the GPU VA space > + * @drm: the &drm_device this VM resides in > * @start_offset: the start offset of the GPU VA space > * @range: the size of the GPU VA space > * @reserve_offset: the start of the kernel reserved GPU VA area > @@ -668,8 +669,8 @@ drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm, > * &name is expected to be managed by the surrounding driver structures. > */ > void > -drm_gpuvm_init(struct drm_gpuvm *gpuvm, > - const char *name, > +drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name, > + struct drm_device *drm, > u64 start_offset, u64 range, > u64 reserve_offset, u64 reserve_range, > const struct drm_gpuvm_ops *ops) > @@ -677,20 +678,20 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, > gpuvm->rb.tree = RB_ROOT_CACHED; > INIT_LIST_HEAD(&gpuvm->rb.list); > > - drm_gpuvm_check_overflow(start_offset, range); > - gpuvm->mm_start = start_offset; > - gpuvm->mm_range = range; > - > gpuvm->name = name ? 
name : "unknown"; > gpuvm->ops = ops; > + gpuvm->drm = drm; > > - memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct drm_gpuva)); > + drm_gpuvm_check_overflow(gpuvm, start_offset, range); > + gpuvm->mm_start = start_offset; > + gpuvm->mm_range = range; > > + memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct drm_gpuva)); > if (reserve_range) { > gpuvm->kernel_alloc_node.va.addr = reserve_offset; > gpuvm->kernel_alloc_node.va.range = reserve_range; > > - if (likely(!drm_gpuvm_check_overflow(reserve_offset, > + if (likely(!drm_gpuvm_check_overflow(gpuvm, reserve_offset, > reserve_range))) > __drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node); > } > @@ -712,8 +713,8 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm) > if (gpuvm->kernel_alloc_node.va.range) > __drm_gpuva_remove(&gpuvm->kernel_alloc_node); > > - WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root), > - "GPUVA tree is not empty, potentially leaking memory."); > + drm_WARN(gpuvm->drm, !RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root), > + "GPUVA tree is not empty, potentially leaking memory.\n"); > } > EXPORT_SYMBOL_GPL(drm_gpuvm_destroy); > > @@ -795,7 +796,8 @@ drm_gpuva_remove(struct drm_gpuva *va) > struct drm_gpuvm *gpuvm = va->vm; > > if (unlikely(va == &gpuvm->kernel_alloc_node)) { > - WARN(1, "Can't destroy kernel reserved node.\n"); > + drm_WARN(gpuvm->drm, 1, > + "Can't destroy kernel reserved node.\n"); > return; > } > > diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c > index 5cf892c50f43..aaf5d28bd587 100644 > --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c > +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c > @@ -1808,6 +1808,7 @@ int > nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli, > u64 kernel_managed_addr, u64 kernel_managed_size) > { > + struct drm_device *drm = cli->drm->dev; > int ret; > u64 kernel_managed_end = kernel_managed_addr + kernel_managed_size; > > @@ -1836,7 +1837,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli, 
> uvmm->kernel_managed_addr = kernel_managed_addr; > uvmm->kernel_managed_size = kernel_managed_size; > > - drm_gpuvm_init(&uvmm->base, cli->name, > + drm_gpuvm_init(&uvmm->base, cli->name, drm, > NOUVEAU_VA_SPACE_START, > NOUVEAU_VA_SPACE_END, > kernel_managed_addr, kernel_managed_size, > diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h > index bdfafc4a7705..687fd5893624 100644 > --- a/include/drm/drm_gpuvm.h > +++ b/include/drm/drm_gpuvm.h > @@ -29,6 +29,7 @@ > #include <linux/rbtree.h> > #include <linux/types.h> > > +#include <drm/drm_device.h> > #include <drm/drm_gem.h> > > struct drm_gpuvm; > @@ -201,6 +202,11 @@ struct drm_gpuvm { > */ > const char *name; > > + /** > + * @drm: the &drm_device this VM lives in > + */ > + struct drm_device *drm; > + > /** > * @mm_start: start of the VA space > */ > @@ -241,6 +247,7 @@ struct drm_gpuvm { > }; > > void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name, > + struct drm_device *drm, > u64 start_offset, u64 range, > u64 reserve_offset, u64 reserve_range, > const struct drm_gpuvm_ops *ops);
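Christian's point, sketched in the same illustrative userspace style (the `fake_*` names are not the kernel API): a check reachable from userspace should fail quietly at debug severity and return an error, rather than WARN, so userspace cannot spam the system log.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct fake_drm_device { const char *unique; };

static int debug_enabled; /* debug output is off unless opted in */

/* Mimics drm_dbg(): only emits when debugging is enabled, so userspace
 * passing bogus ranges cannot flood the log. */
#define fake_drm_dbg(drm, fmt, ...)                                 \
	do {                                                        \
		if (debug_enabled)                                  \
			fprintf(stderr, "%s: " fmt, (drm)->unique,  \
				##__VA_ARGS__);                     \
	} while (0)

/* Validating a userspace-supplied range: reject with -EINVAL and a
 * debug-level note instead of a WARN() backtrace. */
static int fake_validate_range(struct fake_drm_device *drm,
			       uint64_t addr, uint64_t range)
{
	uint64_t end;

	if (__builtin_add_overflow(addr, range, &end)) {
		fake_drm_dbg(drm, "invalid range: %llx+%llx overflows\n",
			     (unsigned long long)addr,
			     (unsigned long long)range);
		return -EINVAL;
	}
	return 0;
}
```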
On 10/24/23 10:45, Christian König wrote: > > > Am 23.10.23 um 22:16 schrieb Danilo Krummrich: >> Use drm_WARN() and drm_WARN_ON() variants to indicate drivers the >> context the failing VM resides in. >> >> Signed-off-by: Danilo Krummrich <dakr@redhat.com> >> --- >> drivers/gpu/drm/drm_gpuvm.c | 32 ++++++++++++++------------ >> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 3 ++- >> include/drm/drm_gpuvm.h | 7 ++++++ >> 3 files changed, 26 insertions(+), 16 deletions(-) >> >> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c >> index 08c088319652..d7367a202fee 100644 >> --- a/drivers/gpu/drm/drm_gpuvm.c >> +++ b/drivers/gpu/drm/drm_gpuvm.c >> @@ -614,12 +614,12 @@ static int __drm_gpuva_insert(struct drm_gpuvm *gpuvm, >> static void __drm_gpuva_remove(struct drm_gpuva *va); >> static bool >> -drm_gpuvm_check_overflow(u64 addr, u64 range) >> +drm_gpuvm_check_overflow(struct drm_gpuvm *gpuvm, u64 addr, u64 range) >> { >> u64 end; >> - return WARN(check_add_overflow(addr, range, &end), >> - "GPUVA address limited to %zu bytes.\n", sizeof(end)); >> + return drm_WARN(gpuvm->drm, check_add_overflow(addr, range, &end), >> + "GPUVA address limited to %zu bytes.\n", sizeof(end)); >> } >> static bool >> @@ -647,7 +647,7 @@ static bool >> drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm, >> u64 addr, u64 range) >> { >> - return !drm_gpuvm_check_overflow(addr, range) && >> + return !drm_gpuvm_check_overflow(gpuvm, addr, range) && >> drm_gpuvm_in_mm_range(gpuvm, addr, range) && >> !drm_gpuvm_in_kernel_node(gpuvm, addr, range); > > When those parameters come from userspace you don't really want a warning in the system log in the first place. > > Otherwise userspace can trivially spam the system log with warnings. The usual approach is to make this debug level severity instead. Currently, this function isn't exported and hence the driver should do the relevant sanity checks before attempting to insert the mapping. 
However, I think it would make sense to export this function and remove the WARN() and instead WARN() in drm_gpuvm_init() explicitly. > > Regards, > Christian. > >> } >> @@ -656,6 +656,7 @@ drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm, >> * drm_gpuvm_init() - initialize a &drm_gpuvm >> * @gpuvm: pointer to the &drm_gpuvm to initialize >> * @name: the name of the GPU VA space >> + * @drm: the &drm_device this VM resides in >> * @start_offset: the start offset of the GPU VA space >> * @range: the size of the GPU VA space >> * @reserve_offset: the start of the kernel reserved GPU VA area >> @@ -668,8 +669,8 @@ drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm, >> * &name is expected to be managed by the surrounding driver structures. >> */ >> void >> -drm_gpuvm_init(struct drm_gpuvm *gpuvm, >> - const char *name, >> +drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name, >> + struct drm_device *drm, >> u64 start_offset, u64 range, >> u64 reserve_offset, u64 reserve_range, >> const struct drm_gpuvm_ops *ops) >> @@ -677,20 +678,20 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, >> gpuvm->rb.tree = RB_ROOT_CACHED; >> INIT_LIST_HEAD(&gpuvm->rb.list); >> - drm_gpuvm_check_overflow(start_offset, range); >> - gpuvm->mm_start = start_offset; >> - gpuvm->mm_range = range; >> - >> gpuvm->name = name ? 
name : "unknown"; >> gpuvm->ops = ops; >> + gpuvm->drm = drm; >> - memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct drm_gpuva)); >> + drm_gpuvm_check_overflow(gpuvm, start_offset, range); >> + gpuvm->mm_start = start_offset; >> + gpuvm->mm_range = range; >> + memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct drm_gpuva)); >> if (reserve_range) { >> gpuvm->kernel_alloc_node.va.addr = reserve_offset; >> gpuvm->kernel_alloc_node.va.range = reserve_range; >> - if (likely(!drm_gpuvm_check_overflow(reserve_offset, >> + if (likely(!drm_gpuvm_check_overflow(gpuvm, reserve_offset, >> reserve_range))) >> __drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node); >> } >> @@ -712,8 +713,8 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm) >> if (gpuvm->kernel_alloc_node.va.range) >> __drm_gpuva_remove(&gpuvm->kernel_alloc_node); >> - WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root), >> - "GPUVA tree is not empty, potentially leaking memory."); >> + drm_WARN(gpuvm->drm, !RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root), >> + "GPUVA tree is not empty, potentially leaking memory.\n"); >> } >> EXPORT_SYMBOL_GPL(drm_gpuvm_destroy); >> @@ -795,7 +796,8 @@ drm_gpuva_remove(struct drm_gpuva *va) >> struct drm_gpuvm *gpuvm = va->vm; >> if (unlikely(va == &gpuvm->kernel_alloc_node)) { >> - WARN(1, "Can't destroy kernel reserved node.\n"); >> + drm_WARN(gpuvm->drm, 1, >> + "Can't destroy kernel reserved node.\n"); >> return; >> } >> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c >> index 5cf892c50f43..aaf5d28bd587 100644 >> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c >> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c >> @@ -1808,6 +1808,7 @@ int >> nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli, >> u64 kernel_managed_addr, u64 kernel_managed_size) >> { >> + struct drm_device *drm = cli->drm->dev; >> int ret; >> u64 kernel_managed_end = kernel_managed_addr + kernel_managed_size; >> @@ -1836,7 +1837,7 @@ nouveau_uvmm_init(struct nouveau_uvmm 
*uvmm, struct nouveau_cli *cli, >> uvmm->kernel_managed_addr = kernel_managed_addr; >> uvmm->kernel_managed_size = kernel_managed_size; >> - drm_gpuvm_init(&uvmm->base, cli->name, >> + drm_gpuvm_init(&uvmm->base, cli->name, drm, >> NOUVEAU_VA_SPACE_START, >> NOUVEAU_VA_SPACE_END, >> kernel_managed_addr, kernel_managed_size, >> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h >> index bdfafc4a7705..687fd5893624 100644 >> --- a/include/drm/drm_gpuvm.h >> +++ b/include/drm/drm_gpuvm.h >> @@ -29,6 +29,7 @@ >> #include <linux/rbtree.h> >> #include <linux/types.h> >> +#include <drm/drm_device.h> >> #include <drm/drm_gem.h> >> struct drm_gpuvm; >> @@ -201,6 +202,11 @@ struct drm_gpuvm { >> */ >> const char *name; >> + /** >> + * @drm: the &drm_device this VM lives in >> + */ >> + struct drm_device *drm; >> + >> /** >> * @mm_start: start of the VA space >> */ >> @@ -241,6 +247,7 @@ struct drm_gpuvm { >> }; >> void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name, >> + struct drm_device *drm, >> u64 start_offset, u64 range, >> u64 reserve_offset, u64 reserve_range, >> const struct drm_gpuvm_ops *ops); >
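Danilo's suggestion could look roughly like this (again an illustrative userspace sketch, not the actual follow-up patch): the range check itself becomes a silent predicate usable by drivers, while only the init path — where the bounds come from the driver rather than from userspace — warns loudly on a bad value.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct fake_drm_device { const char *unique; };

/* Exported-style helper: purely a predicate, no logging. */
static bool fake_range_valid(uint64_t addr, uint64_t range)
{
	uint64_t end;

	return !__builtin_add_overflow(addr, range, &end);
}

/* Init-time path: bounds come from the driver, so a loud warning on a
 * bad range is appropriate here and only here. */
static bool fake_init_check(struct fake_drm_device *drm,
			    uint64_t addr, uint64_t range)
{
	bool ok = fake_range_valid(addr, range);

	if (!ok)
		fprintf(stderr, "%s: invalid VA space bounds\n", drm->unique);
	return ok;
}
```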
On Mon, 2023-10-23 at 22:16 +0200, Danilo Krummrich wrote: > Use drm_WARN() and drm_WARN_ON() variants to indicate drivers the > context the failing VM resides in. > > Signed-off-by: Danilo Krummrich <dakr@redhat.com> > --- > drivers/gpu/drm/drm_gpuvm.c | 32 ++++++++++++++---------- > -- > drivers/gpu/drm/nouveau/nouveau_uvmm.c | 3 ++- > include/drm/drm_gpuvm.h | 7 ++++++ > 3 files changed, 26 insertions(+), 16 deletions(-) > > diff --git a/drivers/gpu/drm/drm_gpuvm.c > b/drivers/gpu/drm/drm_gpuvm.c > index 08c088319652..d7367a202fee 100644 > --- a/drivers/gpu/drm/drm_gpuvm.c > +++ b/drivers/gpu/drm/drm_gpuvm.c > @@ -614,12 +614,12 @@ static int __drm_gpuva_insert(struct drm_gpuvm > *gpuvm, > static void __drm_gpuva_remove(struct drm_gpuva *va); > > static bool > -drm_gpuvm_check_overflow(u64 addr, u64 range) > +drm_gpuvm_check_overflow(struct drm_gpuvm *gpuvm, u64 addr, u64 > range) > { > u64 end; > > - return WARN(check_add_overflow(addr, range, &end), > - "GPUVA address limited to %zu bytes.\n", > sizeof(end)); > + return drm_WARN(gpuvm->drm, check_add_overflow(addr, range, > &end), > + "GPUVA address limited to %zu bytes.\n", > sizeof(end)); > } > > static bool > @@ -647,7 +647,7 @@ static bool > drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm, > u64 addr, u64 range) > { > - return !drm_gpuvm_check_overflow(addr, range) && > + return !drm_gpuvm_check_overflow(gpuvm, addr, range) && > drm_gpuvm_in_mm_range(gpuvm, addr, range) && > !drm_gpuvm_in_kernel_node(gpuvm, addr, range); > } > @@ -656,6 +656,7 @@ drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm, > * drm_gpuvm_init() - initialize a &drm_gpuvm > * @gpuvm: pointer to the &drm_gpuvm to initialize > * @name: the name of the GPU VA space > + * @drm: the &drm_device this VM resides in > * @start_offset: the start offset of the GPU VA space > * @range: the size of the GPU VA space > * @reserve_offset: the start of the kernel reserved GPU VA area > @@ -668,8 +669,8 @@ drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm, > * 
&name is expected to be managed by the surrounding driver > structures. > */ > void > -drm_gpuvm_init(struct drm_gpuvm *gpuvm, > - const char *name, > +drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name, > + struct drm_device *drm, > u64 start_offset, u64 range, > u64 reserve_offset, u64 reserve_range, > const struct drm_gpuvm_ops *ops) > @@ -677,20 +678,20 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, > gpuvm->rb.tree = RB_ROOT_CACHED; > INIT_LIST_HEAD(&gpuvm->rb.list); > > - drm_gpuvm_check_overflow(start_offset, range); > - gpuvm->mm_start = start_offset; > - gpuvm->mm_range = range; > - > gpuvm->name = name ? name : "unknown"; > gpuvm->ops = ops; > + gpuvm->drm = drm; > > - memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct > drm_gpuva)); > + drm_gpuvm_check_overflow(gpuvm, start_offset, range); > + gpuvm->mm_start = start_offset; > + gpuvm->mm_range = range; > > + memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct > drm_gpuva)); > if (reserve_range) { > gpuvm->kernel_alloc_node.va.addr = reserve_offset; > gpuvm->kernel_alloc_node.va.range = reserve_range; > > - if (likely(!drm_gpuvm_check_overflow(reserve_offset, > + if (likely(!drm_gpuvm_check_overflow(gpuvm, > reserve_offset, > reserve_range))) > __drm_gpuva_insert(gpuvm, &gpuvm- > >kernel_alloc_node); > } > @@ -712,8 +713,8 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm) > if (gpuvm->kernel_alloc_node.va.range) > __drm_gpuva_remove(&gpuvm->kernel_alloc_node); > > - WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root), > - "GPUVA tree is not empty, potentially leaking memory."); > + drm_WARN(gpuvm->drm, !RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root), > + "GPUVA tree is not empty, potentially leaking > memory.\n"); > } > EXPORT_SYMBOL_GPL(drm_gpuvm_destroy); > > @@ -795,7 +796,8 @@ drm_gpuva_remove(struct drm_gpuva *va) > struct drm_gpuvm *gpuvm = va->vm; > > if (unlikely(va == &gpuvm->kernel_alloc_node)) { > - WARN(1, "Can't destroy kernel reserved node.\n"); > + drm_WARN(gpuvm->drm, 1, > + "Can't destroy kernel reserved 
node.\n"); > return; > } > > diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c > b/drivers/gpu/drm/nouveau/nouveau_uvmm.c > index 5cf892c50f43..aaf5d28bd587 100644 > --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c > +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c > @@ -1808,6 +1808,7 @@ int > nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli > *cli, > u64 kernel_managed_addr, u64 kernel_managed_size) > { > + struct drm_device *drm = cli->drm->dev; > int ret; > u64 kernel_managed_end = kernel_managed_addr + > kernel_managed_size; > > @@ -1836,7 +1837,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, > struct nouveau_cli *cli, > uvmm->kernel_managed_addr = kernel_managed_addr; > uvmm->kernel_managed_size = kernel_managed_size; > > - drm_gpuvm_init(&uvmm->base, cli->name, > + drm_gpuvm_init(&uvmm->base, cli->name, drm, > NOUVEAU_VA_SPACE_START, > NOUVEAU_VA_SPACE_END, > kernel_managed_addr, kernel_managed_size, > diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h > index bdfafc4a7705..687fd5893624 100644 > --- a/include/drm/drm_gpuvm.h > +++ b/include/drm/drm_gpuvm.h > @@ -29,6 +29,7 @@ > #include <linux/rbtree.h> > #include <linux/types.h> > > +#include <drm/drm_device.h> > #include <drm/drm_gem.h> > > struct drm_gpuvm; > @@ -201,6 +202,11 @@ struct drm_gpuvm { > */ > const char *name; > > + /** > + * @drm: the &drm_device this VM lives in > + */ Could a one-liner do? /** <comment> */ > + struct drm_device *drm; > + > /** > * @mm_start: start of the VA space > */ > @@ -241,6 +247,7 @@ struct drm_gpuvm { > }; > > void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name, > + struct drm_device *drm, > u64 start_offset, u64 range, > u64 reserve_offset, u64 reserve_range, > const struct drm_gpuvm_ops *ops); I figure Christian's commend can be addressed in a follow-up patch if neeed. Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
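For reference, the two kernel-doc spellings Thomas is contrasting look like this (illustrative struct, not the real drm_gpuvm layout); both forms are accepted kernel-doc for in-line struct member documentation:

```c
#include <assert.h>

struct fake_drm_device;

struct fake_gpuvm {
	/**
	 * @name: the name of the GPU VA space
	 */
	const char *name;

	/** @drm: the &drm_device this VM lives in */
	struct fake_drm_device *drm;
};
```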
On 10/31/23 11:08, Thomas Hellström wrote: > On Mon, 2023-10-23 at 22:16 +0200, Danilo Krummrich wrote: >> Use drm_WARN() and drm_WARN_ON() variants to indicate drivers the >> context the failing VM resides in. >> >> Signed-off-by: Danilo Krummrich <dakr@redhat.com> >> --- >> drivers/gpu/drm/drm_gpuvm.c | 32 ++++++++++++++---------- >> -- >> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 3 ++- >> include/drm/drm_gpuvm.h | 7 ++++++ >> 3 files changed, 26 insertions(+), 16 deletions(-) >> >> diff --git a/drivers/gpu/drm/drm_gpuvm.c >> b/drivers/gpu/drm/drm_gpuvm.c >> index 08c088319652..d7367a202fee 100644 >> --- a/drivers/gpu/drm/drm_gpuvm.c >> +++ b/drivers/gpu/drm/drm_gpuvm.c >> @@ -614,12 +614,12 @@ static int __drm_gpuva_insert(struct drm_gpuvm >> *gpuvm, >> static void __drm_gpuva_remove(struct drm_gpuva *va); >> >> static bool >> -drm_gpuvm_check_overflow(u64 addr, u64 range) >> +drm_gpuvm_check_overflow(struct drm_gpuvm *gpuvm, u64 addr, u64 >> range) >> { >> u64 end; >> >> - return WARN(check_add_overflow(addr, range, &end), >> - "GPUVA address limited to %zu bytes.\n", >> sizeof(end)); >> + return drm_WARN(gpuvm->drm, check_add_overflow(addr, range, >> &end), >> + "GPUVA address limited to %zu bytes.\n", >> sizeof(end)); >> } >> >> static bool >> @@ -647,7 +647,7 @@ static bool >> drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm, >> u64 addr, u64 range) >> { >> - return !drm_gpuvm_check_overflow(addr, range) && >> + return !drm_gpuvm_check_overflow(gpuvm, addr, range) && >> drm_gpuvm_in_mm_range(gpuvm, addr, range) && >> !drm_gpuvm_in_kernel_node(gpuvm, addr, range); > > >> } >> @@ -656,6 +656,7 @@ drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm, >> * drm_gpuvm_init() - initialize a &drm_gpuvm >> * @gpuvm: pointer to the &drm_gpuvm to initialize >> * @name: the name of the GPU VA space >> + * @drm: the &drm_device this VM resides in >> * @start_offset: the start offset of the GPU VA space >> * @range: the size of the GPU VA space >> * @reserve_offset: the start of the 
>> kernel reserved GPU VA area
>> @@ -668,8 +669,8 @@ drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm,
>>   * &name is expected to be managed by the surrounding driver structures.
>>   */
>>  void
>> -drm_gpuvm_init(struct drm_gpuvm *gpuvm,
>> -	       const char *name,
>> +drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
>> +	       struct drm_device *drm,
>> 	       u64 start_offset, u64 range,
>> 	       u64 reserve_offset, u64 reserve_range,
>> 	       const struct drm_gpuvm_ops *ops)
>> @@ -677,20 +678,20 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm,
>>  	gpuvm->rb.tree = RB_ROOT_CACHED;
>>  	INIT_LIST_HEAD(&gpuvm->rb.list);
>>
>> -	drm_gpuvm_check_overflow(start_offset, range);
>> -	gpuvm->mm_start = start_offset;
>> -	gpuvm->mm_range = range;
>> -
>>  	gpuvm->name = name ? name : "unknown";
>>  	gpuvm->ops = ops;
>> +	gpuvm->drm = drm;
>>
>> -	memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct drm_gpuva));
>> +	drm_gpuvm_check_overflow(gpuvm, start_offset, range);
>> +	gpuvm->mm_start = start_offset;
>> +	gpuvm->mm_range = range;
>>
>> +	memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct drm_gpuva));
>>  	if (reserve_range) {
>>  		gpuvm->kernel_alloc_node.va.addr = reserve_offset;
>>  		gpuvm->kernel_alloc_node.va.range = reserve_range;
>>
>> -		if (likely(!drm_gpuvm_check_overflow(reserve_offset,
>> +		if (likely(!drm_gpuvm_check_overflow(gpuvm, reserve_offset,
>> 						     reserve_range)))
>>  			__drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
>>  	}
>> @@ -712,8 +713,8 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
>>  	if (gpuvm->kernel_alloc_node.va.range)
>>  		__drm_gpuva_remove(&gpuvm->kernel_alloc_node);
>>
>> -	WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
>> -	     "GPUVA tree is not empty, potentially leaking memory.");
>> +	drm_WARN(gpuvm->drm, !RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
>> +		 "GPUVA tree is not empty, potentially leaking memory.\n");
>>  }
>>  EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
>>
>> @@ -795,7 +796,8 @@ drm_gpuva_remove(struct drm_gpuva *va)
>>  	struct drm_gpuvm *gpuvm = va->vm;
>>
>>  	if (unlikely(va == &gpuvm->kernel_alloc_node)) {
>> -		WARN(1, "Can't destroy kernel reserved node.\n");
>> +		drm_WARN(gpuvm->drm, 1,
>> +			 "Can't destroy kernel reserved node.\n");
>>  		return;
>>  	}
>>
>> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>> index 5cf892c50f43..aaf5d28bd587 100644
>> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>> @@ -1808,6 +1808,7 @@ int
>>  nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
>> 		  u64 kernel_managed_addr, u64 kernel_managed_size)
>>  {
>> +	struct drm_device *drm = cli->drm->dev;
>>  	int ret;
>>  	u64 kernel_managed_end = kernel_managed_addr + kernel_managed_size;
>>
>> @@ -1836,7 +1837,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
>>  	uvmm->kernel_managed_addr = kernel_managed_addr;
>>  	uvmm->kernel_managed_size = kernel_managed_size;
>>
>> -	drm_gpuvm_init(&uvmm->base, cli->name,
>> +	drm_gpuvm_init(&uvmm->base, cli->name, drm,
>> 		       NOUVEAU_VA_SPACE_START,
>> 		       NOUVEAU_VA_SPACE_END,
>> 		       kernel_managed_addr, kernel_managed_size,
>> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
>> index bdfafc4a7705..687fd5893624 100644
>> --- a/include/drm/drm_gpuvm.h
>> +++ b/include/drm/drm_gpuvm.h
>> @@ -29,6 +29,7 @@
>>  #include <linux/rbtree.h>
>>  #include <linux/types.h>
>>
>> +#include <drm/drm_device.h>
>>  #include <drm/drm_gem.h>
>>
>>  struct drm_gpuvm;
>> @@ -201,6 +202,11 @@ struct drm_gpuvm {
>>  	 */
>>  	const char *name;
>>
>> +	/**
>> +	 * @drm: the &drm_device this VM lives in
>> +	 */
>
> Could a one-liner do?
> /** <comment> */

There are a few more existing ones that could be a one-liner as well and
I like consistency. If you think it's preferable to keep those ones in
one line, I'd probably do it for all in a follow-up patch.

>
>> +	struct drm_device *drm;
>> +
>>  	/**
>>  	 * @mm_start: start of the VA space
>>  	 */
>> @@ -241,6 +247,7 @@ struct drm_gpuvm {
>>  };
>>
>>  void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
>> +		    struct drm_device *drm,
>> 		    u64 start_offset, u64 range,
>> 		    u64 reserve_offset, u64 reserve_range,
>> 		    const struct drm_gpuvm_ops *ops);
>
> I figure Christian's comment can be addressed in a follow-up patch if
> needed.

I already addressed his comment in a local branch, I can also just add
the patch to this series.

>
> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>
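For reference, the two kernel-doc forms under discussion (multi-line vs.
one-liner) would look like this; `drm` is the member from this patch, while
`flags` is just a hypothetical member used to show the collapsed form:

```c
struct drm_gpuvm_doc_example {
	/**
	 * @drm: the &drm_device this VM lives in
	 */
	struct drm_device *drm;

	/** @flags: hypothetical member, same comment collapsed to one line */
	unsigned int flags;
};
```

Both forms are valid kernel-doc; the question raised above is purely one of
consistency with the surrounding comments.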
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 08c088319652..d7367a202fee 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -614,12 +614,12 @@ static int __drm_gpuva_insert(struct drm_gpuvm *gpuvm,
 static void __drm_gpuva_remove(struct drm_gpuva *va);
 
 static bool
-drm_gpuvm_check_overflow(u64 addr, u64 range)
+drm_gpuvm_check_overflow(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
 {
 	u64 end;
 
-	return WARN(check_add_overflow(addr, range, &end),
-		    "GPUVA address limited to %zu bytes.\n", sizeof(end));
+	return drm_WARN(gpuvm->drm, check_add_overflow(addr, range, &end),
+			"GPUVA address limited to %zu bytes.\n", sizeof(end));
 }
 
 static bool
@@ -647,7 +647,7 @@ static bool
 drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm,
 		      u64 addr, u64 range)
 {
-	return !drm_gpuvm_check_overflow(addr, range) &&
+	return !drm_gpuvm_check_overflow(gpuvm, addr, range) &&
 	       drm_gpuvm_in_mm_range(gpuvm, addr, range) &&
 	       !drm_gpuvm_in_kernel_node(gpuvm, addr, range);
 }
@@ -656,6 +656,7 @@ drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm,
  * drm_gpuvm_init() - initialize a &drm_gpuvm
  * @gpuvm: pointer to the &drm_gpuvm to initialize
  * @name: the name of the GPU VA space
+ * @drm: the &drm_device this VM resides in
  * @start_offset: the start offset of the GPU VA space
  * @range: the size of the GPU VA space
  * @reserve_offset: the start of the kernel reserved GPU VA area
@@ -668,8 +669,8 @@ drm_gpuvm_range_valid(struct drm_gpuvm *gpuvm,
  * &name is expected to be managed by the surrounding driver structures.
  */
 void
-drm_gpuvm_init(struct drm_gpuvm *gpuvm,
-	       const char *name,
+drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
+	       struct drm_device *drm,
 	       u64 start_offset, u64 range,
 	       u64 reserve_offset, u64 reserve_range,
 	       const struct drm_gpuvm_ops *ops)
@@ -677,20 +678,20 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm,
 	gpuvm->rb.tree = RB_ROOT_CACHED;
 	INIT_LIST_HEAD(&gpuvm->rb.list);
 
-	drm_gpuvm_check_overflow(start_offset, range);
-	gpuvm->mm_start = start_offset;
-	gpuvm->mm_range = range;
-
 	gpuvm->name = name ? name : "unknown";
 	gpuvm->ops = ops;
+	gpuvm->drm = drm;
 
-	memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct drm_gpuva));
+	drm_gpuvm_check_overflow(gpuvm, start_offset, range);
+	gpuvm->mm_start = start_offset;
+	gpuvm->mm_range = range;
 
+	memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct drm_gpuva));
 	if (reserve_range) {
 		gpuvm->kernel_alloc_node.va.addr = reserve_offset;
 		gpuvm->kernel_alloc_node.va.range = reserve_range;
 
-		if (likely(!drm_gpuvm_check_overflow(reserve_offset,
+		if (likely(!drm_gpuvm_check_overflow(gpuvm, reserve_offset,
 						     reserve_range)))
 			__drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
 	}
@@ -712,8 +713,8 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
 	if (gpuvm->kernel_alloc_node.va.range)
 		__drm_gpuva_remove(&gpuvm->kernel_alloc_node);
 
-	WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
-	     "GPUVA tree is not empty, potentially leaking memory.");
+	drm_WARN(gpuvm->drm, !RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
+		 "GPUVA tree is not empty, potentially leaking memory.\n");
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
 
@@ -795,7 +796,8 @@ drm_gpuva_remove(struct drm_gpuva *va)
 	struct drm_gpuvm *gpuvm = va->vm;
 
 	if (unlikely(va == &gpuvm->kernel_alloc_node)) {
-		WARN(1, "Can't destroy kernel reserved node.\n");
+		drm_WARN(gpuvm->drm, 1,
+			 "Can't destroy kernel reserved node.\n");
 		return;
 	}
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index 5cf892c50f43..aaf5d28bd587 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -1808,6 +1808,7 @@ int
 nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
 		  u64 kernel_managed_addr, u64 kernel_managed_size)
 {
+	struct drm_device *drm = cli->drm->dev;
 	int ret;
 	u64 kernel_managed_end = kernel_managed_addr + kernel_managed_size;
 
@@ -1836,7 +1837,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
 	uvmm->kernel_managed_addr = kernel_managed_addr;
 	uvmm->kernel_managed_size = kernel_managed_size;
 
-	drm_gpuvm_init(&uvmm->base, cli->name,
+	drm_gpuvm_init(&uvmm->base, cli->name, drm,
 		       NOUVEAU_VA_SPACE_START,
 		       NOUVEAU_VA_SPACE_END,
 		       kernel_managed_addr, kernel_managed_size,
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index bdfafc4a7705..687fd5893624 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -29,6 +29,7 @@
 #include <linux/rbtree.h>
 #include <linux/types.h>
 
+#include <drm/drm_device.h>
 #include <drm/drm_gem.h>
 
 struct drm_gpuvm;
@@ -201,6 +202,11 @@ struct drm_gpuvm {
 	 */
 	const char *name;
 
+	/**
+	 * @drm: the &drm_device this VM lives in
+	 */
+	struct drm_device *drm;
+
 	/**
 	 * @mm_start: start of the VA space
 	 */
@@ -241,6 +247,7 @@ struct drm_gpuvm {
 };
 
 void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
+		    struct drm_device *drm,
 		    u64 start_offset, u64 range,
 		    u64 reserve_offset, u64 reserve_range,
 		    const struct drm_gpuvm_ops *ops);