From patchwork Thu Jan 12 16:55:58 2023
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 42671
From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Tejun Heo, Johannes Weiner, Zefan Li, Dave Airlie, Daniel Vetter, Rob Clark, Stéphane Marchesin, "T. J. Mercier", Kenny.Ho@amd.com, Christian König, Brian Welty, Tvrtko Ursulin, Zack Rusin, linux-graphics-maintainer@vmware.com, Alex Deucher
Subject: [RFC 01/12] drm: Track clients by tgid and not tid
Date: Thu, 12 Jan 2023 16:55:58 +0000
Message-Id: <20230112165609.1083270-2-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20230112165609.1083270-1-tvrtko.ursulin@linux.intel.com>

From: Tvrtko Ursulin

Thread group id (aka pid from userspace point of view) is a more interesting thing to
show as an owner of a DRM fd, so track and show that instead of the thread id.

In the next patch we will make the owner updated post file descriptor handover, which will also be tgid based to avoid ping-pong when multiple threads access the fd.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Zack Rusin
Cc: linux-graphics-maintainer@vmware.com
Cc: Alex Deucher
Cc: "Christian König"
Reviewed-by: Zack Rusin
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 2 +-
 drivers/gpu/drm/drm_debugfs.c           | 4 ++--
 drivers/gpu/drm/drm_file.c              | 2 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_gem.c     | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 00edc7002ee8..ca0181332578 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -969,7 +969,7 @@ static int amdgpu_debugfs_gem_info_show(struct seq_file *m, void *unused)
 		 * Therefore, we need to protect this ->comm access using RCU.
 		 */
 		rcu_read_lock();
-		task = pid_task(file->pid, PIDTYPE_PID);
+		task = pid_task(file->pid, PIDTYPE_TGID);
 		seq_printf(m, "pid %8d command %s:\n", pid_nr(file->pid),
 			   task ? task->comm : "<unknown>");
 		rcu_read_unlock();
diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c
index 4f643a490dc3..4855230ba2c6 100644
--- a/drivers/gpu/drm/drm_debugfs.c
+++ b/drivers/gpu/drm/drm_debugfs.c
@@ -80,7 +80,7 @@ static int drm_clients_info(struct seq_file *m, void *data)
 	seq_printf(m,
 		   "%20s %5s %3s master a %5s %10s\n",
 		   "command",
-		   "pid",
+		   "tgid",
 		   "dev",
 		   "uid",
 		   "magic");
@@ -94,7 +94,7 @@ static int drm_clients_info(struct seq_file *m, void *data)
 		bool is_current_master = drm_is_current_master(priv);
 
 		rcu_read_lock(); /* locks pid_task()->comm */
-		task = pid_task(priv->pid, PIDTYPE_PID);
+		task = pid_task(priv->pid, PIDTYPE_TGID);
 		uid = task ? __task_cred(task)->euid : GLOBAL_ROOT_UID;
 		seq_printf(m, "%20s %5d %3d   %c    %c %5d %10u\n",
 			   task ? task->comm : "<unknown>",
diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
index a51ff8cee049..c1018c470047 100644
--- a/drivers/gpu/drm/drm_file.c
+++ b/drivers/gpu/drm/drm_file.c
@@ -156,7 +156,7 @@ struct drm_file *drm_file_alloc(struct drm_minor *minor)
 	if (!file)
 		return ERR_PTR(-ENOMEM);
 
-	file->pid = get_pid(task_pid(current));
+	file->pid = get_pid(task_tgid(current));
 	file->minor = minor;
 
 	/* for compatibility root is always authenticated */
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
index ce609e7d758f..f2985337aa53 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
@@ -260,7 +260,7 @@ static int vmw_debugfs_gem_info_show(struct seq_file *m, void *unused)
 		 * Therefore, we need to protect this ->comm access using RCU.
 		 */
 		rcu_read_lock();
-		task = pid_task(file->pid, PIDTYPE_PID);
+		task = pid_task(file->pid, PIDTYPE_TGID);
 		seq_printf(m, "pid %8d command %s:\n", pid_nr(file->pid),
 			   task ? task->comm : "<unknown>");
 		rcu_read_unlock();

From patchwork Thu Jan 12 16:55:59 2023
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 42676
From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Tejun Heo, Johannes Weiner, Zefan Li, Dave Airlie, Daniel Vetter, Rob Clark, Stéphane Marchesin, "T. J. Mercier", Kenny.Ho@amd.com, Christian König, Brian Welty, Tvrtko Ursulin, Daniel Vetter
Subject: [RFC 02/12] drm: Update file owner during use
Date: Thu, 12 Jan 2023 16:55:59 +0000
Message-Id: <20230112165609.1083270-3-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20230112165609.1083270-1-tvrtko.ursulin@linux.intel.com>

From: Tvrtko Ursulin

With the typical model where the display server opens the file descriptor and then hands it over to the client we were showing stale data in debugfs.
Fix it by updating the drm_file->pid on ioctl access from a different process.

The field is also made RCU protected to allow for lockless readers. Update side is protected with dev->filelist_mutex.

Before:

$ cat /sys/kernel/debug/dri/0/clients
             command   pid dev master a   uid      magic
                Xorg  2344   0   y    y     0          0
                Xorg  2344   0   n    y     0          2
                Xorg  2344   0   n    y     0          3
                Xorg  2344   0   n    y     0          4

After:

$ cat /sys/kernel/debug/dri/0/clients
             command  tgid dev master a   uid      magic
                Xorg   830   0   y    y     0          0
       xfce4-session   880   0   n    y     0          1
               xfwm4   943   0   n    y     0          2
           neverball  1095   0   n    y     0          3

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: "Christian König"
Cc: Daniel Vetter
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c |  6 ++--
 drivers/gpu/drm/drm_auth.c              |  3 +-
 drivers/gpu/drm/drm_debugfs.c           | 10 ++++---
 drivers/gpu/drm/drm_file.c              | 40 +++++++++++++++++++++++--
 drivers/gpu/drm/drm_ioctl.c             |  3 ++
 drivers/gpu/drm/nouveau/nouveau_drm.c   |  5 +++-
 drivers/gpu/drm/vmwgfx/vmwgfx_gem.c     |  6 ++--
 include/drm/drm_file.h                  | 13 ++++++--
 8 files changed, 71 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index ca0181332578..3f22520552bb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -960,6 +960,7 @@ static int amdgpu_debugfs_gem_info_show(struct seq_file *m, void *unused)
 	list_for_each_entry(file, &dev->filelist, lhead) {
 		struct task_struct *task;
 		struct drm_gem_object *gobj;
+		struct pid *pid;
 		int id;
 
 		/*
@@ -969,8 +970,9 @@ static int amdgpu_debugfs_gem_info_show(struct seq_file *m, void *unused)
 		 * Therefore, we need to protect this ->comm access using RCU.
 		 */
 		rcu_read_lock();
-		task = pid_task(file->pid, PIDTYPE_TGID);
-		seq_printf(m, "pid %8d command %s:\n", pid_nr(file->pid),
+		pid = rcu_dereference(file->pid);
+		task = pid_task(pid, PIDTYPE_TGID);
+		seq_printf(m, "pid %8d command %s:\n", pid_nr(pid),
 			   task ? task->comm : "<unknown>");
 		rcu_read_unlock();
diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
index cf92a9ae8034..2ed2585ded37 100644
--- a/drivers/gpu/drm/drm_auth.c
+++ b/drivers/gpu/drm/drm_auth.c
@@ -235,7 +235,8 @@ static int drm_new_set_master(struct drm_device *dev, struct drm_file *fpriv)
 static int drm_master_check_perm(struct drm_device *dev,
 				 struct drm_file *file_priv)
 {
-	if (file_priv->pid == task_pid(current) && file_priv->was_master)
+	if (file_priv->was_master &&
+	    rcu_access_pointer(file_priv->pid) == task_pid(current))
 		return 0;
 
 	if (!capable(CAP_SYS_ADMIN))
diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c
index 4855230ba2c6..b46f5ceb24c6 100644
--- a/drivers/gpu/drm/drm_debugfs.c
+++ b/drivers/gpu/drm/drm_debugfs.c
@@ -90,15 +90,17 @@ static int drm_clients_info(struct seq_file *m, void *data)
 	 */
 	mutex_lock(&dev->filelist_mutex);
 	list_for_each_entry_reverse(priv, &dev->filelist, lhead) {
-		struct task_struct *task;
 		bool is_current_master = drm_is_current_master(priv);
+		struct task_struct *task;
+		struct pid *pid;
 
-		rcu_read_lock(); /* locks pid_task()->comm */
-		task = pid_task(priv->pid, PIDTYPE_TGID);
+		rcu_read_lock(); /* Locks priv->pid and pid_task()->comm! */
+		pid = rcu_dereference(priv->pid);
+		task = pid_task(pid, PIDTYPE_TGID);
 		uid = task ? __task_cred(task)->euid : GLOBAL_ROOT_UID;
 		seq_printf(m, "%20s %5d %3d   %c    %c %5d %10u\n",
 			   task ? task->comm : "<unknown>",
-			   pid_vnr(priv->pid),
+			   pid_vnr(pid),
 			   priv->minor->index,
 			   is_current_master ? 'y' : 'n',
 			   priv->authenticated ? 'y' : 'n',
diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
index c1018c470047..f2f8175ece15 100644
--- a/drivers/gpu/drm/drm_file.c
+++ b/drivers/gpu/drm/drm_file.c
@@ -156,7 +156,7 @@ struct drm_file *drm_file_alloc(struct drm_minor *minor)
 	if (!file)
 		return ERR_PTR(-ENOMEM);
 
-	file->pid = get_pid(task_tgid(current));
+	rcu_assign_pointer(file->pid, get_pid(task_tgid(current)));
 	file->minor = minor;
 
 	/* for compatibility root is always authenticated */
@@ -196,7 +196,7 @@ struct drm_file *drm_file_alloc(struct drm_minor *minor)
 		drm_syncobj_release(file);
 	if (drm_core_check_feature(dev, DRIVER_GEM))
 		drm_gem_release(dev, file);
-	put_pid(file->pid);
+	put_pid(rcu_access_pointer(file->pid));
 	kfree(file);
 
 	return ERR_PTR(ret);
@@ -287,7 +287,7 @@ void drm_file_free(struct drm_file *file)
 
 	WARN_ON(!list_empty(&file->event_list));
 
-	put_pid(file->pid);
+	put_pid(rcu_access_pointer(file->pid));
 	kfree(file);
 }
 
@@ -501,6 +501,40 @@ int drm_release(struct inode *inode, struct file *filp)
 }
 EXPORT_SYMBOL(drm_release);
 
+void drm_file_update_pid(struct drm_file *filp)
+{
+	struct drm_device *dev;
+	struct pid *pid, *old;
+
+	/*
+	 * Master nodes need to keep the original ownership in order for
+	 * drm_master_check_perm to keep working correctly. (See comment in
+	 * drm_auth.c.)
+	 */
+	if (filp->was_master)
+		return;
+
+	pid = task_tgid(current);
+
+	/*
+	 * Quick unlocked check since the model is a single handover followed
+	 * by exclusive repeated use.
+	 */
+	if (pid == rcu_access_pointer(filp->pid))
+		return;
+
+	dev = filp->minor->dev;
+	mutex_lock(&dev->filelist_mutex);
+	old = rcu_replace_pointer(filp->pid, pid, 1);
+	mutex_unlock(&dev->filelist_mutex);
+
+	if (pid != old) {
+		get_pid(pid);
+		synchronize_rcu();
+		put_pid(old);
+	}
+}
+
 /**
  * drm_release_noglobal - release method for DRM file
  * @inode: device inode
diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
index 7c9d66ee917d..305b18d9d7b6 100644
--- a/drivers/gpu/drm/drm_ioctl.c
+++ b/drivers/gpu/drm/drm_ioctl.c
@@ -775,6 +775,9 @@ long drm_ioctl_kernel(struct file *file, drm_ioctl_t *func, void *kdata,
 	struct drm_device *dev = file_priv->minor->dev;
 	int retcode;
 
+	/* Update drm_file owner if fd was passed along. */
+	drm_file_update_pid(file_priv);
+
 	if (drm_dev_is_unplugged(dev))
 		return -ENODEV;
diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
index 80f154b6adab..a763d3ee61fb 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
@@ -1097,7 +1097,10 @@ nouveau_drm_open(struct drm_device *dev, struct drm_file *fpriv)
 	}
 
 	get_task_comm(tmpname, current);
-	snprintf(name, sizeof(name), "%s[%d]", tmpname, pid_nr(fpriv->pid));
+	rcu_read_lock();
+	snprintf(name, sizeof(name), "%s[%d]",
+		 tmpname, pid_nr(rcu_dereference(fpriv->pid)));
+	rcu_read_unlock();
 
 	if (!(cli = kzalloc(sizeof(*cli), GFP_KERNEL))) {
 		ret = -ENOMEM;
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
index f2985337aa53..3853d9bb9ab8 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
@@ -251,6 +251,7 @@ static int vmw_debugfs_gem_info_show(struct seq_file *m, void *unused)
 	list_for_each_entry(file, &dev->filelist, lhead) {
 		struct task_struct *task;
 		struct drm_gem_object *gobj;
+		struct pid *pid;
 		int id;
 
 		/*
@@ -260,8 +261,9 @@ static int vmw_debugfs_gem_info_show(struct seq_file *m, void *unused)
 		 * Therefore, we need to protect this ->comm access using RCU.
 		 */
 		rcu_read_lock();
-		task = pid_task(file->pid, PIDTYPE_TGID);
-		seq_printf(m, "pid %8d command %s:\n", pid_nr(file->pid),
+		pid = rcu_dereference(file->pid);
+		task = pid_task(pid, PIDTYPE_TGID);
+		seq_printf(m, "pid %8d command %s:\n", pid_nr(pid),
 			   task ? task->comm : "<unknown>");
 		rcu_read_unlock();
diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
index 0d1f853092ab..27d545131d4a 100644
--- a/include/drm/drm_file.h
+++ b/include/drm/drm_file.h
@@ -255,8 +255,15 @@ struct drm_file {
 	/** @master_lookup_lock: Serializes @master. */
 	spinlock_t master_lookup_lock;
 
-	/** @pid: Process that opened this file. */
-	struct pid *pid;
+	/**
+	 * @pid: Process that is using this file.
+	 *
+	 * Must only be dereferenced under a rcu_read_lock or equivalent.
+	 *
+	 * Updates are guarded with dev->filelist_mutex and reference must be
+	 * dropped after a RCU grace period to accommodate lockless readers.
+	 */
+	struct pid __rcu *pid;
 
 	/** @magic: Authentication magic, see @authenticated. */
 	drm_magic_t magic;
@@ -415,6 +422,8 @@ static inline bool drm_is_accel_client(const struct drm_file *file_priv)
 	return file_priv->minor->type == DRM_MINOR_ACCEL;
 }
 
+void drm_file_update_pid(struct drm_file *);
+
 int drm_open(struct inode *inode, struct file *filp);
 int drm_open_helper(struct file *filp, struct drm_minor *minor);
 ssize_t drm_read(struct file *filp, char __user *buffer,

From patchwork Thu Jan 12 16:56:00 2023
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 42673
From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Tejun Heo, Johannes Weiner, Zefan Li, Dave Airlie, Daniel Vetter, Rob Clark, Stéphane Marchesin, "T. J. Mercier", Kenny.Ho@amd.com, Christian König, Brian Welty, Tvrtko Ursulin
Subject: [RFC 03/12] cgroup: Add the DRM cgroup controller
Date: Thu, 12 Jan 2023 16:56:00 +0000
Message-Id: <20230112165609.1083270-4-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20230112165609.1083270-1-tvrtko.ursulin@linux.intel.com>

From: Tvrtko Ursulin

Skeleton controller without any functionality.
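[Editor's illustration, not part of the original patch.] Like any cgroup v2 controller registered via cgroup_subsys, a "drm" controller would be enabled through the standard unified-hierarchy interfaces. A hedged usage sketch, assuming the controller is built in (CONFIG_CGROUP_DRM) and the group name "gfx" is arbitrary:

```shell
# Unified hierarchy is usually already mounted at /sys/fs/cgroup.
cat /sys/fs/cgroup/cgroup.controllers          # "drm" would appear here

# Delegate the drm controller to child groups.
echo "+drm" > /sys/fs/cgroup/cgroup.subtree_control

# Child groups created afterwards get the controller's interface files.
mkdir /sys/fs/cgroup/gfx
ls /sys/fs/cgroup/gfx                          # drm.* files, once the
                                               # controller grows some
```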
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
---
 include/linux/cgroup_drm.h    |  9 ++++++
 include/linux/cgroup_subsys.h |  4 +++
 init/Kconfig                  |  7 +++++
 kernel/cgroup/Makefile        |  1 +
 kernel/cgroup/drm.c           | 59 +++++++++++++++++++++++++++++++++++
 5 files changed, 80 insertions(+)
 create mode 100644 include/linux/cgroup_drm.h
 create mode 100644 kernel/cgroup/drm.c

diff --git a/include/linux/cgroup_drm.h b/include/linux/cgroup_drm.h
new file mode 100644
index 000000000000..bf8abc6b8ebf
--- /dev/null
+++ b/include/linux/cgroup_drm.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2022 Intel Corporation
+ */
+
+#ifndef _CGROUP_DRM_H
+#define _CGROUP_DRM_H
+
+#endif	/* _CGROUP_DRM_H */
diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index 445235487230..49460494a010 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -65,6 +65,10 @@ SUBSYS(rdma)
 SUBSYS(misc)
 #endif
 
+#if IS_ENABLED(CONFIG_CGROUP_DRM)
+SUBSYS(drm)
+#endif
+
 /*
  * The following subsystems are not supported on the default hierarchy.
  */
diff --git a/init/Kconfig b/init/Kconfig
index 7e5c3ddc341d..c5ace0d57007 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1089,6 +1089,13 @@ config CGROUP_RDMA
 	  Attaching processes with active RDMA resources to the cgroup
 	  hierarchy is allowed even if can cross the hierarchy's limit.
 
+config CGROUP_DRM
+	bool "DRM controller"
+	help
+	  Provides the DRM subsystem controller.
+
+	  ...
+
 config CGROUP_FREEZER
 	bool "Freezer controller"
 	help
diff --git a/kernel/cgroup/Makefile b/kernel/cgroup/Makefile
index 12f8457ad1f9..849bd2917477 100644
--- a/kernel/cgroup/Makefile
+++ b/kernel/cgroup/Makefile
@@ -6,4 +6,5 @@ obj-$(CONFIG_CGROUP_PIDS) += pids.o
 obj-$(CONFIG_CGROUP_RDMA) += rdma.o
 obj-$(CONFIG_CPUSETS) += cpuset.o
 obj-$(CONFIG_CGROUP_MISC) += misc.o
+obj-$(CONFIG_CGROUP_DRM) += drm.o
 obj-$(CONFIG_CGROUP_DEBUG) += debug.o
diff --git a/kernel/cgroup/drm.c b/kernel/cgroup/drm.c
new file mode 100644
index 000000000000..48a8d646a094
--- /dev/null
+++ b/kernel/cgroup/drm.c
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2022 Intel Corporation
+ */
+
+#include <linux/cgroup.h>
+#include <linux/cgroup_drm.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+
+struct drm_cgroup_state {
+	struct cgroup_subsys_state css;
+};
+
+struct drm_root_cgroup_state {
+	struct drm_cgroup_state drmcs;
+};
+
+static inline struct drm_cgroup_state *
+css_to_drmcs(struct cgroup_subsys_state *css)
+{
+	return container_of(css, struct drm_cgroup_state, css);
+}
+
+static struct drm_root_cgroup_state root_drmcs;
+
+static void drmcs_free(struct cgroup_subsys_state *css)
+{
+	if (css != &root_drmcs.drmcs.css)
+		kfree(css_to_drmcs(css));
+}
+
+static struct cgroup_subsys_state *
+drmcs_alloc(struct cgroup_subsys_state *parent_css)
+{
+	struct drm_cgroup_state *drmcs;
+
+	if (!parent_css) {
+		drmcs = &root_drmcs.drmcs;
+	} else {
+		drmcs = kzalloc(sizeof(*drmcs), GFP_KERNEL);
+		if (!drmcs)
+			return ERR_PTR(-ENOMEM);
+	}
+
+	return &drmcs->css;
+}
+
+struct cftype files[] = {
+	{ } /* Zero entry terminates. */
+};
+
+struct cgroup_subsys drm_cgrp_subsys = {
+	.css_alloc	= drmcs_alloc,
+	.css_free	= drmcs_free,
+	.early_init	= false,
+	.legacy_cftypes	= files,
+	.dfl_cftypes	= files,
+};

From patchwork Thu Jan 12 16:56:01 2023
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 42674
domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id e13-20020a170902d38d00b00191001a1795si17020612pld.181.2023.01.12.09.38.04; Thu, 12 Jan 2023 09:38:16 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=koY1NszC; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231787AbjALRhE (ORCPT + 99 others); Thu, 12 Jan 2023 12:37:04 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54508 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232449AbjALRgN (ORCPT ); Thu, 12 Jan 2023 12:36:13 -0500 Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 30E8FCA268; Thu, 12 Jan 2023 08:57:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1673542675; x=1705078675; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=LFWfaBaBSYjaapQ05IsoDGjzc+hjHAEDqfXfuRz8U04=; b=koY1NszC6Xx0lSaW+SSnGuUt/W/IiW2A1SAcCFT4rwDUZoWUqYk0gEWB RfRL/M4S2S/PKXmjCxQO+ZQjb4P0nmLKl7vpHfmO//2TAVx358gCp4GID b64aVdOfj16vKssbNrinIjorKpfQyVwmsmHmIApnCHxmiWq2ETJRLI5+z sbQRcEG3boT5Ia5IFAlBLJmA5A+r5gsI5s9mDtmfji+RCOjdYHcUOK+0a 
DfdaksDjDp0qNbsKFncZtmP9adlP9PBQBSwJw8NDkNmiy1LoIRIfcyMGT /0co6a15XoX9mGmdNuyNbef2DyYyyyq/w6NzIH8AFA7a6Oju1D9L1jrfh A==; X-IronPort-AV: E=McAfee;i="6500,9779,10588"; a="325016400" X-IronPort-AV: E=Sophos;i="5.97,211,1669104000"; d="scan'208";a="325016400" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Jan 2023 08:56:42 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10588"; a="651232669" X-IronPort-AV: E=Sophos;i="5.97,211,1669104000"; d="scan'208";a="651232669" Received: from jacton-mobl.ger.corp.intel.com (HELO localhost.localdomain) ([10.213.195.171]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Jan 2023 08:56:38 -0800 From: Tvrtko Ursulin To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Tejun Heo , Johannes Weiner , Zefan Li , Dave Airlie , Daniel Vetter , Rob Clark , =?utf-8?q?St=C3=A9phane_Marchesin?= , "T . J . 
Mercier" , Kenny.Ho@amd.com, =?utf-8?q?Chris?= =?utf-8?q?tian_K=C3=B6nig?= , Brian Welty , Tvrtko Ursulin Subject: [RFC 04/12] drm/cgroup: Track clients per owning process Date: Thu, 12 Jan 2023 16:56:01 +0000 Message-Id: <20230112165609.1083270-5-tvrtko.ursulin@linux.intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230112165609.1083270-1-tvrtko.ursulin@linux.intel.com> References: <20230112165609.1083270-1-tvrtko.ursulin@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-3.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_EF,HK_RANDOM_ENVFROM,HK_RANDOM_FROM, RCVD_IN_DNSWL_MED,RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,SPF_HELO_NONE, SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1754839223464576614?= X-GMAIL-MSGID: =?utf-8?q?1754839223464576614?= From: Tvrtko Ursulin To enable propagation of settings from the cgroup drm controller to drm we need to start tracking which processes own which drm clients. Implement that by tracking the struct pid pointer of the owning process in a new XArray, pointing to a structure containing a list of associated struct drm_file pointers. Clients are added and removed under the filelist mutex and RCU list operations are used below it to allow for lockless lookup. 
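[The bucket lifetime rule this patch implements — a per-pid structure allocated when the first client for that pid appears, and freed when the last one closes — can be sketched in plain userspace C. This is an illustrative model only: a fixed-size table stands in for the kernel XArray, a plain int for the atomic_t counter, collisions and the RCU list are ignored, and all names are invented for the sketch.]

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for struct drm_pid_clients: one bucket per owning pid. */
struct pid_clients {
        long pid;       /* key; the kernel keys on the struct pid pointer */
        int num;        /* client count; atomic_t in the kernel */
};

#define MAX_PIDS 64
static struct pid_clients *table[MAX_PIDS];     /* stand-in for the XArray */

/* Mirrors the drm_clients_open() idea: allocate the bucket on first use. */
static int client_open(long pid)
{
        struct pid_clients *c = table[pid % MAX_PIDS];

        if (!c) {
                c = calloc(1, sizeof(*c));
                if (!c)
                        return -1;
                c->pid = pid;
                table[pid % MAX_PIDS] = c;
        }
        c->num++;
        return 0;
}

/* Mirrors the drm_clients_close() idea: last client frees the bucket. */
static void client_close(long pid)
{
        struct pid_clients *c = table[pid % MAX_PIDS];

        if (!c)
                return;
        if (--c->num == 0) {
                table[pid % MAX_PIDS] = NULL;
                free(c);
        }
}
```

[In the real patch both paths run under dev->filelist_mutex, and the free is deferred with kfree_rcu() so lockless readers of the file list stay safe.]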
Signed-off-by: Tvrtko Ursulin
---
 drivers/gpu/drm/Makefile     |   1 +
 drivers/gpu/drm/drm_cgroup.c | 133 +++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/drm_file.c   |  20 ++--
 include/drm/drm_clients.h    |  44 ++++++++++++
 include/drm/drm_file.h       |   4 ++
 5 files changed, 198 insertions(+), 4 deletions(-)
 create mode 100644 drivers/gpu/drm/drm_cgroup.c
 create mode 100644 include/drm/drm_clients.h

diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 496fa5a6147a..c036b1b379c4 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -59,6 +59,7 @@ drm-$(CONFIG_DRM_LEGACY) += \
         drm_scatter.o \
         drm_vm.o
 drm-$(CONFIG_DRM_LIB_RANDOM) += lib/drm_random.o
+drm-$(CONFIG_CGROUP_DRM) += drm_cgroup.o
 drm-$(CONFIG_COMPAT) += drm_ioc32.o
 drm-$(CONFIG_DRM_PANEL) += drm_panel.o
 drm-$(CONFIG_OF) += drm_of.o
diff --git a/drivers/gpu/drm/drm_cgroup.c b/drivers/gpu/drm/drm_cgroup.c
new file mode 100644
index 000000000000..d91512a560ff
--- /dev/null
+++ b/drivers/gpu/drm/drm_cgroup.c
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2022 Intel Corporation
+ */
+
+#include
+#include
+#include
+
+static DEFINE_XARRAY(drm_pid_clients);
+
+static void
+__del_clients(struct drm_pid_clients *clients,
+              struct drm_file *file_priv,
+              unsigned long pid)
+{
+        list_del_rcu(&file_priv->clink);
+        if (atomic_dec_and_test(&clients->num)) {
+                xa_erase(&drm_pid_clients, pid);
+                kfree_rcu(clients, rcu);
+        }
+}
+
+void drm_clients_close(struct drm_file *file_priv)
+{
+        struct drm_device *dev = file_priv->minor->dev;
+        struct drm_pid_clients *clients;
+        struct pid *pid;
+
+        lockdep_assert_held(&dev->filelist_mutex);
+
+        pid = rcu_access_pointer(file_priv->pid);
+        clients = xa_load(&drm_pid_clients, (unsigned long)pid);
+        if (drm_WARN_ON_ONCE(dev, !clients))
+                return;
+
+        __del_clients(clients, file_priv, (unsigned long)pid);
+}
+
+static struct drm_pid_clients *__alloc_clients(void)
+{
+        struct drm_pid_clients *clients;
+
+        clients = kmalloc(sizeof(*clients), GFP_KERNEL);
+        if (clients) {
+                atomic_set(&clients->num, 0);
+                INIT_LIST_HEAD(&clients->file_list);
+                init_rcu_head(&clients->rcu);
+        }
+
+        return clients;
+}
+
+int drm_clients_open(struct drm_file *file_priv)
+{
+        struct drm_device *dev = file_priv->minor->dev;
+        struct drm_pid_clients *clients;
+        bool new_client = false;
+        unsigned long pid;
+
+        lockdep_assert_held(&dev->filelist_mutex);
+
+        pid = (unsigned long)rcu_access_pointer(file_priv->pid);
+        clients = xa_load(&drm_pid_clients, pid);
+        if (!clients) {
+                clients = __alloc_clients();
+                if (!clients)
+                        return -ENOMEM;
+                new_client = true;
+        }
+        atomic_inc(&clients->num);
+        list_add_tail_rcu(&file_priv->clink, &clients->file_list);
+        if (new_client) {
+                void *xret;
+
+                xret = xa_store(&drm_pid_clients, pid, clients, GFP_KERNEL);
+                if (xa_err(xret)) {
+                        list_del_init(&file_priv->clink);
+                        kfree(clients);
+                        return xa_err(xret);
+                }
+        }
+
+        return 0;
+}
+
+void
+drm_clients_migrate(struct drm_file *file_priv,
+                    struct pid *old,
+                    struct pid *new)
+{
+        struct drm_device *dev = file_priv->minor->dev;
+        struct drm_pid_clients *existing_clients;
+        struct drm_pid_clients *clients;
+
+        lockdep_assert_held(&dev->filelist_mutex);
+
+        existing_clients = xa_load(&drm_pid_clients, (unsigned long)new);
+        clients = xa_load(&drm_pid_clients, (unsigned long)old);
+
+        if (drm_WARN_ON_ONCE(dev, !clients))
+                return;
+        else if (drm_WARN_ON_ONCE(dev, clients == existing_clients))
+                return;
+
+        __del_clients(clients, file_priv, (unsigned long)old);
+
+        if (!existing_clients) {
+                void *xret;
+
+                clients = __alloc_clients();
+                if (!clients)
+                        goto err;
+
+                xret = xa_store(&drm_pid_clients, (unsigned long)new, clients,
+                                GFP_KERNEL);
+                if (xa_err(xret))
+                        goto err;
+        } else {
+                clients = existing_clients;
+        }
+
+        atomic_inc(&clients->num);
+        list_add_tail_rcu(&file_priv->clink, &clients->file_list);
+
+        return;
+
+err:
+        rcu_read_lock();
+        drm_warn(dev, "Failed to migrate client from pid %u to %u!\n",
+                 pid_nr(old), pid_nr(new));
+        rcu_read_unlock();
+}
diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
index f2f8175ece15..5cf446f721f8 100644
--- a/drivers/gpu/drm/drm_file.c
+++ b/drivers/gpu/drm/drm_file.c
@@ -40,6 +40,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
@@ -298,6 +299,7 @@ static void drm_close_helper(struct file *filp)
 
         mutex_lock(&dev->filelist_mutex);
         list_del(&file_priv->lhead);
+        drm_clients_close(file_priv);
         mutex_unlock(&dev->filelist_mutex);
 
         drm_file_free(file_priv);
@@ -349,10 +351,8 @@ int drm_open_helper(struct file *filp, struct drm_minor *minor)
 
         if (drm_is_primary_client(priv)) {
                 ret = drm_master_open(priv);
-                if (ret) {
-                        drm_file_free(priv);
-                        return ret;
-                }
+                if (ret)
+                        goto err_free;
         }
 
         filp->private_data = priv;
@@ -360,6 +360,9 @@ int drm_open_helper(struct file *filp, struct drm_minor *minor)
         priv->filp = filp;
 
         mutex_lock(&dev->filelist_mutex);
+        ret = drm_clients_open(priv);
+        if (ret)
+                goto err_unlock;
         list_add(&priv->lhead, &dev->filelist);
         mutex_unlock(&dev->filelist_mutex);
 
@@ -387,6 +390,13 @@ int drm_open_helper(struct file *filp, struct drm_minor *minor)
 #endif
 
         return 0;
+
+err_unlock:
+        mutex_unlock(&dev->filelist_mutex);
+err_free:
+        drm_file_free(priv);
+
+        return ret;
 }
 
 /**
@@ -526,6 +536,8 @@ void drm_file_update_pid(struct drm_file *filp)
         dev = filp->minor->dev;
         mutex_lock(&dev->filelist_mutex);
         old = rcu_replace_pointer(filp->pid, pid, 1);
+        if (pid != old)
+                drm_clients_migrate(filp, old, pid);
         mutex_unlock(&dev->filelist_mutex);
 
         if (pid != old) {
diff --git a/include/drm/drm_clients.h b/include/drm/drm_clients.h
new file mode 100644
index 000000000000..2732fffab3f0
--- /dev/null
+++ b/include/drm/drm_clients.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2022 Intel Corporation
+ */
+
+#ifndef _DRM_CLIENTS_H_
+#define _DRM_CLIENTS_H_
+
+#include
+
+#include
+
+struct drm_pid_clients {
+        atomic_t num;
+        struct list_head file_list;
+        struct rcu_head rcu;
+};
+
+#if IS_ENABLED(CONFIG_CGROUP_DRM)
+void drm_clients_close(struct drm_file *file_priv);
+int drm_clients_open(struct drm_file *file_priv);
+
+void
+drm_clients_migrate(struct drm_file *file_priv,
+                    struct pid *old, struct pid *new);
+#else
+static inline void drm_clients_close(struct drm_file *file_priv)
+{
+}
+
+static inline int drm_clients_open(struct drm_file *file_priv)
+{
+        return 0;
+}
+
+static inline void
+drm_clients_migrate(struct drm_file *file_priv,
+                    struct pid *old, struct pid *new)
+{
+
+}
+#endif
+
+#endif
diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
index 27d545131d4a..644c5b17d6a7 100644
--- a/include/drm/drm_file.h
+++ b/include/drm/drm_file.h
@@ -279,6 +279,10 @@ struct drm_file {
         /** @minor: &struct drm_minor for this file. */
         struct drm_minor *minor;
 
+#if IS_ENABLED(CONFIG_CGROUP_DRM)
+        struct list_head clink;
+#endif
+
         /**
          * @object_idr:
          *

From patchwork Thu Jan 12 16:56:02 2023
From: Tvrtko Ursulin
Subject: [RFC 05/12] drm/cgroup: Allow safe external access to file_priv
Date: Thu, 12 Jan 2023 16:56:02 +0000
Message-Id: <20230112165609.1083270-6-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20230112165609.1083270-1-tvrtko.ursulin@linux.intel.com>

Entry points from the cgroup subsystem into the drm cgroup controller
will need to walk the file_priv structures associated with registered
clients and, since those are not RCU protected, let's add a hack for now
to make this safe.

Signed-off-by: Tvrtko Ursulin
---
 drivers/gpu/drm/drm_cgroup.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/drm_cgroup.c b/drivers/gpu/drm/drm_cgroup.c
index d91512a560ff..46b012d2be42 100644
--- a/drivers/gpu/drm/drm_cgroup.c
+++ b/drivers/gpu/drm/drm_cgroup.c
@@ -18,6 +18,13 @@ __del_clients(struct drm_pid_clients *clients,
         if (atomic_dec_and_test(&clients->num)) {
                 xa_erase(&drm_pid_clients, pid);
                 kfree_rcu(clients, rcu);
+
+                /*
+                 * FIXME: file_priv is not RCU protected so we add this hack
+                 * to avoid any races with code which walks clients->file_list
+                 * and accesses file_priv.
+                 */
+                synchronize_rcu();
         }
 }

From patchwork Thu Jan 12 16:56:03 2023
From: Tvrtko Ursulin
Subject: [RFC 06/12] drm/cgroup: Add ability to query drm cgroup GPU time
Date: Thu, 12 Jan 2023 16:56:03 +0000
Message-Id: <20230112165609.1083270-7-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20230112165609.1083270-1-tvrtko.ursulin@linux.intel.com>

Add a driver callback and core helper which allow querying the time
spent on GPUs for processes belonging
to a group.

Signed-off-by: Tvrtko Ursulin
---
 drivers/gpu/drm/drm_cgroup.c | 24 ++++++++++++++++++++++++
 include/drm/drm_clients.h    |  2 ++
 include/drm/drm_drv.h        | 28 ++++++++++++++++++++++++++++
 3 files changed, 54 insertions(+)

diff --git a/drivers/gpu/drm/drm_cgroup.c b/drivers/gpu/drm/drm_cgroup.c
index 46b012d2be42..bc1e34f1a552 100644
--- a/drivers/gpu/drm/drm_cgroup.c
+++ b/drivers/gpu/drm/drm_cgroup.c
@@ -138,3 +138,27 @@ drm_clients_migrate(struct drm_file *file_priv,
                  pid_nr(old), pid_nr(new));
         rcu_read_unlock();
 }
+
+u64 drm_pid_get_active_time_us(struct pid *pid)
+{
+        struct drm_pid_clients *clients;
+        u64 total = 0;
+
+        rcu_read_lock();
+        clients = xa_load(&drm_pid_clients, (unsigned long)pid);
+        if (clients) {
+                struct drm_file *fpriv;
+
+                list_for_each_entry_rcu(fpriv, &clients->file_list, clink) {
+                        const struct drm_cgroup_ops *cg_ops =
+                                fpriv->minor->dev->driver->cg_ops;
+
+                        if (cg_ops && cg_ops->active_time_us)
+                                total += cg_ops->active_time_us(fpriv);
+                }
+        }
+        rcu_read_unlock();
+
+        return total;
+}
+EXPORT_SYMBOL_GPL(drm_pid_get_active_time_us);
diff --git a/include/drm/drm_clients.h b/include/drm/drm_clients.h
index 2732fffab3f0..7e0c0cf14f25 100644
--- a/include/drm/drm_clients.h
+++ b/include/drm/drm_clients.h
@@ -41,4 +41,6 @@ drm_clients_migrate(struct drm_file *file_priv,
 }
 #endif
 
+u64 drm_pid_get_active_time_us(struct pid *pid);
+
 #endif
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index d7c521e8860f..f5f0e088e1fe 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -158,6 +158,24 @@ enum drm_driver_feature {
         DRIVER_KMS_LEGACY_CONTEXT = BIT(31),
 };
 
+/**
+ * struct drm_cgroup_ops
+ *
+ * This structure contains a number of callbacks that drivers can provide if
+ * they are able to support one or more of the functionalities implemented by
+ * the DRM cgroup controller.
+ */
+struct drm_cgroup_ops {
+        /**
+         * @active_time_us:
+         *
+         * Optional callback for reporting the GPU time consumed by this client.
+ * + * Used by the DRM core when queried by the DRM cgroup controller. + */ + u64 (*active_time_us) (struct drm_file *); +}; + /** * struct drm_driver - DRM driver structure * @@ -469,6 +487,16 @@ struct drm_driver { */ const struct file_operations *fops; +#ifdef CONFIG_CGROUP_DRM + /** + * @cg_ops: + * + * Optional pointer to driver callbacks facilitating integration with + * the DRM cgroup controller. + */ + const struct drm_cgroup_ops *cg_ops; +#endif + #ifdef CONFIG_DRM_LEGACY /* Everything below here is for legacy driver, never use! */ /* private: */ From patchwork Thu Jan 12 16:56:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tvrtko Ursulin X-Patchwork-Id: 42677 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:4e01:0:0:0:0:0 with SMTP id p1csp4013929wrt; Thu, 12 Jan 2023 09:39:34 -0800 (PST) X-Google-Smtp-Source: AMrXdXuaQfqvI6B7QEeT7GzhQ9IFYRrT7AsOwNAxi2dH+o7mfga5UpjX7KtU3GTsGr78L/jGX6oP X-Received: by 2002:a62:1b85:0:b0:576:450d:6e68 with SMTP id b127-20020a621b85000000b00576450d6e68mr70226456pfb.27.1673545174420; Thu, 12 Jan 2023 09:39:34 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1673545174; cv=none; d=google.com; s=arc-20160816; b=RrWqg+NqbSpQANSmZbqNyfP6G+mykfmyMMo8542g7unDRpTLMFx3y2yXXXBw1K/j85 gepNzRNcj6S/zBvArK70gmDjQN7zSRTPhLFa04ZXDzhfZte9Pr2BTI1P5xa7scsX4QgQ wKq87HJCy0DUs5bt3DTzErNOJ5F1fk6iJiBmHkgj3uVoh9BwTmzT7/w7fBgJQYO9+0ZT FOHFJAJkQNsk9OkdLtjEnzwtird3eOOC6mn8P+1hW6akGeqw0xhVeM+0ZkcBTVmNnfCw us3hEqm27ECgSdApzAcoonliuQwYXrBH5a2vnGrDPz29o5kisHBo4pg0nxmb7Tjik0nx CD7w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=nQk05ytoRr3Ax9fHXv5q5hTnGBbqKwfszHoS1t6H1SA=; b=GXTnmg1l2Hw/Sm41OhgB29Kq2ob8Gr5Ih+rOfZZ6u2A2FK0fjT3ersvlOuHNIUeSyf 
From: Tvrtko Ursulin
Subject: [RFC 07/12] drm/cgroup: Add over budget signalling callback
Date: Thu, 12 Jan 2023 16:56:04 +0000
Message-Id: <20230112165609.1083270-8-tvrtko.ursulin@linux.intel.com>

Add a new callback via which the drm cgroup controller notifies the drm core that a certain process is above its allotted GPU time.
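The dispatch pattern introduced by drm_pid_signal_budget() — walking every DRM client of a pid and invoking an optional per-driver callback — can be sketched as a small userspace model. All names below are illustrative mocks rather than the kernel API, and the driver reaction is reduced to a throttled flag:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mock of the optional per-driver ops table; signal_budget may be NULL. */
struct cgroup_ops_mock {
	int (*signal_budget)(void *fpriv, uint64_t usage, uint64_t budget);
};

struct drm_file_mock {
	const struct cgroup_ops_mock *cg_ops;	/* NULL if driver opted out */
	int throttled;				/* driver-side reaction */
};

/* Example driver callback: throttle while usage exceeds the budget. */
static int throttle_cb(void *fpriv, uint64_t usage, uint64_t budget)
{
	struct drm_file_mock *f = fpriv;

	f->throttled = usage > budget;
	return 0;
}

/*
 * Mirrors the shape of drm_pid_signal_budget(): visit every client and
 * invoke the callback only where the driver provides one. Returns how
 * many callbacks were actually invoked.
 */
static int signal_budget_all(struct drm_file_mock *clients, size_t n,
			     uint64_t usage, uint64_t budget)
{
	int invoked = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (clients[i].cg_ops && clients[i].cg_ops->signal_budget) {
			clients[i].cg_ops->signal_budget(&clients[i], usage,
							 budget);
			invoked++;
		}
	}
	return invoked;
}

static const struct cgroup_ops_mock demo_ops = {
	.signal_budget = throttle_cb,
};

/* Two clients of one pid: only the first driver implements the callback. */
static struct drm_file_mock demo_clients[2] = {
	{ .cg_ops = &demo_ops },
	{ .cg_ops = NULL },
};
```

With these mocks, signalling usage 1500 against budget 1000 invokes exactly one callback and throttles only the first client; the second, a driver without a drm_cgroup_ops table, is skipped, matching the `cg_ops && cg_ops->signal_budget` guard in the patch.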
Signed-off-by: Tvrtko Ursulin --- drivers/gpu/drm/drm_cgroup.c | 21 +++++++++++++++++++++ include/drm/drm_clients.h | 1 + include/drm/drm_drv.h | 8 ++++++++ 3 files changed, 30 insertions(+) diff --git a/drivers/gpu/drm/drm_cgroup.c b/drivers/gpu/drm/drm_cgroup.c index bc1e34f1a552..ef951421bba6 100644 --- a/drivers/gpu/drm/drm_cgroup.c +++ b/drivers/gpu/drm/drm_cgroup.c @@ -162,3 +162,24 @@ u64 drm_pid_get_active_time_us(struct pid *pid) return total; } EXPORT_SYMBOL_GPL(drm_pid_get_active_time_us); + +void drm_pid_signal_budget(struct pid *pid, u64 usage, u64 budget) +{ + struct drm_pid_clients *clients; + + rcu_read_lock(); + clients = xa_load(&drm_pid_clients, (unsigned long)pid); + if (clients) { + struct drm_file *fpriv; + + list_for_each_entry_rcu(fpriv, &clients->file_list, clink) { + const struct drm_cgroup_ops *cg_ops = + fpriv->minor->dev->driver->cg_ops; + + if (cg_ops && cg_ops->signal_budget) + cg_ops->signal_budget(fpriv, usage, budget); + } + } + rcu_read_unlock(); +} +EXPORT_SYMBOL_GPL(drm_pid_signal_budget); diff --git a/include/drm/drm_clients.h b/include/drm/drm_clients.h index 7e0c0cf14f25..f3571caa35f8 100644 --- a/include/drm/drm_clients.h +++ b/include/drm/drm_clients.h @@ -42,5 +42,6 @@ drm_clients_migrate(struct drm_file *file_priv, #endif u64 drm_pid_get_active_time_us(struct pid *pid); +void drm_pid_signal_budget(struct pid *pid, u64 usage, u64 budget); #endif diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h index f5f0e088e1fe..0945e562821a 100644 --- a/include/drm/drm_drv.h +++ b/include/drm/drm_drv.h @@ -174,6 +174,14 @@ struct drm_cgroup_ops { * Used by the DRM core when queried by the DRM cgroup controller. */ u64 (*active_time_us) (struct drm_file *); + + /** + * @signal_budget: + * + * Optional callback used by the DRM core to forward over/under GPU time + * messages sent by the DRM cgroup controller. 
+ */ + int (*signal_budget) (struct drm_file *, u64 used, u64 budget); }; /**

From patchwork Thu Jan 12 16:56:05 2023
From: Tvrtko Ursulin
Subject: [RFC 08/12] drm/cgroup: Only track clients which are providing drm_cgroup_ops
Date: Thu, 12 Jan 2023 16:56:05 +0000
Message-Id: <20230112165609.1083270-9-tvrtko.ursulin@linux.intel.com>

To reduce the amount of tracking going on, especially with drivers which will not support any sort of control from the drm cgroup controller side, let's express the functionality as opt-in and use the presence of drm_cgroup_ops as the activation criterion.

Signed-off-by: Tvrtko Ursulin --- drivers/gpu/drm/drm_cgroup.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/drivers/gpu/drm/drm_cgroup.c b/drivers/gpu/drm/drm_cgroup.c index ef951421bba6..09249f795af3 100644 --- a/drivers/gpu/drm/drm_cgroup.c +++ b/drivers/gpu/drm/drm_cgroup.c @@ -36,6 +36,9 @@ void drm_clients_close(struct drm_file *file_priv) lockdep_assert_held(&dev->filelist_mutex); + if (!dev->driver->cg_ops) + return; + pid = rcu_access_pointer(file_priv->pid); clients = xa_load(&drm_pid_clients, (unsigned long)pid); if (drm_WARN_ON_ONCE(dev, !clients)) @@ -67,6 +70,9 @@ int drm_clients_open(struct drm_file *file_priv) lockdep_assert_held(&dev->filelist_mutex); + if (!dev->driver->cg_ops) + return 0; + pid = (unsigned long)rcu_access_pointer(file_priv->pid); clients = xa_load(&drm_pid_clients, pid); if (!clients) { @@ -102,6 +108,9 @@ drm_clients_migrate(struct drm_file *file_priv, lockdep_assert_held(&dev->filelist_mutex); + if (!dev->driver->cg_ops) + return; + existing_clients = xa_load(&drm_pid_clients, (unsigned long)new); clients = xa_load(&drm_pid_clients, (unsigned long)old);

From patchwork Thu Jan 12 16:56:06 2023
From: Tvrtko Ursulin
Subject: [RFC 09/12] cgroup/drm: Client exit hook
Date: Thu, 12 Jan 2023 16:56:06 +0000
Message-Id: <20230112165609.1083270-10-tvrtko.ursulin@linux.intel.com>

We need the ability for the DRM core to inform the cgroup controller when a client has closed a DRM file descriptor. This allows us to avoid keeping state about GPU time usage by task sets in the cgroup controller itself.

Signed-off-by: Tvrtko Ursulin --- drivers/gpu/drm/drm_cgroup.c | 9 +++++++++ include/linux/cgroup_drm.h | 4 ++++ kernel/cgroup/drm.c | 13 +++++++++++++ 3 files changed, 26 insertions(+) diff --git a/drivers/gpu/drm/drm_cgroup.c b/drivers/gpu/drm/drm_cgroup.c index 09249f795af3..ea1479d05355 100644 --- a/drivers/gpu/drm/drm_cgroup.c +++ b/drivers/gpu/drm/drm_cgroup.c @@ -3,6 +3,8 @@ * Copyright © 2022 Intel Corporation */ +#include + #include #include #include @@ -32,6 +34,7 @@ void drm_clients_close(struct drm_file *file_priv) { struct drm_device *dev = file_priv->minor->dev; struct drm_pid_clients *clients; + struct task_struct *task; struct pid *pid; lockdep_assert_held(&dev->filelist_mutex); @@ -44,6 +47,12 @@ void drm_clients_close(struct drm_file *file_priv) if (drm_WARN_ON_ONCE(dev, !clients)) return; + task = get_pid_task(pid, PIDTYPE_PID); + if (task) { + drmcgroup_client_exited(task); + put_task_struct(task); + } + __del_clients(clients, file_priv, (unsigned long)pid); } diff --git a/include/linux/cgroup_drm.h b/include/linux/cgroup_drm.h index bf8abc6b8ebf..2f755b896136 100644 --- a/include/linux/cgroup_drm.h +++ b/include/linux/cgroup_drm.h @@ -6,4 +6,8 @@ #ifndef _CGROUP_DRM_H #define _CGROUP_DRM_H +struct task_struct; + +void drmcgroup_client_exited(struct task_struct *task); + #endif /* _CGROUP_DRM_H */ diff --git a/kernel/cgroup/drm.c b/kernel/cgroup/drm.c index 48a8d646a094..3e7f165806de 100644 --- a/kernel/cgroup/drm.c +++ b/kernel/cgroup/drm.c @@ -22,6 +22,11 @@ css_to_drmcs(struct cgroup_subsys_state *css) return container_of(css, struct drm_cgroup_state, css); } +static inline struct drm_cgroup_state *get_task_drmcs(struct task_struct *task) +{ + return css_to_drmcs(task_get_css(task, drm_cgrp_id)); +} + static struct drm_root_cgroup_state root_drmcs; static void drmcs_free(struct cgroup_subsys_state *css) @@ -46,6 +51,14 @@
drmcs_alloc(struct cgroup_subsys_state *parent_css) return &drmcs->css; } +void drmcgroup_client_exited(struct task_struct *task) +{ + struct drm_cgroup_state *drmcs = get_task_drmcs(task); + + css_put(&drmcs->css); +} +EXPORT_SYMBOL_GPL(drmcgroup_client_exited); + struct cftype files[] = { { } /* Zero entry terminates. */ };

From patchwork Thu Jan 12 16:56:07 2023
From: Tvrtko Ursulin
Subject: [RFC 10/12] cgroup/drm: Introduce weight based drm cgroup control
Date: Thu, 12 Jan 2023 16:56:07 +0000
Message-Id: <20230112165609.1083270-11-tvrtko.ursulin@linux.intel.com>

Similar to CPU scheduling, implement a concept of weight in the drm cgroup controller. It uses the same range and default as the CPU controller: CGROUP_WEIGHT_MIN, CGROUP_WEIGHT_DFL and CGROUP_WEIGHT_MAX. Later, each cgroup is assigned a time budget proportionally, based on the relative weights of its siblings. This time budget is in turn split among the group's children, and so on. This will be used to implement a soft, best-effort signal from the drm cgroup controller to the DRM core, notifying it about groups which are over their allotted budget. No guarantees that the limit can be enforced are provided or implied.
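The proportional distribution described above can be modelled with plain integer arithmetic. The sketch below is an assumed userspace simplification of the sibling passes implemented later in scan_worker(): an initial weighted split of the parent budget, then redistribution of unused budget to over-budget siblings (the kernel code uses DIV_ROUND_UP_ULL and walks the css tree; here truncating division and a flat array stand in):

```c
#include <assert.h>
#include <stdint.h>

struct sib {
	uint64_t weight;	/* drm.weight of the sibling group */
	uint64_t active_us;	/* observed GPU time in the period */
	uint64_t budget_us;	/* computed allowance */
	int over;		/* still above budget after both passes? */
};

static void split_budget(struct sib *s, int n, uint64_t parent_budget_us)
{
	uint64_t wsum = 0, unused = 0, over_w = 0;
	int i;

	for (i = 0; i < n; i++)
		wsum += s[i].weight;

	/* Pass 1: weighted split; collect unused budget and over weight. */
	for (i = 0; i < n; i++) {
		s[i].budget_us = parent_budget_us * s[i].weight / wsum;
		s[i].over = s[i].active_us > s[i].budget_us;
		if (s[i].over)
			over_w += s[i].weight;
		else
			unused += s[i].budget_us - s[i].active_us;
	}

	/* Pass 2: hand the unused slack to over-budget siblings by weight. */
	for (i = 0; i < n; i++) {
		if (!s[i].over)
			continue;
		s[i].budget_us += unused * s[i].weight / over_w;
		s[i].over = s[i].active_us > s[i].budget_us;
	}
}
```

For three equal-weight siblings sharing a one-second budget, where one used 600 ms and the others are mostly idle, the busy sibling's initial ~333 ms allowance grows by the idle siblings' slack to ~900 ms, so it is no longer flagged as over budget.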
Signed-off-by: Tvrtko Ursulin --- Documentation/admin-guide/cgroup-v2.rst | 37 ++ drivers/gpu/drm/Kconfig | 1 + init/Kconfig | 1 + kernel/cgroup/drm.c | 506 +++++++++++++++++++++++- 4 files changed, 541 insertions(+), 4 deletions(-) diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst index c8ae7c897f14..9894dd59e4c5 100644 --- a/Documentation/admin-guide/cgroup-v2.rst +++ b/Documentation/admin-guide/cgroup-v2.rst @@ -2407,6 +2407,43 @@ HugeTLB Interface Files hugetlb pages of in this cgroup. Only active in use hugetlb pages are included. The per-node values are in bytes. +DRM +--- + +The DRM controller allows configuring scheduling soft limits. + +DRM scheduling soft limits +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Because of the heterogeneous hardware and driver DRM capabilities, soft limits are implemented as a loose co-operative (bi-directional) interface between the controller and DRM core. + +The controller configures the GPU time allowed per group and periodically scans the tasks belonging to it to detect the over budget condition, at which point it invokes a callback notifying the DRM core of the condition. + +The DRM core provides an API to query per-process GPU utilization and a second API to receive notifications from the cgroup controller when the group enters or exits the over budget condition. + +Individual DRM drivers which implement the interface are expected to act on this in a best-effort manner only. There are no guarantees that the soft limits will be respected. + +DRM scheduling soft limits interface files ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + drm.weight Standard cgroup weight based control [1, 10000] used to configure the relative distribution of GPU time between the sibling groups. + + drm.period_us (debugging aid during RFC only) An integer representing the period with which the controller should look at the GPU usage by the group and potentially send the over/under budget signal.
+ Value of zero (default) disables the soft limit checking. + Misc ---- diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig index b56b9f2fe8e6..0fbfd3026b71 100644 --- a/drivers/gpu/drm/Kconfig +++ b/drivers/gpu/drm/Kconfig @@ -7,6 +7,7 @@ # menuconfig DRM tristate "Direct Rendering Manager (XFree86 4.1.0 and higher DRI support)" + default y if CGROUP_DRM=y depends on (AGP || AGP=n) && !EMULATED_CMPXCHG && HAS_DMA select DRM_PANEL_ORIENTATION_QUIRKS select HDMI diff --git a/init/Kconfig b/init/Kconfig index c5ace0d57007..304418674097 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -1091,6 +1091,7 @@ config CGROUP_RDMA config CGROUP_DRM bool "DRM controller" + select DRM help Provides the DRM subsystem controller. diff --git a/kernel/cgroup/drm.c b/kernel/cgroup/drm.c index 3e7f165806de..527b7bf8c576 100644 --- a/kernel/cgroup/drm.c +++ b/kernel/cgroup/drm.c @@ -8,14 +8,43 @@ #include #include +#include + struct drm_cgroup_state { struct cgroup_subsys_state css; + + unsigned int weight; + + /* + * Below fields are owned and updated by the scan worker. Either the + * worker accesses them, or the worker needs to be suspended and synced + * before they can be touched from the outside.
+ */ + unsigned int sum_children_weights; + + bool over; + bool over_budget; + + u64 per_s_budget_us; + u64 prev_active_us; + u64 active_us; }; struct drm_root_cgroup_state { struct drm_cgroup_state drmcs; + + unsigned int period_us; + + ktime_t prev_timestamp; + + bool scanning_suspended; + unsigned int suspended_period_us; + + struct delayed_work scan_work; }; +static DEFINE_MUTEX(drmcg_mutex); + static inline struct drm_cgroup_state * css_to_drmcs(struct cgroup_subsys_state *css) { @@ -29,10 +58,355 @@ static inline struct drm_cgroup_state *get_task_drmcs(struct task_struct *task) static struct drm_root_cgroup_state root_drmcs; +static u64 drmcs_get_active_time_us(struct drm_cgroup_state *drmcs) +{ + struct cgroup *cgrp = drmcs->css.cgroup; + struct task_struct *task; + struct css_task_iter it; + u64 total = 0; + + css_task_iter_start(&cgrp->self, + CSS_TASK_ITER_PROCS | CSS_TASK_ITER_THREADED, + &it); + while ((task = css_task_iter_next(&it))) { + u64 time; + + /* Ignore kernel threads here. 
*/ + if (task->flags & PF_KTHREAD) + continue; + + time = drm_pid_get_active_time_us(task_pid(task)); + total += time; + } + css_task_iter_end(&it); + + return total; +} + +static u64 +drmcs_read_weight(struct cgroup_subsys_state *css, struct cftype *cft) +{ + struct drm_cgroup_state *drmcs = css_to_drmcs(css); + + return drmcs->weight; +} + +static int +drmcs_write_weight(struct cgroup_subsys_state *css, struct cftype *cftype, + u64 weight) +{ + struct drm_cgroup_state *drmcs = css_to_drmcs(css); + int ret; + + if (weight < CGROUP_WEIGHT_MIN || weight > CGROUP_WEIGHT_MAX) + return -ERANGE; + + ret = mutex_lock_interruptible(&drmcg_mutex); + if (ret) + return ret; + drmcs->weight = weight; + mutex_unlock(&drmcg_mutex); + + return 0; +} + +static void +signal_drm_budget(struct drm_cgroup_state *drmcs, u64 usage, u64 budget) +{ + struct cgroup *cgrp = drmcs->css.cgroup; + struct task_struct *task; + struct css_task_iter it; + + css_task_iter_start(&cgrp->self, + CSS_TASK_ITER_PROCS | CSS_TASK_ITER_THREADED, + &it); + while ((task = css_task_iter_next(&it))) { + /* Ignore kernel threads here. 
*/ + if (task->flags & PF_KTHREAD) + continue; + + drm_pid_signal_budget(task_pid(task), usage, budget); + } + css_task_iter_end(&it); +} + +static bool +__start_scanning(struct drm_cgroup_state *root, unsigned int period_us) +{ + struct cgroup_subsys_state *node; + bool ok = false; + + rcu_read_lock(); + + css_for_each_descendant_post(node, &root->css) { + struct drm_cgroup_state *drmcs = css_to_drmcs(node); + + if (!css_tryget_online(node)) + goto out; + + drmcs->active_us = 0; + drmcs->sum_children_weights = 0; + + if (period_us && node == &root->css) + drmcs->per_s_budget_us = + DIV_ROUND_UP_ULL((u64)period_us * USEC_PER_SEC, + USEC_PER_SEC); + else + drmcs->per_s_budget_us = 0; + + css_put(node); + } + + css_for_each_descendant_post(node, &root->css) { + struct drm_cgroup_state *drmcs = css_to_drmcs(node); + struct drm_cgroup_state *parent; + u64 active; + + if (!css_tryget_online(node)) + goto out; + if (!node->parent) { + css_put(node); + continue; + } + if (!css_tryget_online(node->parent)) { + css_put(node); + goto out; + } + parent = css_to_drmcs(node->parent); + + active = drmcs_get_active_time_us(drmcs); + if (period_us && active > drmcs->prev_active_us) + drmcs->active_us += active - drmcs->prev_active_us; + drmcs->prev_active_us = active; + + parent->active_us += drmcs->active_us; + parent->sum_children_weights += drmcs->weight; + + css_put(node); + css_put(&parent->css); + } + + ok = true; + +out: + rcu_read_unlock(); + + return ok; +} + +static void scan_worker(struct work_struct *work) +{ + struct drm_cgroup_state *root = &root_drmcs.drmcs; + struct cgroup_subsys_state *node; + unsigned int period_us; + ktime_t now; + + rcu_read_lock(); + + if (WARN_ON_ONCE(!css_tryget_online(&root->css))) { + rcu_read_unlock(); + return; + } + + now = ktime_get(); + period_us = ktime_to_us(ktime_sub(now, root_drmcs.prev_timestamp)); + root_drmcs.prev_timestamp = now; + + /* + * 1st pass - reset working values and update hierarchical weights and + * GPU 
utilisation. + */ + if (!__start_scanning(root, period_us)) + goto out_retry; /* + * Always come back later if scanner races with + * core cgroup management. (Repeated pattern.) + */ + + css_for_each_descendant_pre(node, &root->css) { + struct drm_cgroup_state *drmcs = css_to_drmcs(node); + struct cgroup_subsys_state *css; + unsigned int over_weights = 0; + u64 unused_us = 0; + + if (!css_tryget_online(node)) + goto out_retry; + + /* + * 2nd pass - calculate initial budgets, mark over budget + * siblings and add up unused budget for the group. + */ + css_for_each_child(css, &drmcs->css) { + struct drm_cgroup_state *sibling = css_to_drmcs(css); + + if (!css_tryget_online(css)) { + css_put(node); + goto out_retry; + } + + sibling->per_s_budget_us = + DIV_ROUND_UP_ULL(drmcs->per_s_budget_us * + sibling->weight, + drmcs->sum_children_weights); + + sibling->over = sibling->active_us > + sibling->per_s_budget_us; + if (sibling->over) + over_weights += sibling->weight; + else + unused_us += sibling->per_s_budget_us - + sibling->active_us; + + css_put(css); + } + + /* + * 3rd pass - spread unused budget according to relative weights + * of over budget siblings. + */ + css_for_each_child(css, &drmcs->css) { + struct drm_cgroup_state *sibling = css_to_drmcs(css); + + if (!css_tryget_online(css)) { + css_put(node); + goto out_retry; + } + + if (sibling->over) { + u64 budget_us = + DIV_ROUND_UP_ULL(unused_us * + sibling->weight, + over_weights); + sibling->per_s_budget_us += budget_us; + sibling->over = sibling->active_us > + sibling->per_s_budget_us; + } + + css_put(css); + } + + css_put(node); + } + + /* + * 4th pass - send out over/under budget notifications. 
+ */ + css_for_each_descendant_post(node, &root->css) { + struct drm_cgroup_state *drmcs = css_to_drmcs(node); + + if (!css_tryget_online(node)) + goto out_retry; + + if (drmcs->over || drmcs->over_budget) + signal_drm_budget(drmcs, + drmcs->active_us, + drmcs->per_s_budget_us); + drmcs->over_budget = drmcs->over; + + css_put(node); + } + +out_retry: + rcu_read_unlock(); + + period_us = READ_ONCE(root_drmcs.period_us); + if (period_us) + schedule_delayed_work(&root_drmcs.scan_work, + usecs_to_jiffies(period_us)); + + css_put(&root->css); +} + +static void start_scanning(u64 period_us) +{ + lockdep_assert_held(&drmcg_mutex); + + root_drmcs.period_us = (unsigned int)period_us; + WARN_ON_ONCE(!__start_scanning(&root_drmcs.drmcs, 0)); + root_drmcs.prev_timestamp = ktime_get(); + mod_delayed_work(system_wq, &root_drmcs.scan_work, + usecs_to_jiffies(period_us)); +} + +static void stop_scanning(struct drm_cgroup_state *drmcs) +{ + if (drmcs == &root_drmcs.drmcs) { + root_drmcs.period_us = 0; + cancel_delayed_work_sync(&root_drmcs.scan_work); + } + + if (drmcs->over_budget) { + /* + * Signal under budget when scanning goes off so drivers + * correctly update their state. + */ + signal_drm_budget(drmcs, 0, USEC_PER_SEC); + drmcs->over_budget = false; + } +} + +static void signal_stop_scanning(void) +{ + struct cgroup_subsys_state *node; + + lockdep_assert_held(&drmcg_mutex); + + stop_scanning(&root_drmcs.drmcs); /* Handle outside RCU lock. 
*/ + + rcu_read_lock(); + css_for_each_descendant_pre(node, &root_drmcs.drmcs.css) { + struct drm_cgroup_state *drmcs = css_to_drmcs(node); + + if (drmcs == &root_drmcs.drmcs) + continue; + + if (css_tryget_online(node)) { + stop_scanning(drmcs); + css_put(node); + } + } + rcu_read_unlock(); +} + +static void start_suspend_scanning(void) +{ + lockdep_assert_held(&drmcg_mutex); + + if (root_drmcs.scanning_suspended) + return; + + root_drmcs.scanning_suspended = true; + root_drmcs.suspended_period_us = root_drmcs.period_us; + root_drmcs.period_us = 0; +} + +static void finish_suspend_scanning(void) +{ + if (root_drmcs.suspended_period_us) + cancel_delayed_work_sync(&root_drmcs.scan_work); +} + +static void resume_scanning(void) +{ + lockdep_assert_held(&drmcg_mutex); + + if (!root_drmcs.scanning_suspended) + return; + + root_drmcs.scanning_suspended = false; + if (root_drmcs.suspended_period_us) { + start_scanning(root_drmcs.suspended_period_us); + root_drmcs.suspended_period_us = 0; + } +} + static void drmcs_free(struct cgroup_subsys_state *css) { - if (css != &root_drmcs.drmcs.css) - kfree(css_to_drmcs(css)); + struct drm_cgroup_state *drmcs = css_to_drmcs(css); + + stop_scanning(drmcs); + + if (drmcs != &root_drmcs.drmcs) + kfree(drmcs); } static struct cgroup_subsys_state * @@ -42,30 +416,154 @@ drmcs_alloc(struct cgroup_subsys_state *parent_css) if (!parent_css) { drmcs = &root_drmcs.drmcs; + INIT_DELAYED_WORK(&root_drmcs.scan_work, scan_worker); } else { drmcs = kzalloc(sizeof(*drmcs), GFP_KERNEL); if (!drmcs) return ERR_PTR(-ENOMEM); } + drmcs->weight = CGROUP_WEIGHT_DFL; + return &drmcs->css; } +static int drmcs_can_attach(struct cgroup_taskset *tset) +{ + int ret; + + /* + * As processes are getting moved between groups we need to ensure + * both that the old group does not see a sudden downward jump in the + * GPU utilisation, and that the new group does not see a sudden jump + * up with all the GPU time clients belonging to the migrated process + * have 
accumulated. + * + * To achieve that we suspend the scanner until the migration is + * completed where the resume at the end ensures both groups start + * observing GPU utilisation from a reset state. + */ + + ret = mutex_lock_interruptible(&drmcg_mutex); + if (ret) + return ret; + start_suspend_scanning(); + mutex_unlock(&drmcg_mutex); + + finish_suspend_scanning(); + + return 0; +} + +static void tset_resume_scanning(struct cgroup_taskset *tset) +{ + mutex_lock(&drmcg_mutex); + resume_scanning(); + mutex_unlock(&drmcg_mutex); +} + +static void drmcs_attach(struct cgroup_taskset *tset) +{ + tset_resume_scanning(tset); +} + +static void drmcs_cancel_attach(struct cgroup_taskset *tset) +{ + tset_resume_scanning(tset); +} + +static u64 +drmcs_read_period_us(struct cgroup_subsys_state *css, struct cftype *cft) +{ + return root_drmcs.period_us; +} + +static int +drmcs_write_period_us(struct cgroup_subsys_state *css, struct cftype *cftype, + u64 period_us) +{ + int ret; + + if (css->cgroup->level) + return -EINVAL; + if ((period_us && period_us < 500000) || period_us > USEC_PER_SEC * 60) + return -EINVAL; + + ret = mutex_lock_interruptible(&drmcg_mutex); + if (ret) + return ret; + + if (!root_drmcs.scanning_suspended) { + if (period_us) + start_scanning(period_us); + else + signal_stop_scanning(); + } else { + /* + * If scanning is temporarily suspended just update the period + * which will apply once resumed, or simply skip resuming in + * case of disabling. + */ + root_drmcs.suspended_period_us = period_us; + if (!period_us) + root_drmcs.scanning_suspended = false; + } + + mutex_unlock(&drmcg_mutex); + + return 0; +} + void drmcgroup_client_exited(struct task_struct *task) { - struct drm_cgroup_state *drmcs = get_task_drmcs(task); + /* + * QQQ/TODO - Skip if task is not a member of a cgroup which has a + * DRM controller enabled? 
+ */ + + /* + * Since we are not tracking accumulated GPU time for each cgroup, + * avoid jumps in group observed GPU usage by re-setting the scanner + * at a point when GPU usage can suddenly jump down. + * + * Downside is clients can influence the effectiveness of the over- + * budget scanning by continuously closing DRM file descriptors but for + * now we do not worry about it. + */ + + mutex_lock(&drmcg_mutex); + start_suspend_scanning(); + mutex_unlock(&drmcg_mutex); + + finish_suspend_scanning(); - css_put(&drmcs->css); + mutex_lock(&drmcg_mutex); + resume_scanning(); + mutex_unlock(&drmcg_mutex); } EXPORT_SYMBOL_GPL(drmcgroup_client_exited); struct cftype files[] = { + { + .name = "weight", + .flags = CFTYPE_NOT_ON_ROOT, + .read_u64 = drmcs_read_weight, + .write_u64 = drmcs_write_weight, + }, + { + .name = "period_us", + .read_u64 = drmcs_read_period_us, + .write_u64 = drmcs_write_period_us, + }, { } /* Zero entry terminates. */ }; struct cgroup_subsys drm_cgrp_subsys = { .css_alloc = drmcs_alloc, .css_free = drmcs_free, + .can_attach = drmcs_can_attach, + .attach = drmcs_attach, + .cancel_attach = drmcs_cancel_attach, .early_init = false, .legacy_cftypes = files, .dfl_cftypes = files,

From patchwork Thu Jan 12 16:56:08 2023 X-Patchwork-Submitter: Tvrtko Ursulin X-Patchwork-Id: 42680
From: Tvrtko Ursulin
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Tejun Heo, Johannes Weiner, Zefan Li, Dave Airlie, Daniel Vetter, Rob Clark, Stéphane Marchesin, "T. J. Mercier", Kenny.Ho@amd.com, Christian König, Brian Welty, Tvrtko Ursulin
Subject: [RFC 11/12] drm/i915: Wire up with drm controller GPU time query
Date: Thu, 12 Jan 2023 16:56:08 +0000
Message-Id: <20230112165609.1083270-12-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20230112165609.1083270-1-tvrtko.ursulin@linux.intel.com>

Implement the drm_cgroup_ops->active_time_us callback.
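For reference, the normalisation this callback performs can be sketched in isolation. This is not driver code: the arrays stand in for the real per-class queries (get_class_active_ns() and engine_uabi_class_count in the patch), but the arithmetic matches what the patch does — per-class busy nanoseconds are divided by capacity * 1000, yielding microseconds normalised per engine of the class.

```c
#include <stdint.h>

#define DIV_ROUND_UP_U64(n, d) (((n) + (d) - 1) / (d))

/*
 * Standalone sketch of the aggregation in
 * i915_drm_cgroup_get_active_time_us(): busy_ns[] and capacity[] stand
 * in for the real per-class engine queries. Nanoseconds are converted
 * to microseconds and normalised by the number of engines per class,
 * so a class with N engines reports at most one engine's worth of time.
 */
static uint64_t active_time_us(const uint64_t *busy_ns,
			       const unsigned int *capacity,
			       unsigned int num_classes)
{
	uint64_t busy = 0;
	unsigned int i;

	for (i = 0; i < num_classes; i++) {
		if (!capacity[i])
			continue; /* no engines of this class on this GPU */
		busy += DIV_ROUND_UP_U64(busy_ns[i],
					 (uint64_t)capacity[i] * 1000);
	}

	return busy;
}
```

With two render-class engines each contributing 1 ms of busy time, the class only counts as 1000 us towards the client total.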
Signed-off-by: Tvrtko Ursulin --- drivers/gpu/drm/i915/i915_driver.c | 10 ++++ drivers/gpu/drm/i915/i915_drm_client.c | 76 ++++++++++++++++++++++---- drivers/gpu/drm/i915/i915_drm_client.h | 2 + 3 files changed, 78 insertions(+), 10 deletions(-) diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c index c1e427ba57ae..50935cdb3a93 100644 --- a/drivers/gpu/drm/i915/i915_driver.c +++ b/drivers/gpu/drm/i915/i915_driver.c @@ -1897,6 +1897,12 @@ static const struct drm_ioctl_desc i915_ioctls[] = { DRM_IOCTL_DEF_DRV(I915_GEM_VM_DESTROY, i915_gem_vm_destroy_ioctl, DRM_RENDER_ALLOW), }; +#ifdef CONFIG_CGROUP_DRM +static const struct drm_cgroup_ops i915_drm_cgroup_ops = { + .active_time_us = i915_drm_cgroup_get_active_time_us, +}; +#endif + /* * Interface history: * @@ -1925,6 +1931,10 @@ static const struct drm_driver i915_drm_driver = { .lastclose = i915_driver_lastclose, .postclose = i915_driver_postclose, +#ifdef CONFIG_CGROUP_DRM + .cg_ops = &i915_drm_cgroup_ops, +#endif + .prime_handle_to_fd = drm_gem_prime_handle_to_fd, .prime_fd_to_handle = drm_gem_prime_fd_to_handle, .gem_prime_import = i915_gem_prime_import, diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c index b09d1d386574..c9754cb0277f 100644 --- a/drivers/gpu/drm/i915/i915_drm_client.c +++ b/drivers/gpu/drm/i915/i915_drm_client.c @@ -75,7 +75,7 @@ void i915_drm_clients_fini(struct i915_drm_clients *clients) xa_destroy(&clients->xarray); } -#ifdef CONFIG_PROC_FS +#if defined(CONFIG_PROC_FS) || defined(CONFIG_CGROUP_DRM) static const char * const uabi_class_names[] = { [I915_ENGINE_CLASS_RENDER] = "render", [I915_ENGINE_CLASS_COPY] = "copy", @@ -100,22 +100,78 @@ static u64 busy_add(struct i915_gem_context *ctx, unsigned int class) return total; } -static void -show_client_class(struct seq_file *m, - struct i915_drm_client *client, - unsigned int class) +static u64 get_class_active_ns(struct i915_drm_client *client, + unsigned int class, 
+ unsigned int *capacity) { - const struct list_head *list = &client->ctx_list; - u64 total = atomic64_read(&client->past_runtime[class]); - const unsigned int capacity = - client->clients->i915->engine_uabi_class_count[class]; struct i915_gem_context *ctx; + u64 total; + + *capacity = + client->clients->i915->engine_uabi_class_count[class]; + if (!*capacity) + return 0; + + total = atomic64_read(&client->past_runtime[class]); rcu_read_lock(); - list_for_each_entry_rcu(ctx, list, client_link) + list_for_each_entry_rcu(ctx, &client->ctx_list, client_link) total += busy_add(ctx, class); rcu_read_unlock(); + return total; +} +#endif + +#ifdef CONFIG_CGROUP_DRM +static bool supports_stats(struct drm_i915_private *i915) +{ + if (GRAPHICS_VER(i915) < 8) + return false; + + /* temporary... */ + if (intel_uc_uses_guc_submission(&to_gt(i915)->uc)) + return false; + + return true; +} + +u64 i915_drm_cgroup_get_active_time_us(struct drm_file *file) +{ + struct drm_i915_file_private *fpriv = file->driver_priv; + struct i915_drm_client *client = fpriv->client; + unsigned int i; + u64 busy = 0; + + if (!supports_stats(client->clients->i915)) + return 0; + + for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++) { + unsigned int capacity; + u64 b; + + b = get_class_active_ns(client, i, &capacity); + if (capacity) { + b = DIV_ROUND_UP_ULL(b, capacity * 1000); + busy += b; + } + } + + return busy; +} +#endif + +#ifdef CONFIG_PROC_FS +static void +show_client_class(struct seq_file *m, + struct i915_drm_client *client, + unsigned int class) +{ + unsigned int capacity; + u64 total; + + total = get_class_active_ns(client, class, &capacity); + if (capacity) seq_printf(m, "drm-engine-%s:\t%llu ns\n", uabi_class_names[class], total); diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h index 69496af996d9..c8439eaa89be 100644 --- a/drivers/gpu/drm/i915/i915_drm_client.h +++ b/drivers/gpu/drm/i915/i915_drm_client.h @@ -65,4 +65,6 @@ void 
i915_drm_client_fdinfo(struct seq_file *m, struct file *f); void i915_drm_clients_fini(struct i915_drm_clients *clients); +u64 i915_drm_cgroup_get_active_time_us(struct drm_file *file); + #endif /* !__I915_DRM_CLIENT_H__ */

From patchwork Thu Jan 12 16:56:09 2023 X-Patchwork-Submitter: Tvrtko Ursulin X-Patchwork-Id: 42681
From: Tvrtko Ursulin
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Tejun Heo, Johannes Weiner, Zefan Li, Dave Airlie, Daniel Vetter, Rob Clark, Stéphane Marchesin, "T. J. Mercier", Kenny.Ho@amd.com, Christian König, Brian Welty, Tvrtko Ursulin
Subject: [RFC 12/12] drm/i915: Implement cgroup controller over budget throttling
Date: Thu, 12 Jan 2023 16:56:09 +0000
Message-Id: <20230112165609.1083270-13-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20230112165609.1083270-1-tvrtko.ursulin@linux.intel.com>

When notified by the drm core that it is over its allotted time budget, the i915 instance will check whether any of the GPU engines it is responsible for is fully saturated. If one is, and the client in question is using that engine, the client is throttled. For now throttling is done simplistically, by lowering the scheduling priority of clients while they remain over budget.
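The mapping from over-budget amount to scheduling priority used in i915_drm_cgroup_signal_budget() below can be sketched standalone. MIN_USER_PRIO here stands in for I915_CONTEXT_MIN_USER_PRIORITY (-1023 in i915), and div_round_closest() mimics the kernel's signed DIV_ROUND_CLOSEST; this is an illustrative userspace sketch, not the driver code itself.

```c
#include <stdint.h>

#define MIN_USER_PRIO (-1023) /* stands in for I915_CONTEXT_MIN_USER_PRIORITY */

/* Signed round-to-closest division, like the kernel's DIV_ROUND_CLOSEST. */
static int64_t div_round_closest(int64_t n, int64_t d)
{
	return (n < 0 ? n - d / 2 : n + d / 2) / d;
}

/*
 * Sketch of the throttle priority derivation: the distance over budget,
 * expressed in per-mille of the budget, scales the priority linearly
 * towards the minimum user priority, clamped at that minimum.
 */
static int throttle_priority(uint64_t usage_us, uint64_t budget_us)
{
	int64_t permille, prio;

	if (usage_us <= budget_us)
		return 0; /* under budget - not throttled */

	permille = div_round_closest((int64_t)(usage_us - budget_us) * 1000,
				     (int64_t)budget_us);
	prio = div_round_closest(permille * MIN_USER_PRIO, 1000);
	if (prio < MIN_USER_PRIO)
		prio = MIN_USER_PRIO;

	return (int)prio;
}
```

A client 50% over budget (500 per-mille) lands roughly half way down the priority range; a client ten times over budget bottoms out at the minimum.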
Signed-off-by: Tvrtko Ursulin --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 38 ++++- drivers/gpu/drm/i915/i915_driver.c | 1 + drivers/gpu/drm/i915/i915_drm_client.c | 133 ++++++++++++++++++ drivers/gpu/drm/i915/i915_drm_client.h | 11 ++ 4 files changed, 182 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 94d86ee24693..c3e57b51a106 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -3065,6 +3065,42 @@ static void retire_requests(struct intel_timeline *tl, struct i915_request *end) break; } +#ifdef CONFIG_CGROUP_DRM +static unsigned int +__get_class(struct drm_i915_file_private *fpriv, const struct i915_request *rq) +{ + unsigned int class; + + class = rq->context->engine->uabi_class; + + if (WARN_ON_ONCE(class >= ARRAY_SIZE(fpriv->client->throttle))) + class = 0; + + return class; +} + +static void copy_priority(struct i915_sched_attr *attr, + const struct i915_execbuffer *eb, + const struct i915_request *rq) +{ + struct drm_i915_file_private *file_priv = eb->file->driver_priv; + int prio; + + *attr = eb->gem_context->sched; + + prio = file_priv->client->throttle[__get_class(file_priv, rq)]; + if (prio) + attr->priority = prio; +} +#else +static void copy_priority(struct i915_sched_attr *attr, + const struct i915_execbuffer *eb, + const struct i915_request *rq) +{ + *attr = eb->gem_context->sched; +} +#endif + static int eb_request_add(struct i915_execbuffer *eb, struct i915_request *rq, int err, bool last_parallel) { @@ -3081,7 +3117,7 @@ static int eb_request_add(struct i915_execbuffer *eb, struct i915_request *rq, /* Check that the context wasn't destroyed before submission */ if (likely(!intel_context_is_closed(eb->context))) { - attr = eb->gem_context->sched; + copy_priority(&attr, eb, rq); } else { /* Serialise with context_close via the add_to_timeline */ i915_request_set_error_once(rq, -ENOENT); 
diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c index 50935cdb3a93..11c0c6f45e57 100644 --- a/drivers/gpu/drm/i915/i915_driver.c +++ b/drivers/gpu/drm/i915/i915_driver.c @@ -1900,6 +1900,7 @@ static const struct drm_ioctl_desc i915_ioctls[] = { #ifdef CONFIG_CGROUP_DRM static const struct drm_cgroup_ops i915_drm_cgroup_ops = { .active_time_us = i915_drm_cgroup_get_active_time_us, + .signal_budget = i915_drm_cgroup_signal_budget, }; #endif diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c index c9754cb0277f..72d92ee292ae 100644 --- a/drivers/gpu/drm/i915/i915_drm_client.c +++ b/drivers/gpu/drm/i915/i915_drm_client.c @@ -4,6 +4,7 @@ */ #include +#include #include #include @@ -159,6 +160,138 @@ u64 i915_drm_cgroup_get_active_time_us(struct drm_file *file) return busy; } + +int i915_drm_cgroup_signal_budget(struct drm_file *file, u64 usage, u64 budget) +{ + struct drm_i915_file_private *fpriv = file->driver_priv; + u64 class_usage[I915_LAST_UABI_ENGINE_CLASS + 1]; + u64 class_last[I915_LAST_UABI_ENGINE_CLASS + 1]; + struct drm_i915_private *i915 = fpriv->dev_priv; + struct i915_drm_client *client = fpriv->client; + struct intel_engine_cs *engine; + bool over = usage > budget; + struct task_struct *task; + struct pid *pid; + unsigned int i; + ktime_t unused; + int ret = 0; + u64 t; + + if (!supports_stats(i915)) + return -EINVAL; + + if (usage == 0 && budget == 0) + return 0; + + rcu_read_lock(); + pid = rcu_dereference(file->pid); + task = pid_task(pid, PIDTYPE_TGID); + if (over) { + client->over_budget++; + if (!client->over_budget) + client->over_budget = 2; + + drm_dbg(&i915->drm, "%s[%u] over budget (%llu/%llu)\n", + task ? 
task->comm : "", pid_vnr(pid), + usage, budget); + } else { + client->over_budget = 0; + memset(client->class_last, 0, sizeof(client->class_last)); + memset(client->throttle, 0, sizeof(client->throttle)); + + drm_dbg(&i915->drm, "%s[%u] un-throttled; under budget\n", + task ? task->comm : "", pid_vnr(pid)); + + rcu_read_unlock(); + return 0; + } + rcu_read_unlock(); + + memset(class_usage, 0, sizeof(class_usage)); + for_each_uabi_engine(engine, i915) + class_usage[engine->uabi_class] += + ktime_to_ns(intel_engine_get_busy_time(engine, &unused)); + + memcpy(class_last, client->class_last, sizeof(class_last)); + memcpy(client->class_last, class_usage, sizeof(class_last)); + + for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++) + class_usage[i] -= class_last[i]; + + t = client->last; + client->last = ktime_get_raw_ns(); + t = client->last - t; + + if (client->over_budget == 1) + return 0; + + for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++) { + u64 client_class_usage[I915_LAST_UABI_ENGINE_CLASS + 1]; + unsigned int capacity, rel_usage; + + if (!i915->engine_uabi_class_count[i]) + continue; + + t = DIV_ROUND_UP_ULL(t, 1000); + class_usage[i] = DIV_ROUND_CLOSEST_ULL(class_usage[i], 1000); + rel_usage = DIV_ROUND_CLOSEST_ULL(class_usage[i] * 100ULL, + t * + i915->engine_uabi_class_count[i]); + if (rel_usage < 95) { + /* Physical class not oversubscribed. */ + if (client->throttle[i]) { + client->throttle[i] = 0; + + rcu_read_lock(); + pid = rcu_dereference(file->pid); + task = pid_task(pid, PIDTYPE_TGID); + drm_dbg(&i915->drm, + "%s[%u] un-throttled; physical class %s utilisation %u%%\n", + task ?
task->comm : "", + pid_vnr(pid), + uabi_class_names[i], + rel_usage); + rcu_read_unlock(); + } + continue; + } + + client_class_usage[i] = + get_class_active_ns(client, i, &capacity); + if (client_class_usage[i]) { + int permille; + + ret |= 1; + + permille = DIV_ROUND_CLOSEST_ULL((usage - budget) * + 1000, + budget); + client->throttle[i] = + DIV_ROUND_CLOSEST(permille * + I915_CONTEXT_MIN_USER_PRIORITY, + 1000); + if (client->throttle[i] < + I915_CONTEXT_MIN_USER_PRIORITY) + client->throttle[i] = + I915_CONTEXT_MIN_USER_PRIORITY; + + rcu_read_lock(); + pid = rcu_dereference(file->pid); + task = pid_task(pid, PIDTYPE_TGID); + drm_dbg(&i915->drm, + "%s[%u] %d‰ over budget, throttled to priority %d; physical class %s utilisation %u%%\n", + task ? task->comm : "", + pid_vnr(pid), + permille, + client->throttle[i], + uabi_class_names[i], + rel_usage); + rcu_read_unlock(); + } + } + + return ret; +} #endif #ifdef CONFIG_PROC_FS diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h index c8439eaa89be..092a7952a67b 100644 --- a/drivers/gpu/drm/i915/i915_drm_client.h +++ b/drivers/gpu/drm/i915/i915_drm_client.h @@ -15,6 +15,8 @@ #define I915_LAST_UABI_ENGINE_CLASS I915_ENGINE_CLASS_COMPUTE +struct drm_file; + struct drm_i915_private; struct i915_drm_clients { @@ -38,6 +40,13 @@ struct i915_drm_client { * @past_runtime: Accumulation of pphwsp runtimes from closed contexts. 
*/ atomic64_t past_runtime[I915_LAST_UABI_ENGINE_CLASS + 1]; + +#ifdef CONFIG_CGROUP_DRM + int throttle[I915_LAST_UABI_ENGINE_CLASS + 1]; + unsigned int over_budget; + u64 last; + u64 class_last[I915_LAST_UABI_ENGINE_CLASS + 1]; +#endif }; void i915_drm_clients_init(struct i915_drm_clients *clients, @@ -66,5 +75,7 @@ void i915_drm_client_fdinfo(struct seq_file *m, struct file *f); void i915_drm_clients_fini(struct i915_drm_clients *clients); u64 i915_drm_cgroup_get_active_time_us(struct drm_file *file); +int i915_drm_cgroup_signal_budget(struct drm_file *file, + u64 usage, u64 budget); #endif /* !__I915_DRM_CLIENT_H__ */
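To close the series off, the per-class saturation test that gates the throttling in i915_drm_cgroup_signal_budget() above can be expressed standalone. The 95% threshold is the one hard-coded in the patch; the helper names and plain-integer inputs below are illustrative stand-ins for the driver's per-engine busy-time queries.

```c
#include <stdint.h>

#define DIV_ROUND_CLOSEST_U64(n, d) (((n) + (d) / 2) / (d))

/*
 * Relative utilisation of a physical engine class over one scanning
 * period: busy microseconds summed over all engines of the class,
 * against wall time multiplied by the engine count. 100 means every
 * engine of the class was busy for the entire period.
 */
static unsigned int class_rel_usage(uint64_t busy_delta_us,
				    uint64_t period_us,
				    unsigned int num_engines)
{
	return (unsigned int)
		DIV_ROUND_CLOSEST_U64(busy_delta_us * 100,
				      period_us * num_engines);
}

/* The patch only throttles when a class is at or above 95% utilised. */
static int class_oversubscribed(uint64_t busy_delta_us, uint64_t period_us,
				unsigned int num_engines)
{
	return class_rel_usage(busy_delta_us, period_us, num_engines) >= 95;
}
```

For example, two engines of a class jointly busy for 1.9 s over a 1 s period sit exactly at the 95% threshold and are considered oversubscribed, while a single engine at 50% is not.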