From patchwork Fri Sep 29 18:14:27 2023
From: Adrián Larumbe
Subject: [PATCH v8 1/5] drm/panfrost: Add cycle count GPU register definitions
Date: Fri, 29 Sep 2023 19:14:27 +0100
Message-ID: <20230929181616.2769345-2-adrian.larumbe@collabora.com>
These GPU registers will be used when programming the cycle counter, which we
need for providing accurate fdinfo drm-cycles values to user space.

Signed-off-by: Adrián Larumbe
Reviewed-by: Boris Brezillon
Reviewed-by: Steven Price
Reviewed-by: AngeloGioacchino Del Regno
---
 drivers/gpu/drm/panfrost/panfrost_regs.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/panfrost/panfrost_regs.h b/drivers/gpu/drm/panfrost/panfrost_regs.h
index 919f44ac853d..55ec807550b3 100644
--- a/drivers/gpu/drm/panfrost/panfrost_regs.h
+++ b/drivers/gpu/drm/panfrost/panfrost_regs.h
@@ -46,6 +46,8 @@
 #define GPU_CMD_SOFT_RESET		0x01
 #define GPU_CMD_PERFCNT_CLEAR		0x03
 #define GPU_CMD_PERFCNT_SAMPLE		0x04
+#define GPU_CMD_CYCLE_COUNT_START	0x05
+#define GPU_CMD_CYCLE_COUNT_STOP	0x06
 #define GPU_CMD_CLEAN_CACHES		0x07
 #define GPU_CMD_CLEAN_INV_CACHES	0x08
 #define GPU_STATUS			0x34
@@ -73,6 +75,9 @@
 #define GPU_PRFCNT_TILER_EN		0x74
 #define GPU_PRFCNT_MMU_L2_EN		0x7c
 
+#define GPU_CYCLE_COUNT_LO		0x90
+#define GPU_CYCLE_COUNT_HI		0x94
+
 #define GPU_THREAD_MAX_THREADS		0x0A0	/* (RO) Maximum number of threads per core */
 #define GPU_THREAD_MAX_WORKGROUP_SIZE	0x0A4	/* (RO) Maximum workgroup size */
 #define GPU_THREAD_MAX_BARRIER_SIZE	0x0A8	/* (RO) Maximum threads waiting at a barrier */

From patchwork Fri Sep 29 18:14:28 2023
From: Adrián Larumbe
Subject: [PATCH v8 2/5] drm/panfrost: Add fdinfo support GPU load metrics
Date: Fri, 29 Sep 2023 19:14:28 +0100
Message-ID: <20230929181616.2769345-3-adrian.larumbe@collabora.com>

The drm-stats fdinfo tags made available to user space are drm-engine,
drm-cycles, drm-maxfreq and drm-curfreq, one per job slot. This deviates from
standard practice in other DRM drivers, where a single set of key:value pairs
is provided for the whole render engine. However, Panfrost has separate queues
for fragment and vertex/tiler jobs, so a decision was made to calculate bus
cycles and workload times separately.

Maximum operating frequency is calculated at devfreq initialisation time.
Current frequency is made available to user space because nvtop uses it when
performing engine usage calculations.

It is important to bear in mind that both the GPU cycle and kernel time
figures provided are at best rough estimations, and are always reported in
excess of the actual value, for two reasons:
 - Excess time because of the delay between the end of a job's processing,
   the subsequent job IRQ and the actual time of the sample.
 - Time spent in the engine queue waiting for the GPU to pick up the next job.

To avoid race conditions during enablement/disabling, a reference counting
mechanism was introduced, along with a job flag that tells us whether a given
job increased the refcount. This is necessary because user space can toggle
cycle counting through a debugfs file, and a given job might have been in
flight by the time cycle counting was disabled.

The main goal of the debugfs cycle counter knob is to let tools like nvtop or
IGT's gputop switch it at any time, to avoid wasting power when no engine
usage measuring is necessary.

Also add a documentation file explaining the possible values for fdinfo's
engine keystrings and the Panfrost-specific drm-curfreq- pairs.
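For illustration only (this sketch is not part of the patch): a monitoring
tool could enable cycle counting through the debugfs knob added below and
then sample a client's fdinfo keys to estimate engine utilisation. The
debugfs and device paths here are assumptions (debugfs mounted at
/sys/kernel/debug, DRM minor 0, render node renderD128); the key names are
the ones documented by this patch.

/*
 * Hedged user-space sketch, not part of the patch: enable Panfrost cycle
 * counting via the debugfs "profile" knob and estimate fragment-engine
 * utilisation from two fdinfo samples taken one second apart.
 */
#include <fcntl.h>
#include <inttypes.h>
#include <stdio.h>
#include <unistd.h>

static uint64_t read_fragment_ns(int fd)
{
        char path[64], line[256];
        uint64_t ns = 0;
        FILE *f;

        snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", fd);
        f = fopen(path, "r");
        if (!f)
                return 0;
        while (fgets(line, sizeof(line), f)) {
                /* Key emitted by panfrost_gpu_show_fdinfo() in this patch */
                if (sscanf(line, "drm-engine-fragment: %" SCNu64 " ns", &ns) == 1)
                        break;
        }
        fclose(f);
        return ns;
}

int main(void)
{
        int fd = open("/dev/dri/renderD128", O_RDWR);   /* assumed render node */
        FILE *knob = fopen("/sys/kernel/debug/dri/0/profile", "w"); /* assumed path */
        uint64_t t0, t1;

        if (knob) {
                fputs("1\n", knob);     /* writing "0" stops cycle counting again */
                fclose(knob);
        }

        t0 = read_fragment_ns(fd);
        sleep(1);
        t1 = read_fragment_ns(fd);

        /* Fraction of the last second this client kept the fragment slot busy */
        printf("fragment utilisation: %.1f%%\n", (double)(t1 - t0) / 1e9 * 100.0);

        close(fd);
        return 0;
}

Tools like nvtop can additionally relate the drm-cycles delta to drm-curfreq,
which is why the current operating frequency is exposed alongside the cycle
counts.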
Signed-off-by: Adrián Larumbe
Reviewed-by: Boris Brezillon
Reviewed-by: Steven Price
Reviewed-by: AngeloGioacchino Del Regno
---
 Documentation/gpu/drm-usage-stats.rst       |  1 +
 Documentation/gpu/panfrost.rst              | 38 ++++++++++++++
 drivers/gpu/drm/panfrost/Makefile           |  2 +
 drivers/gpu/drm/panfrost/panfrost_debugfs.c | 21 ++++++++
 drivers/gpu/drm/panfrost/panfrost_debugfs.h | 14 +++++
 drivers/gpu/drm/panfrost/panfrost_devfreq.c |  8 +++
 drivers/gpu/drm/panfrost/panfrost_devfreq.h |  3 ++
 drivers/gpu/drm/panfrost/panfrost_device.c  |  2 +
 drivers/gpu/drm/panfrost/panfrost_device.h  | 13 +++++
 drivers/gpu/drm/panfrost/panfrost_drv.c     | 58 ++++++++++++++++++++-
 drivers/gpu/drm/panfrost/panfrost_gpu.c     | 41 +++++++++++++++
 drivers/gpu/drm/panfrost/panfrost_gpu.h     |  4 ++
 drivers/gpu/drm/panfrost/panfrost_job.c     | 24 +++++++++
 drivers/gpu/drm/panfrost/panfrost_job.h     |  5 ++
 14 files changed, 233 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/gpu/panfrost.rst
 create mode 100644 drivers/gpu/drm/panfrost/panfrost_debugfs.c
 create mode 100644 drivers/gpu/drm/panfrost/panfrost_debugfs.h

diff --git a/Documentation/gpu/drm-usage-stats.rst b/Documentation/gpu/drm-usage-stats.rst
index fe35a291ff3e..8d963cd7c1b7 100644
--- a/Documentation/gpu/drm-usage-stats.rst
+++ b/Documentation/gpu/drm-usage-stats.rst
@@ -169,3 +169,4 @@ Driver specific implementations
 -------------------------------
 
 :ref:`i915-usage-stats`
+:ref:`panfrost-usage-stats`
diff --git a/Documentation/gpu/panfrost.rst b/Documentation/gpu/panfrost.rst
new file mode 100644
index 000000000000..ecc48ba5ac11
--- /dev/null
+++ b/Documentation/gpu/panfrost.rst
@@ -0,0 +1,38 @@
+===========================
+ drm/Panfrost Mali Driver
+===========================
+
+.. _panfrost-usage-stats:
+
+Panfrost DRM client usage stats implementation
+==============================================
+
+The drm/Panfrost driver implements the DRM client usage stats specification as
+documented in :ref:`drm-client-usage-stats`.
+
+Example of the output showing the implemented key value pairs and entirety of
+the currently possible format options:
+
+::
+      pos:    0
+      flags:  02400002
+      mnt_id: 27
+      ino:    531
+      drm-driver:     panfrost
+      drm-client-id:  14
+      drm-engine-fragment:    1846584880 ns
+      drm-cycles-fragment:    1424359409
+      drm-maxfreq-fragment:   799999987 Hz
+      drm-curfreq-fragment:   799999987 Hz
+      drm-engine-vertex-tiler:        71932239 ns
+      drm-cycles-vertex-tiler:        52617357
+      drm-maxfreq-vertex-tiler:       799999987 Hz
+      drm-curfreq-vertex-tiler:       799999987 Hz
+      drm-total-memory:       290 MiB
+      drm-shared-memory:      0 MiB
+      drm-active-memory:      226 MiB
+      drm-resident-memory:    36496 KiB
+      drm-purgeable-memory:   128 KiB
+
+Possible `drm-engine-` key names are: `fragment`, and `vertex-tiler`.
+`drm-curfreq-` values convey the current operating frequency for that engine.
diff --git a/drivers/gpu/drm/panfrost/Makefile b/drivers/gpu/drm/panfrost/Makefile
index 7da2b3f02ed9..2c01c1e7523e 100644
--- a/drivers/gpu/drm/panfrost/Makefile
+++ b/drivers/gpu/drm/panfrost/Makefile
@@ -12,4 +12,6 @@ panfrost-y := \
 	panfrost_perfcnt.o \
 	panfrost_dump.o
 
+panfrost-$(CONFIG_DEBUG_FS) += panfrost_debugfs.o
+
 obj-$(CONFIG_DRM_PANFROST) += panfrost.o
diff --git a/drivers/gpu/drm/panfrost/panfrost_debugfs.c b/drivers/gpu/drm/panfrost/panfrost_debugfs.c
new file mode 100644
index 000000000000..72d4286a6bf7
--- /dev/null
+++ b/drivers/gpu/drm/panfrost/panfrost_debugfs.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright 2023 Collabora ltd. */
+/* Copyright 2023 Amazon.com, Inc. or its affiliates. */
+
+#include
+#include
+#include
+#include
+#include
+
+#include "panfrost_device.h"
+#include "panfrost_gpu.h"
+#include "panfrost_debugfs.h"
+
+void panfrost_debugfs_init(struct drm_minor *minor)
+{
+	struct drm_device *dev = minor->dev;
+	struct panfrost_device *pfdev = platform_get_drvdata(to_platform_device(dev->dev));
+
+	debugfs_create_atomic_t("profile", 0600, minor->debugfs_root, &pfdev->profile_mode);
+}
diff --git a/drivers/gpu/drm/panfrost/panfrost_debugfs.h b/drivers/gpu/drm/panfrost/panfrost_debugfs.h
new file mode 100644
index 000000000000..c5af5f35877f
--- /dev/null
+++ b/drivers/gpu/drm/panfrost/panfrost_debugfs.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2023 Collabora ltd.
+ * Copyright 2023 Amazon.com, Inc. or its affiliates.
+ */
+
+#ifndef PANFROST_DEBUGFS_H
+#define PANFROST_DEBUGFS_H
+
+#ifdef CONFIG_DEBUG_FS
+void panfrost_debugfs_init(struct drm_minor *minor);
+#endif
+
+#endif  /* PANFROST_DEBUGFS_H */
diff --git a/drivers/gpu/drm/panfrost/panfrost_devfreq.c b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
index 58dfb15a8757..28caffc689e2 100644
--- a/drivers/gpu/drm/panfrost/panfrost_devfreq.c
+++ b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
@@ -58,6 +58,7 @@ static int panfrost_devfreq_get_dev_status(struct device *dev,
 	spin_lock_irqsave(&pfdevfreq->lock, irqflags);
 
 	panfrost_devfreq_update_utilization(pfdevfreq);
+	pfdevfreq->current_frequency = status->current_frequency;
 
 	status->total_time = ktime_to_ns(ktime_add(pfdevfreq->busy_time,
 						   pfdevfreq->idle_time));
@@ -117,6 +118,7 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev)
 	struct devfreq *devfreq;
 	struct thermal_cooling_device *cooling;
 	struct panfrost_devfreq *pfdevfreq = &pfdev->pfdevfreq;
+	unsigned long freq = ULONG_MAX;
 
 	if (pfdev->comp->num_supplies > 1) {
 		/*
@@ -172,6 +174,12 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev)
 		return ret;
 	}
 
+	/* Find the fastest defined rate */
+	opp = dev_pm_opp_find_freq_floor(dev, &freq);
+	if (IS_ERR(opp))
+		return PTR_ERR(opp);
+	pfdevfreq->fast_rate = freq;
+
 	dev_pm_opp_put(opp);
 
 	/*
diff --git a/drivers/gpu/drm/panfrost/panfrost_devfreq.h b/drivers/gpu/drm/panfrost/panfrost_devfreq.h
index 1514c1f9d91c..48dbe185f206 100644
--- a/drivers/gpu/drm/panfrost/panfrost_devfreq.h
+++ b/drivers/gpu/drm/panfrost/panfrost_devfreq.h
@@ -19,6 +19,9 @@ struct panfrost_devfreq {
 	struct devfreq_simple_ondemand_data gov_data;
 	bool opp_of_table_added;
 
+	unsigned long current_frequency;
+	unsigned long fast_rate;
+
 	ktime_t busy_time;
 	ktime_t idle_time;
 	ktime_t time_last_update;
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c b/drivers/gpu/drm/panfrost/panfrost_device.c
index fa1a086a862b..28f7046e1b1a 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.c
+++ b/drivers/gpu/drm/panfrost/panfrost_device.c
@@ -207,6 +207,8 @@ int panfrost_device_init(struct panfrost_device *pfdev)
 
 	spin_lock_init(&pfdev->as_lock);
 
+	spin_lock_init(&pfdev->cycle_counter.lock);
+
 	err = panfrost_clk_init(pfdev);
 	if (err) {
 		dev_err(pfdev->dev, "clk init failed %d\n", err);
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
index b0126b9fbadc..1e85656dc2f7 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.h
+++ b/drivers/gpu/drm/panfrost/panfrost_device.h
@@ -107,6 +107,7 @@ struct panfrost_device {
 	struct list_head scheduled_jobs;
 
 	struct panfrost_perfcnt *perfcnt;
+	atomic_t profile_mode;
 
 	struct mutex sched_lock;
 
@@ -121,6 +122,11 @@ struct panfrost_device {
 	struct shrinker shrinker;
 
 	struct panfrost_devfreq pfdevfreq;
+
+	struct {
+		atomic_t use_count;
+		spinlock_t lock;
+	} cycle_counter;
 };
 
 struct panfrost_mmu {
@@ -135,12 +141,19 @@ struct panfrost_mmu {
 	struct list_head list;
 };
 
+struct panfrost_engine_usage {
+	unsigned long long elapsed_ns[NUM_JOB_SLOTS];
+	unsigned long long cycles[NUM_JOB_SLOTS];
+};
+
 struct panfrost_file_priv {
 	struct panfrost_device *pfdev;
 
 	struct drm_sched_entity sched_entity[NUM_JOB_SLOTS];
 
 	struct panfrost_mmu *mmu;
+
+	struct panfrost_engine_usage engine_usage;
 };
 
 static inline struct panfrost_device *to_panfrost_device(struct drm_device *ddev)
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index a2ab99698ca8..97e5bc4a82c8 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -20,6 +20,7 @@
 #include "panfrost_job.h"
 #include "panfrost_gpu.h"
 #include "panfrost_perfcnt.h"
+#include "panfrost_debugfs.h"
 
 static bool unstable_ioctls;
 module_param_unsafe(unstable_ioctls, bool, 0600);
@@ -267,6 +268,7 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
 	job->requirements = args->requirements;
 	job->flush_id = panfrost_gpu_get_latest_flush_id(pfdev);
 	job->mmu = file_priv->mmu;
+	job->engine_usage = &file_priv->engine_usage;
 
 	slot = panfrost_job_get_slot(job);
 
@@ -523,7 +525,56 @@ static const struct drm_ioctl_desc panfrost_drm_driver_ioctls[] = {
 	PANFROST_IOCTL(MADVISE,		madvise,	DRM_RENDER_ALLOW),
 };
 
-DEFINE_DRM_GEM_FOPS(panfrost_drm_driver_fops);
+static void panfrost_gpu_show_fdinfo(struct panfrost_device *pfdev,
+				     struct panfrost_file_priv *panfrost_priv,
+				     struct drm_printer *p)
+{
+	int i;
+
+	/*
+	 * IMPORTANT NOTE: drm-cycles and drm-engine measurements are not
+	 * accurate, as they only provide a rough estimation of the number of
+	 * GPU cycles and CPU time spent in a given context. This is due to two
+	 * different factors:
+	 * - Firstly, we must consider the time the CPU and then the kernel
+	 *   takes to process the GPU interrupt, which means additional time and
+	 *   GPU cycles will be added in excess to the real figure.
+	 * - Secondly, the pipelining done by the Job Manager (2 job slots per
+	 *   engine) implies there is no way to know exactly how much time each
+	 *   job spent on the GPU.
+	 */
+
+	static const char * const engine_names[] = {
+		"fragment", "vertex-tiler", "compute-only"
+	};
+
+	BUILD_BUG_ON(ARRAY_SIZE(engine_names) != NUM_JOB_SLOTS);
+
+	for (i = 0; i < NUM_JOB_SLOTS - 1; i++) {
+		drm_printf(p, "drm-engine-%s:\t%llu ns\n",
+			   engine_names[i], panfrost_priv->engine_usage.elapsed_ns[i]);
+		drm_printf(p, "drm-cycles-%s:\t%llu\n",
+			   engine_names[i], panfrost_priv->engine_usage.cycles[i]);
+		drm_printf(p, "drm-maxfreq-%s:\t%lu Hz\n",
+			   engine_names[i], pfdev->pfdevfreq.fast_rate);
+		drm_printf(p, "drm-curfreq-%s:\t%lu Hz\n",
+			   engine_names[i], pfdev->pfdevfreq.current_frequency);
+	}
+}
+
+static void panfrost_show_fdinfo(struct drm_printer *p, struct drm_file *file)
+{
+	struct drm_device *dev = file->minor->dev;
+	struct panfrost_device *pfdev = dev->dev_private;
+
+	panfrost_gpu_show_fdinfo(pfdev, file->driver_priv, p);
+}
+
+static const struct file_operations panfrost_drm_driver_fops = {
+	.owner = THIS_MODULE,
+	DRM_GEM_FOPS,
+	.show_fdinfo = drm_show_fdinfo,
+};
 
 /*
  * Panfrost driver version:
@@ -535,6 +586,7 @@ static const struct drm_driver panfrost_drm_driver = {
 	.driver_features	= DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ,
 	.open			= panfrost_open,
 	.postclose		= panfrost_postclose,
+	.show_fdinfo		= panfrost_show_fdinfo,
 	.ioctls			= panfrost_drm_driver_ioctls,
 	.num_ioctls		= ARRAY_SIZE(panfrost_drm_driver_ioctls),
 	.fops			= &panfrost_drm_driver_fops,
@@ -546,6 +598,10 @@ static const struct drm_driver panfrost_drm_driver = {
 
 	.gem_create_object	= panfrost_gem_create_object,
 	.gem_prime_import_sg_table = panfrost_gem_prime_import_sg_table,
+
+#ifdef CONFIG_DEBUG_FS
+	.debugfs_init		= panfrost_debugfs_init,
+#endif
 };
 
 static int panfrost_probe(struct platform_device *pdev)
diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.c b/drivers/gpu/drm/panfrost/panfrost_gpu.c
index 2faa344d89ee..f0be7e19b13e 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gpu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gpu.c
@@ -73,6 +73,13 @@ int panfrost_gpu_soft_reset(struct panfrost_device *pfdev)
 	gpu_write(pfdev, GPU_INT_CLEAR, GPU_IRQ_MASK_ALL);
 	gpu_write(pfdev, GPU_INT_MASK, GPU_IRQ_MASK_ALL);
 
+	/*
+	 * All in-flight jobs should have released their cycle
+	 * counter references upon reset, but let us make sure
+	 */
+	if (drm_WARN_ON(pfdev->ddev, atomic_read(&pfdev->cycle_counter.use_count) != 0))
+		atomic_set(&pfdev->cycle_counter.use_count, 0);
+
 	return 0;
 }
 
@@ -321,6 +328,40 @@ static void panfrost_gpu_init_features(struct panfrost_device *pfdev)
 		 pfdev->features.shader_present, pfdev->features.l2_present);
 }
 
+void panfrost_cycle_counter_get(struct panfrost_device *pfdev)
+{
+	if (atomic_inc_not_zero(&pfdev->cycle_counter.use_count))
+		return;
+
+	spin_lock(&pfdev->cycle_counter.lock);
+	if (atomic_inc_return(&pfdev->cycle_counter.use_count) == 1)
+		gpu_write(pfdev, GPU_CMD, GPU_CMD_CYCLE_COUNT_START);
+	spin_unlock(&pfdev->cycle_counter.lock);
+}
+
+void panfrost_cycle_counter_put(struct panfrost_device *pfdev)
+{
+	if (atomic_add_unless(&pfdev->cycle_counter.use_count, -1, 1))
+		return;
+
+	spin_lock(&pfdev->cycle_counter.lock);
+	if (atomic_dec_return(&pfdev->cycle_counter.use_count) == 0)
+		gpu_write(pfdev, GPU_CMD, GPU_CMD_CYCLE_COUNT_STOP);
+	spin_unlock(&pfdev->cycle_counter.lock);
+}
+
+unsigned long long panfrost_cycle_counter_read(struct panfrost_device *pfdev)
+{
+	u32 hi, lo;
+
+	do {
+		hi = gpu_read(pfdev, GPU_CYCLE_COUNT_HI);
+		lo = gpu_read(pfdev, GPU_CYCLE_COUNT_LO);
+	} while (hi != gpu_read(pfdev, GPU_CYCLE_COUNT_HI));
+
+	return ((u64)hi << 32) | lo;
+}
+
 void panfrost_gpu_power_on(struct panfrost_device *pfdev)
 {
 	int ret;
diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.h b/drivers/gpu/drm/panfrost/panfrost_gpu.h
index 468c51e7e46d..876fdad9f721 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gpu.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gpu.h
@@ -16,6 +16,10 @@ int panfrost_gpu_soft_reset(struct panfrost_device *pfdev);
 void panfrost_gpu_power_on(struct panfrost_device *pfdev);
 void panfrost_gpu_power_off(struct panfrost_device *pfdev);
 
+void panfrost_cycle_counter_get(struct panfrost_device *pfdev);
+void panfrost_cycle_counter_put(struct panfrost_device *pfdev);
+unsigned long long panfrost_cycle_counter_read(struct panfrost_device *pfdev);
+
 void panfrost_gpu_amlogic_quirk(struct panfrost_device *pfdev);
 
 #endif
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 033f5e684707..fb16de2d0420 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -159,6 +159,16 @@ panfrost_dequeue_job(struct panfrost_device *pfdev, int slot)
 	struct panfrost_job *job = pfdev->jobs[slot][0];
 
 	WARN_ON(!job);
+	if (job->is_profiled) {
+		if (job->engine_usage) {
+			job->engine_usage->elapsed_ns[slot] +=
+				ktime_to_ns(ktime_sub(ktime_get(), job->start_time));
+			job->engine_usage->cycles[slot] +=
+				panfrost_cycle_counter_read(pfdev) - job->start_cycles;
+		}
+		panfrost_cycle_counter_put(job->pfdev);
+	}
+
 	pfdev->jobs[slot][0] = pfdev->jobs[slot][1];
 	pfdev->jobs[slot][1] = NULL;
 
@@ -233,6 +243,13 @@ static void panfrost_job_hw_submit(struct panfrost_job *job, int js)
 	subslot = panfrost_enqueue_job(pfdev, js, job);
 	/* Don't queue the job if a reset is in progress */
 	if (!atomic_read(&pfdev->reset.pending)) {
+		if (atomic_read(&pfdev->profile_mode)) {
+			panfrost_cycle_counter_get(pfdev);
+			job->is_profiled = true;
+			job->start_time = ktime_get();
+			job->start_cycles = panfrost_cycle_counter_read(pfdev);
+		}
+
 		job_write(pfdev, JS_COMMAND_NEXT(js), JS_COMMAND_START);
 		dev_dbg(pfdev->dev,
 			"JS: Submitting atom %p to js[%d][%d] with head=0x%llx AS %d",
@@ -660,10 +677,14 @@ panfrost_reset(struct panfrost_device *pfdev,
 	 * stuck jobs. Let's make sure the PM counters stay balanced by
 	 * manually calling pm_runtime_put_noidle() and
 	 * panfrost_devfreq_record_idle() for each stuck job.
+	 * Let's also make sure the cycle counting register's refcnt is
+	 * kept balanced to prevent it from running forever
 	 */
 	spin_lock(&pfdev->js->job_lock);
 	for (i = 0; i < NUM_JOB_SLOTS; i++) {
 		for (j = 0; j < ARRAY_SIZE(pfdev->jobs[0]) && pfdev->jobs[i][j]; j++) {
+			if (pfdev->jobs[i][j]->is_profiled)
+				panfrost_cycle_counter_put(pfdev->jobs[i][j]->pfdev);
 			pm_runtime_put_noidle(pfdev->dev);
 			panfrost_devfreq_record_idle(&pfdev->pfdevfreq);
 		}
@@ -926,6 +947,9 @@ void panfrost_job_close(struct panfrost_file_priv *panfrost_priv)
 			}
 
 			job_write(pfdev, JS_COMMAND(i), cmd);
+
+			/* Jobs can outlive their file context */
+			job->engine_usage = NULL;
 		}
 	}
 	spin_unlock(&pfdev->js->job_lock);
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.h b/drivers/gpu/drm/panfrost/panfrost_job.h
index 8becc1ba0eb9..17ff808dba07 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.h
+++ b/drivers/gpu/drm/panfrost/panfrost_job.h
@@ -32,6 +32,11 @@ struct panfrost_job {
 
 	/* Fence to be signaled by drm-sched once its done with the job */
 	struct dma_fence *render_done_fence;
+
+	struct panfrost_engine_usage *engine_usage;
+	bool is_profiled;
+	ktime_t start_time;
+	u64 start_cycles;
 };
 
 int panfrost_job_init(struct panfrost_device *pfdev);

From patchwork Fri Sep 29 18:14:29 2023
From: Adrián Larumbe
Subject: [PATCH v8 3/5] drm/panfrost: Add fdinfo support for memory stats
Date: Fri, 29 Sep 2023 19:14:29 +0100
Message-ID: <20230929181616.2769345-4-adrian.larumbe@collabora.com>
A new DRM GEM object function is added so that drm_show_memory_stats can
provide more accurate memory usage numbers.

Ideally, in panfrost_gem_status, the BO's purgeable flag would be checked
after locking the driver's shrinker mutex, but drm_show_memory_stats already
holds the DRM file's object handle database spinlock, so there is potential
for a race condition here.

Signed-off-by: Adrián Larumbe
Reviewed-by: Boris Brezillon
Reviewed-by: Steven Price
Reviewed-by: AngeloGioacchino Del Regno
---
 drivers/gpu/drm/panfrost/panfrost_drv.c |  2 ++
 drivers/gpu/drm/panfrost/panfrost_gem.c | 15 +++++++++++++++
 2 files changed, 17 insertions(+)

diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 97e5bc4a82c8..b834777b409b 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -568,6 +568,8 @@ static void panfrost_show_fdinfo(struct drm_printer *p, struct drm_file *file)
 	struct panfrost_device *pfdev = dev->dev_private;
 
 	panfrost_gpu_show_fdinfo(pfdev, file->driver_priv, p);
+
+	drm_show_memory_stats(p, file);
 }
 
 static const struct file_operations panfrost_drm_driver_fops = {
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 3c812fbd126f..de238b71b321 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -195,6 +195,20 @@ static int panfrost_gem_pin(struct drm_gem_object *obj)
 	return drm_gem_shmem_pin(&bo->base);
 }
 
+static enum drm_gem_object_status panfrost_gem_status(struct drm_gem_object *obj)
+{
+	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
+	enum drm_gem_object_status res = 0;
+
+	if (bo->base.pages)
+		res |= DRM_GEM_OBJECT_RESIDENT;
+
+	if (bo->base.madv == PANFROST_MADV_DONTNEED)
+		res |= DRM_GEM_OBJECT_PURGEABLE;
+
+	return res;
+}
+
 static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.free = panfrost_gem_free_object,
 	.open = panfrost_gem_open,
@@ -206,6 +220,7 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.vmap = drm_gem_shmem_object_vmap,
 	.vunmap = drm_gem_shmem_object_vunmap,
 	.mmap = drm_gem_shmem_object_mmap,
+	.status = panfrost_gem_status,
 	.vm_ops = &drm_gem_shmem_vm_ops,
 };
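As background for why the new .status callback matters (an illustrative
sketch only, not the actual drivers/gpu/drm/drm_file.c code, part of which
patch 4/5 below quotes): drm_show_memory_stats walks every GEM handle of the
DRM file and folds each object's size into the resident/active/purgeable
totals according to those status bits, roughly like this. The names here are
simplified.

#include <stddef.h>

/* Simplified model of the per-object accounting drm_show_memory_stats()
 * performs; illustration only. */
enum gem_status {
        GEM_RESIDENT  = 1 << 0,
        GEM_PURGEABLE = 1 << 1,
};

struct memory_totals {
        size_t resident;   /* drm-resident-memory  */
        size_t active;     /* drm-active-memory    */
        size_t purgeable;  /* drm-purgeable-memory */
};

static void account_bo(struct memory_totals *t, size_t size,
                       unsigned int status, int gpu_busy)
{
        if (status & GEM_RESIDENT)
                t->resident += size;
        else
                status &= ~GEM_PURGEABLE;  /* not backed by pages: not purgeable */

        if (gpu_busy) {
                t->active += size;
                status &= ~GEM_PURGEABLE;  /* still in use by the GPU: not purgeable */
        }

        if (status & GEM_PURGEABLE)
                t->purgeable += size;
}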
From patchwork Fri Sep 29 18:14:30 2023
From: Adrián Larumbe
Subject: [PATCH v8 4/5] drm/drm_file: Add DRM obj's RSS reporting function for fdinfo
Date: Fri, 29 Sep 2023 19:14:30 +0100
Message-ID: <20230929181616.2769345-5-adrian.larumbe@collabora.com>

Some BOs might be mapped onto physical memory chunkwise and on demand, like
Panfrost's tiler heap. In this case, even though the drm_gem_shmem_object
page array might already be allocated, only a very small fraction of the BO
is currently backed by system memory, but drm_show_memory_stats will then
proceed to add its entire virtual size to the file's total resident size
regardless.

This led to very unrealistic RSS sizes being reckoned for Panfrost, where
said tiler heap buffer is initially allocated with a virtual size of 128 MiB,
but only a small part of it will eventually be backed by system memory after
successive GPU page faults.

Provide a new generic DRM object function that allows drivers to return more
accurate RSS and purgeable sizes for their BOs.

Signed-off-by: Adrián Larumbe
Reviewed-by: Boris Brezillon
Reviewed-by: Steven Price
Reviewed-by: AngeloGioacchino Del Regno
Reviewed-by: Rob Clark
---
 drivers/gpu/drm/drm_file.c | 8 +++++---
 include/drm/drm_gem.h      | 9 +++++++++
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
index 883d83bc0e3d..9a1bd8d0d785 100644
--- a/drivers/gpu/drm/drm_file.c
+++ b/drivers/gpu/drm/drm_file.c
@@ -930,6 +930,8 @@ void drm_show_memory_stats(struct drm_printer *p, struct drm_file *file)
 	spin_lock(&file->table_lock);
 	idr_for_each_entry (&file->object_idr, obj, id) {
 		enum drm_gem_object_status s = 0;
+		size_t add_size = (obj->funcs && obj->funcs->rss) ?
+			obj->funcs->rss(obj) : obj->size;
 
 		if (obj->funcs && obj->funcs->status) {
 			s = obj->funcs->status(obj);
@@ -944,7 +946,7 @@ void drm_show_memory_stats(struct drm_printer *p, struct drm_file *file)
 		}
 
 		if (s & DRM_GEM_OBJECT_RESIDENT) {
-			status.resident += obj->size;
+			status.resident += add_size;
 		} else {
 			/* If already purged or not yet backed by pages, don't
 			 * count it as purgeable:
@@ -953,14 +955,14 @@ void drm_show_memory_stats(struct drm_printer *p, struct drm_file *file)
 		}
 
 		if (!dma_resv_test_signaled(obj->resv, dma_resv_usage_rw(true))) {
-			status.active += obj->size;
+			status.active += add_size;
 
 			/* If still active, don't count as purgeable: */
 			s &= ~DRM_GEM_OBJECT_PURGEABLE;
 		}
 
 		if (s & DRM_GEM_OBJECT_PURGEABLE)
-			status.purgeable += obj->size;
+			status.purgeable += add_size;
 	}
 	spin_unlock(&file->table_lock);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index bc9f6aa2f3fe..16364487fde9 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -208,6 +208,15 @@ struct drm_gem_object_funcs {
 	 */
 	enum drm_gem_object_status (*status)(struct drm_gem_object *obj);
 
+	/**
+	 * @rss:
+	 *
+	 * Return resident size of the object in physical memory.
+	 *
+	 * Called by drm_show_memory_stats().
+	 */
+	size_t (*rss)(struct drm_gem_object *obj);
+
 	/**
 	 * @vm_ops:
 	 *
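To make the effect concrete, a small worked example using the figures from
the commit message (illustrative numbers only, not part of the patch): a
Panfrost tiler heap BO keeps its 128 MiB virtual size, but gains physical
backing one 2 MiB chunk per GPU page fault, so the new rss() hook lets fdinfo
report only the faulted-in portion.

#include <stdio.h>

#define SZ_2M   (2UL << 20)
#define SZ_128M (128UL << 20)

int main(void)
{
        unsigned long heap_rss_size = 0;
        int faults;

        /* Pretend three growable-heap page faults have been serviced; each
         * one adds SZ_2M, as panfrost_mmu_map_fault_addr() does in the next
         * patch of this series. */
        for (faults = 0; faults < 3; faults++)
                heap_rss_size += SZ_2M;

        /* Without the rss() hook, the whole virtual size was counted. */
        printf("resident before: %lu MiB\n", SZ_128M >> 20);
        /* With it, only the part actually backed by system memory counts. */
        printf("resident after:  %lu MiB\n", heap_rss_size >> 20);

        return 0;
}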
From patchwork Fri Sep 29 18:14:31 2023
From: Adrián Larumbe
Subject: [PATCH v8 5/5] drm/panfrost: Implement generic DRM object RSS reporting function
Date: Fri, 29 Sep 2023 19:14:31 +0100
Message-ID: <20230929181616.2769345-6-adrian.larumbe@collabora.com>
A BO's RSS is updated every time new pages are allocated on demand and mapped
for the object in the GPU page fault IRQ handler, but only for heap buffers.
The reason this is unnecessary for non-heap buffers is that they are mapped
onto the GPU's VA space and backed by physical memory in their entirety at BO
creation time.

This calculation is also unnecessary for imported PRIME objects, since heap
buffers cannot be exported by our driver, and the actual BO RSS size is the
one reported in its attached dmabuf structure.

Signed-off-by: Adrián Larumbe
Reviewed-by: Boris Brezillon
Reviewed-by: Steven Price
Reviewed-by: AngeloGioacchino Del Regno
---
 drivers/gpu/drm/panfrost/panfrost_gem.c | 15 +++++++++++++++
 drivers/gpu/drm/panfrost/panfrost_gem.h |  5 +++++
 drivers/gpu/drm/panfrost/panfrost_mmu.c |  1 +
 3 files changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index de238b71b321..0cf64456e29a 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -209,6 +209,20 @@ static enum drm_gem_object_status panfrost_gem_status(struct drm_gem_object *obj
 	return res;
 }
 
+static size_t panfrost_gem_rss(struct drm_gem_object *obj)
+{
+	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
+
+	if (bo->is_heap) {
+		return bo->heap_rss_size;
+	} else if (bo->base.pages) {
+		WARN_ON(bo->heap_rss_size);
+		return bo->base.base.size;
+	}
+
+	return 0;
+}
+
 static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.free = panfrost_gem_free_object,
 	.open = panfrost_gem_open,
@@ -221,6 +235,7 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
 	.vunmap = drm_gem_shmem_object_vunmap,
 	.mmap = drm_gem_shmem_object_mmap,
 	.status = panfrost_gem_status,
+	.rss = panfrost_gem_rss,
 	.vm_ops = &drm_gem_shmem_vm_ops,
 };
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
index ad2877eeeccd..13c0a8149c3a 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
@@ -36,6 +36,11 @@ struct panfrost_gem_object {
 	 */
 	atomic_t gpu_usecount;
 
+	/*
+	 * Object chunk size currently mapped onto physical memory
+	 */
+	size_t heap_rss_size;
+
 	bool noexec		:1;
 	bool is_heap		:1;
 };
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index d54d4e7b2195..846dd697c410 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -522,6 +522,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		   IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
 
 	bomapping->active = true;
+	bo->heap_rss_size += SZ_2M;
 
 	dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);