From patchwork Tue Nov  1 22:33:10 2022
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 13946
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Akhil P Oommen, Rob Clark, Abhinav Kumar, Dmitry Baryshkov,
    Sean Paul, David Airlie, Daniel Vetter, Vladimir Lypak,
    Douglas Anderson, Chia-I Wu, Konrad Dybcio,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 2/2] drm/msm: Hangcheck progress detection
Date: Tue, 1 Nov 2022 15:33:10 -0700
Message-Id: <20221101223319.165493-3-robdclark@gmail.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221101223319.165493-1-robdclark@gmail.com>
References: <20221101223319.165493-1-robdclark@gmail.com>

From: Rob Clark

If the hangcheck timer expires, check whether the fw's position in the
cmdstream has advanced (changed) since the last timer expiration, and
allow it up to three additional "extensions" to its allotted time.  The
intention is to continue to catch "shader stuck in a loop" type hangs
quickly, but to allow more time for things that are actually making
forward progress.

Because we need to sample the CP state twice to detect that no progress
has been made, this also cuts the timer's duration in half.

v2: Fix typo (REG_A6XX_CP_CSQ_IB2_STAT), add comment

Signed-off-by: Rob Clark
Reviewed-by: Akhil P Oommen
---
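[Illustration, not part of the patch: a minimal back-of-the-envelope of
the timing this implies.  The two constants are the ones changed/added
in msm_gpu.h below; the bounds themselves are this note's estimate and
assume the timer fires on schedule and that "progress" is judged purely
from movement of the CP position registers, as in the hunks that follow.]

  #include <stdio.h>

  /* Values changed/added by this patch (msm_gpu.h) */
  #define DRM_MSM_HANGCHECK_DEFAULT_PERIOD   250 /* ms, halved from 500 */
  #define DRM_MSM_HANGCHECK_PROGRESS_RETRIES 3

  int main(void)
  {
          /* Two samples of the CP state are needed to observe "no change",
           * so even a stuck shader gets roughly two periods -- about the
           * same worst case as the old 500 ms default. */
          int stuck_ms = 2 * DRM_MSM_HANGCHECK_DEFAULT_PERIOD;

          /* A submit whose CP position keeps advancing can be granted up
           * to PROGRESS_RETRIES extra expirations before being declared
           * hung. */
          int progressing_ms = (1 + DRM_MSM_HANGCHECK_PROGRESS_RETRIES) *
                               DRM_MSM_HANGCHECK_DEFAULT_PERIOD;

          printf("stuck shader caught in        ~%d ms\n", stuck_ms);       /* ~500 */
          printf("progressing submit gets up to ~%d ms\n", progressing_ms); /* ~1000 */

          return 0;
  }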
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 16 +++++++++++++
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 34 +++++++++++++++++++++++++++
 drivers/gpu/drm/msm/msm_drv.h         |  8 ++++++-
 drivers/gpu/drm/msm/msm_gpu.c         | 20 +++++++++++++++-
 drivers/gpu/drm/msm/msm_gpu.h         |  5 +++-
 drivers/gpu/drm/msm/msm_ringbuffer.h  | 24 +++++++++++++++++++
 6 files changed, 104 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index ba22d3c918bc..9638ce71e172 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -1677,6 +1677,22 @@ static uint32_t a5xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
 	return ring->memptrs->rptr = gpu_read(gpu, REG_A5XX_CP_RB_RPTR);
 }
 
+static bool a5xx_progress(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+{
+	struct msm_cp_state cp_state = {
+		.ib1_base = gpu_read64(gpu, REG_A5XX_CP_IB1_BASE),
+		.ib2_base = gpu_read64(gpu, REG_A5XX_CP_IB2_BASE),
+		.ib1_rem = gpu_read(gpu, REG_A5XX_CP_IB1_BUFSZ),
+		.ib2_rem = gpu_read(gpu, REG_A5XX_CP_IB2_BUFSZ),
+	};
+	bool progress =
+		!!memcmp(&cp_state, &ring->last_cp_state, sizeof(cp_state));
+
+	ring->last_cp_state = cp_state;
+
+	return progress;
+}
+
 static const struct adreno_gpu_funcs funcs = {
 	.base = {
 		.get_param = adreno_get_param,
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 1ff605c18ee6..7fe60c65a1eb 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -1843,6 +1843,39 @@ static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
 	return ring->memptrs->rptr = gpu_read(gpu, REG_A6XX_CP_RB_RPTR);
 }
 
+static bool a6xx_progress(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+{
+	struct msm_cp_state cp_state = {
+		.ib1_base = gpu_read64(gpu, REG_A6XX_CP_IB1_BASE),
+		.ib2_base = gpu_read64(gpu, REG_A6XX_CP_IB2_BASE),
+		.ib1_rem = gpu_read(gpu, REG_A6XX_CP_IB1_REM_SIZE),
+		.ib2_rem = gpu_read(gpu, REG_A6XX_CP_IB2_REM_SIZE),
+	};
+	bool progress;
+
+	/*
+	 * Adjust the remaining data to account for what has already been
+	 * fetched from memory, but not yet consumed by the SQE.
+	 *
+	 * This is not *technically* correct, the amount buffered could
+	 * exceed the IB size due to hw prefetching ahead, but:
+	 *
+	 * (1) We aren't trying to find the exact position, just whether
+	 *     progress has been made
+	 * (2) The CP_REG_TO_MEM at the end of a submit should be enough
+	 *     to prevent prefetching into an unrelated submit.  (And
+	 *     either way, at some point the ROQ will be full.)
+	 */
+	cp_state.ib1_rem += gpu_read(gpu, REG_A6XX_CP_CSQ_IB1_STAT) >> 16;
+	cp_state.ib2_rem += gpu_read(gpu, REG_A6XX_CP_CSQ_IB2_STAT) >> 16;
+
+	progress = !!memcmp(&cp_state, &ring->last_cp_state, sizeof(cp_state));
+
+	ring->last_cp_state = cp_state;
+
+	return progress;
+}
+
 static u32 a618_get_speed_bin(u32 fuse)
 {
 	if (fuse == 0)
@@ -1961,6 +1994,7 @@ static const struct adreno_gpu_funcs funcs = {
 		.create_address_space = a6xx_create_address_space,
 		.create_private_address_space = a6xx_create_private_address_space,
 		.get_rptr = a6xx_get_rptr,
+		.progress = a6xx_progress,
 	},
 	.get_timestamp = a6xx_get_timestamp,
 };
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 0609daf4fa4c..876d8d5eec2f 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -225,7 +225,13 @@ struct msm_drm_private {
 
 	struct drm_atomic_state *pm_state;
 
-	/* For hang detection, in ms */
+	/**
+	 * hangcheck_period: For hang detection, in ms
+	 *
+	 * Note that in practice, a submit/job will get at least two hangcheck
+	 * periods, due to checking for progress being implemented as simply
+	 * "have the CP position registers changed since last time?"
+	 */
 	unsigned int hangcheck_period;
 
 	/**
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 3dffee54a951..136f5977b0bf 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -500,6 +500,21 @@ static void hangcheck_timer_reset(struct msm_gpu *gpu)
 			round_jiffies_up(jiffies + msecs_to_jiffies(priv->hangcheck_period)));
 }
 
+static bool made_progress(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+{
+	if (ring->hangcheck_progress_retries >= DRM_MSM_HANGCHECK_PROGRESS_RETRIES)
+		return false;
+
+	if (!gpu->funcs->progress)
+		return false;
+
+	if (!gpu->funcs->progress(gpu, ring))
+		return false;
+
+	ring->hangcheck_progress_retries++;
+	return true;
+}
+
 static void hangcheck_handler(struct timer_list *t)
 {
 	struct msm_gpu *gpu = from_timer(gpu, t, hangcheck_timer);
@@ -511,9 +526,12 @@ static void hangcheck_handler(struct timer_list *t)
 	if (fence != ring->hangcheck_fence) {
 		/* some progress has been made.. ya! */
 		ring->hangcheck_fence = fence;
-	} else if (fence_before(fence, ring->fctx->last_fence)) {
+		ring->hangcheck_progress_retries = 0;
+	} else if (fence_before(fence, ring->fctx->last_fence) &&
+			!made_progress(gpu, ring)) {
 		/* no progress and not done.. hung! */
 		ring->hangcheck_fence = fence;
+		ring->hangcheck_progress_retries = 0;
 		DRM_DEV_ERROR(dev->dev, "%s: hangcheck detected gpu lockup rb %d!\n",
 				gpu->name, ring->id);
 		DRM_DEV_ERROR(dev->dev, "%s: completed fence: %u\n",
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 585fd9c8d45a..d8f355e9f0b2 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -78,6 +78,8 @@ struct msm_gpu_funcs {
 	struct msm_gem_address_space *(*create_private_address_space)
 		(struct msm_gpu *gpu);
 	uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
+
+	bool (*progress)(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
 };
 
 /* Additional state for iommu faults: */
@@ -236,7 +238,8 @@ struct msm_gpu {
 	 */
 #define DRM_MSM_INACTIVE_PERIOD  66 /* in ms (roughly four frames) */
 
-#define DRM_MSM_HANGCHECK_DEFAULT_PERIOD 500 /* in ms */
+#define DRM_MSM_HANGCHECK_DEFAULT_PERIOD 250 /* in ms */
+#define DRM_MSM_HANGCHECK_PROGRESS_RETRIES 3
 	struct timer_list hangcheck_timer;
 
 	/* Fault info for most recent iova fault: */
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h
index 2a5045abe46e..e3d33bae3380 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.h
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.h
@@ -35,6 +35,11 @@ struct msm_rbmemptrs {
 	volatile u64 ttbr0;
 };
 
+struct msm_cp_state {
+	uint64_t ib1_base, ib2_base;
+	uint32_t ib1_rem, ib2_rem;
+};
+
 struct msm_ringbuffer {
 	struct msm_gpu *gpu;
 	int id;
@@ -64,6 +69,25 @@ struct msm_ringbuffer {
 	uint64_t memptrs_iova;
 	struct msm_fence_context *fctx;
 
+	/**
+	 * hangcheck_progress_retries:
+	 *
+	 * The number of extra hangcheck duration cycles that we have given
+	 * due to it appearing that the GPU is making forward progress.
+	 *
+	 * If the GPU appears to be making progress (ie. the CP has advanced
+	 * in the command stream, we'll allow up to DRM_MSM_HANGCHECK_PROGRESS_RETRIES
+	 * expirations of the hangcheck timer before killing the job.  In other
+	 * words we'll let the submit run for up to
+	 * DRM_MSM_HANGCHECK_DEFAULT_PERIOD * DRM_MSM_HANGCHECK_PROGRESS_RETRIES
+	 */
+	int hangcheck_progress_retries;
+
+	/**
+	 * last_cp_state: The state of the CP at the last call to gpu->progress()
+	 */
+	struct msm_cp_state last_cp_state;
+
 	/*
 	 * preempt_lock protects preemption and serializes wptr updates against
 	 * preemption.  Can be aquired from irq context.
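[Illustration, not part of this posting: the a5xx hunk above only adds the
a5xx_progress() helper; the hunks included here do not show it being hooked
up.  For it to take effect it would presumably need a funcs-table entry
analogous to the a6xx change, i.e. something along the lines of

+		.progress = a5xx_progress,

in the a5xx adreno_gpu_funcs table.]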