From patchwork Fri Feb 23 14:37:12 2024
X-Patchwork-Submitter: Shiju Jose
X-Patchwork-Id: 205424
Subject: [RFC PATCH v7 01/12] cxl/mbox: Add GET_SUPPORTED_FEATURES mailbox command
Date: Fri, 23 Feb 2024 22:37:12 +0800
Message-ID:
<20240223143723.1574-2-shiju.jose@huawei.com> X-Mailer: git-send-email 2.35.1.windows.2 In-Reply-To: <20240223143723.1574-1-shiju.jose@huawei.com> References: <20240223143723.1574-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500003.china.huawei.com (7.191.162.67) To lhrpeml500006.china.huawei.com (7.191.161.198) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1791700891633975926 X-GMAIL-MSGID: 1791700891633975926 From: Shiju Jose Add support for GET_SUPPORTED_FEATURES mailbox command. CXL spec 3.1 section 8.2.9.6 describes optional device specific features. CXL devices supports features with changeable attributes. Get Supported Features retrieves the list of supported device specific features. The settings of a feature can be retrieved using Get Feature and optionally modified using Set Feature. Signed-off-by: Shiju Jose --- drivers/cxl/core/mbox.c | 27 +++++++++++++++++++ drivers/cxl/cxlmem.h | 58 +++++++++++++++++++++++++++++++++++++++++ 2 files changed, 85 insertions(+) diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index 27166a411705..79cc7fd433aa 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -1290,6 +1290,33 @@ int cxl_set_timestamp(struct cxl_memdev_state *mds) } EXPORT_SYMBOL_NS_GPL(cxl_set_timestamp, CXL); +int cxl_get_supported_features(struct cxl_memdev_state *mds, + u32 count, u16 start_index, + struct cxl_mbox_get_supp_feats_out *feats_out) +{ + struct cxl_mbox_get_supp_feats_in pi; + struct cxl_mbox_cmd mbox_cmd; + int rc; + + pi.count = cpu_to_le32(count); + pi.start_index = cpu_to_le16(start_index); + + mbox_cmd = (struct cxl_mbox_cmd) { + .opcode = CXL_MBOX_OP_GET_SUPPORTED_FEATURES, + .size_in = sizeof(pi), + .payload_in = &pi, + .size_out = count, + .payload_out = feats_out, + .min_out = sizeof(*feats_out), + }; + rc = cxl_internal_send_cmd(mds, &mbox_cmd); + if (rc < 0) + return rc; + + return 0; +} +EXPORT_SYMBOL_NS_GPL(cxl_get_supported_features, CXL); + int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, struct cxl_region *cxlr) { diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 5303d6942b88..dd66523cd96a 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -529,6 +529,7 @@ enum cxl_opcode { CXL_MBOX_OP_SET_TIMESTAMP = 0x0301, CXL_MBOX_OP_GET_SUPPORTED_LOGS = 0x0400, CXL_MBOX_OP_GET_LOG = 0x0401, + CXL_MBOX_OP_GET_SUPPORTED_FEATURES = 0x0500, CXL_MBOX_OP_IDENTIFY = 0x4000, CXL_MBOX_OP_GET_PARTITION_INFO = 0x4100, CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101, @@ -698,6 +699,60 @@ struct cxl_mbox_set_timestamp_in { } __packed; +/* Get Supported Features CXL 3.1 Spec 8.2.9.6.1 */ +/* + * Get Supported Features input payload + * CXL rev 3.1 section 8.2.9.6.1 Table 8-95 + */ +struct cxl_mbox_get_supp_feats_in { + __le32 count; + __le16 start_index; + u8 rsvd[2]; +} __packed; + +/* + * Get Supported Features Supported Feature Entry + * CXL rev 3.1 section 8.2.9.6.1 Table 8-97 + */ +/* Supported Feature Entry : Payload out attribute flags */ +#define CXL_FEAT_ENTRY_FLAG_CHANGABLE BIT(0) +#define CXL_FEAT_ENTRY_FLAG_DEEPEST_RESET_PERSISTENCE_MASK GENMASK(3, 1) +#define CXL_FEAT_ENTRY_FLAG_PERSIST_ACROSS_FIRMWARE_UPDATE BIT(4) +#define CXL_FEAT_ENTRY_FLAG_SUPPORT_DEFAULT_SELECTION BIT(5) +#define CXL_FEAT_ENTRY_FLAG_SUPPORT_SAVED_SELECTION BIT(6) + +enum cxl_feat_attr_value_persistence { + CXL_FEAT_ATTR_VALUE_PERSISTENCE_NONE, + 
CXL_FEAT_ATTR_VALUE_PERSISTENCE_CXL_RESET, + CXL_FEAT_ATTR_VALUE_PERSISTENCE_HOT_RESET, + CXL_FEAT_ATTR_VALUE_PERSISTENCE_WARM_RESET, + CXL_FEAT_ATTR_VALUE_PERSISTENCE_COLD_RESET, + CXL_FEAT_ATTR_VALUE_PERSISTENCE_MAX +}; + +struct cxl_mbox_supp_feat_entry { + uuid_t uuid; + __le16 index; + __le16 get_size; + __le16 set_size; + __le32 attr_flags; + u8 get_version; + u8 set_version; + __le16 set_effects; + u8 rsvd[18]; +} __packed; + +/* + * Get Supported Features output payload + * CXL rev 3.1 section 8.2.9.6.1 Table 8-96 + */ +struct cxl_mbox_get_supp_feats_out { + __le16 nr_entries; + __le16 nr_supported; + u8 rsvd[4]; + struct cxl_mbox_supp_feat_entry feat_entries[]; +} __packed; + /* Get Poison List CXL 3.0 Spec 8.2.9.8.4.1 */ struct cxl_mbox_poison_in { __le64 offset; @@ -829,6 +884,9 @@ void cxl_event_trace_record(const struct cxl_memdev *cxlmd, enum cxl_event_type event_type, const uuid_t *uuid, union cxl_event *evt); int cxl_set_timestamp(struct cxl_memdev_state *mds); +int cxl_get_supported_features(struct cxl_memdev_state *mds, + u32 count, u16 start_index, + struct cxl_mbox_get_supp_feats_out *feats_out); int cxl_poison_state_init(struct cxl_memdev_state *mds); int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, struct cxl_region *cxlr);
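[Editor's note] For reference, a minimal sketch (not part of the patch) of how a caller might walk the Get Supported Features output with the new helper. The function name find_feature() and the one-entry-per-query buffer sizing are illustrative assumptions; a fuller version of this walk appears later in the series as cxl_mem_get_supported_feature_entry() in drivers/cxl/core/memscrub.c.

/*
 * Illustrative sketch only: iterate the supported-features list one
 * entry per mailbox query and stop on a matching UUID. Assumes the
 * usual includes of drivers/cxl/core (cleanup.h, cxlmem.h).
 */
static int find_feature(struct cxl_memdev_state *mds, const uuid_t *uuid,
			struct cxl_mbox_supp_feat_entry *out)
{
	int size = sizeof(struct cxl_mbox_get_supp_feats_out) +
		   sizeof(struct cxl_mbox_supp_feat_entry);
	u16 start_index = 0;
	int i, rc;

	struct cxl_mbox_get_supp_feats_out *feats __free(kfree) =
		kzalloc(size, GFP_KERNEL);
	if (!feats)
		return -ENOMEM;

	do {
		rc = cxl_get_supported_features(mds, size, start_index, feats);
		if (rc)
			return rc;
		if (!le16_to_cpu(feats->nr_entries))
			return -EOPNOTSUPP;	/* feature not advertised */
		for (i = 0; i < le16_to_cpu(feats->nr_entries); i++) {
			if (uuid_equal(&feats->feat_entries[i].uuid, uuid)) {
				*out = feats->feat_entries[i];
				return 0;
			}
		}
		start_index += le16_to_cpu(feats->nr_entries);
	} while (true);
}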
From patchwork Fri Feb 23 14:37:13 2024
X-Patchwork-Submitter: Shiju Jose
X-Patchwork-Id: 205423
Subject: [RFC
PATCH v7 02/12] cxl/mbox: Add GET_FEATURE mailbox command Date: Fri, 23 Feb 2024 22:37:13 +0800 Message-ID: <20240223143723.1574-3-shiju.jose@huawei.com> X-Mailer: git-send-email 2.35.1.windows.2 In-Reply-To: <20240223143723.1574-1-shiju.jose@huawei.com> References: <20240223143723.1574-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500003.china.huawei.com (7.191.162.67) To lhrpeml500006.china.huawei.com (7.191.161.198) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1791700889695204370 X-GMAIL-MSGID: 1791700889695204370 From: Shiju Jose Add support for GET_FEATURE mailbox command. CXL spec 3.1 section 8.2.9.6 describes optional device specific features. The settings of a feature can be retrieved using Get Feature command. Signed-off-by: Shiju Jose --- drivers/cxl/core/mbox.c | 49 +++++++++++++++++++++++++++++++++++++++++ drivers/cxl/cxlmem.h | 25 +++++++++++++++++++++ 2 files changed, 74 insertions(+) diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index 79cc7fd433aa..c078e62ea194 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -1317,6 +1317,55 @@ int cxl_get_supported_features(struct cxl_memdev_state *mds, } EXPORT_SYMBOL_NS_GPL(cxl_get_supported_features, CXL); +size_t cxl_get_feature(struct cxl_memdev_state *mds, + const uuid_t feat_uuid, void *feat_out, + size_t feat_out_size, + size_t feat_out_min_size, + enum cxl_get_feat_selection selection) +{ + struct cxl_dev_state *cxlds = &mds->cxlds; + struct cxl_mbox_get_feat_in pi; + struct cxl_mbox_cmd mbox_cmd; + size_t data_rcvd_size = 0; + size_t data_to_rd_size; + int rc; + + if (feat_out_size < feat_out_min_size) { + dev_err(cxlds->dev, + "%s: feature out buffer size(%lu) is not big enough\n", + __func__, feat_out_size); + return 0; + } + + pi.uuid = feat_uuid; + pi.selection = selection; + do { + if ((feat_out_min_size - data_rcvd_size) <= mds->payload_size) + data_to_rd_size = feat_out_min_size - data_rcvd_size; + else + data_to_rd_size = mds->payload_size; + + pi.offset = cpu_to_le16(data_rcvd_size); + pi.count = cpu_to_le16(data_to_rd_size); + + mbox_cmd = (struct cxl_mbox_cmd) { + .opcode = CXL_MBOX_OP_GET_FEATURE, + .size_in = sizeof(pi), + .payload_in = &pi, + .size_out = data_to_rd_size, + .payload_out = feat_out + data_rcvd_size, + .min_out = data_to_rd_size, + }; + rc = cxl_internal_send_cmd(mds, &mbox_cmd); + if (rc < 0 || mbox_cmd.size_out == 0) + return 0; + data_rcvd_size += mbox_cmd.size_out; + } while (data_rcvd_size < feat_out_min_size); + + return data_rcvd_size; +} +EXPORT_SYMBOL_NS_GPL(cxl_get_feature, CXL); + int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, struct cxl_region *cxlr) { diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index dd66523cd96a..bcfefff062a6 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -530,6 +530,7 @@ enum cxl_opcode { CXL_MBOX_OP_GET_SUPPORTED_LOGS = 0x0400, CXL_MBOX_OP_GET_LOG = 0x0401, CXL_MBOX_OP_GET_SUPPORTED_FEATURES = 0x0500, + CXL_MBOX_OP_GET_FEATURE = 0x0501, CXL_MBOX_OP_IDENTIFY = 0x4000, CXL_MBOX_OP_GET_PARTITION_INFO = 0x4100, CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101, @@ -753,6 +754,25 @@ struct cxl_mbox_get_supp_feats_out { struct cxl_mbox_supp_feat_entry feat_entries[]; } __packed; +/* Get Feature CXL 3.1 Spec 8.2.9.6.2 */ +/* + * Get Feature input payload + * CXL rev 3.1 section 8.2.9.6.2 Table 8-99 + */ +enum cxl_get_feat_selection { + 
CXL_GET_FEAT_SEL_CURRENT_VALUE, + CXL_GET_FEAT_SEL_DEFAULT_VALUE, + CXL_GET_FEAT_SEL_SAVED_VALUE, + CXL_GET_FEAT_SEL_MAX +}; + +struct cxl_mbox_get_feat_in { + uuid_t uuid; + __le16 offset; + __le16 count; + u8 selection; +} __packed; + /* Get Poison List CXL 3.0 Spec 8.2.9.8.4.1 */ struct cxl_mbox_poison_in { __le64 offset; @@ -887,6 +907,11 @@ int cxl_set_timestamp(struct cxl_memdev_state *mds); int cxl_get_supported_features(struct cxl_memdev_state *mds, u32 count, u16 start_index, struct cxl_mbox_get_supp_feats_out *feats_out); +size_t cxl_get_feature(struct cxl_memdev_state *mds, + const uuid_t feat_uuid, void *feat_out, + size_t feat_out_size, + size_t feat_out_min_size, + enum cxl_get_feat_selection selection); int cxl_poison_state_init(struct cxl_memdev_state *mds); int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, struct cxl_region *cxlr);
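[Editor's note] A minimal usage sketch (not part of the patch) for the new call, assuming a feature whose current value fits in a single fixed-size read; read_feature_current() and its arguments are illustrative, and patch 4 uses the same pattern for the patrol scrub attributes.

/*
 * Illustrative sketch only: fetch the current value of a feature into a
 * caller-provided buffer. cxl_get_feature() returns the number of bytes
 * received, or 0 on any failure.
 */
static int read_feature_current(struct cxl_memdev_state *mds,
				const uuid_t *uuid, void *buf, size_t len)
{
	size_t received;

	received = cxl_get_feature(mds, *uuid, buf, len, len,
				   CXL_GET_FEAT_SEL_CURRENT_VALUE);
	if (!received)
		return -EIO;

	return 0;
}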
From patchwork Fri Feb 23 14:37:14 2024
X-Patchwork-Submitter: Shiju Jose
X-Patchwork-Id: 205425
Subject: [RFC PATCH v7 03/12] cxl/mbox: Add SET_FEATURE mailbox command
Date: Fri, 23 Feb 2024 22:37:14 +0800
Message-ID:
<20240223143723.1574-4-shiju.jose@huawei.com> X-Mailer: git-send-email 2.35.1.windows.2 In-Reply-To: <20240223143723.1574-1-shiju.jose@huawei.com> References: <20240223143723.1574-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500003.china.huawei.com (7.191.162.67) To lhrpeml500006.china.huawei.com (7.191.161.198) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1791700912508730880 X-GMAIL-MSGID: 1791700912508730880 From: Shiju Jose Add support for SET_FEATURE mailbox command. CXL spec 3.1 section 8.2.9.6 describes optional device specific features. CXL devices supports features with changeable attributes. The settings of a feature can be optionally modified using Set Feature command. Signed-off-by: Shiju Jose --- drivers/cxl/core/mbox.c | 67 +++++++++++++++++++++++++++++++++++++++++ drivers/cxl/cxlmem.h | 30 ++++++++++++++++++ 2 files changed, 97 insertions(+) diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index c078e62ea194..d1660bd20bdb 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -1366,6 +1366,73 @@ size_t cxl_get_feature(struct cxl_memdev_state *mds, } EXPORT_SYMBOL_NS_GPL(cxl_get_feature, CXL); +int cxl_set_feature(struct cxl_memdev_state *mds, + const uuid_t feat_uuid, u8 feat_version, + void *feat_data, size_t feat_data_size, + u8 feat_flag) +{ + struct cxl_memdev_set_feat_pi { + struct cxl_mbox_set_feat_hdr hdr; + u8 feat_data[]; + } __packed; + size_t data_in_size, data_sent_size = 0; + struct cxl_mbox_cmd mbox_cmd; + size_t hdr_size; + int rc = 0; + + struct cxl_memdev_set_feat_pi *pi __free(kfree) = + kmalloc(mds->payload_size, GFP_KERNEL); + pi->hdr.uuid = feat_uuid; + pi->hdr.version = feat_version; + feat_flag &= ~CXL_SET_FEAT_FLAG_DATA_TRANSFER_MASK; + hdr_size = sizeof(pi->hdr); + /* + * Check minimum mbox payload size is available for + * the feature data transfer. 
+ */ + if (hdr_size + 10 > mds->payload_size) + return -ENOMEM; + + if ((hdr_size + feat_data_size) <= mds->payload_size) { + pi->hdr.flags = cpu_to_le32(feat_flag | + CXL_SET_FEAT_FLAG_FULL_DATA_TRANSFER); + data_in_size = feat_data_size; + } else { + pi->hdr.flags = cpu_to_le32(feat_flag | + CXL_SET_FEAT_FLAG_INITIATE_DATA_TRANSFER); + data_in_size = mds->payload_size - hdr_size; + } + + do { + pi->hdr.offset = cpu_to_le16(data_sent_size); + memcpy(pi->feat_data, feat_data + data_sent_size, data_in_size); + mbox_cmd = (struct cxl_mbox_cmd) { + .opcode = CXL_MBOX_OP_SET_FEATURE, + .size_in = hdr_size + data_in_size, + .payload_in = pi, + }; + rc = cxl_internal_send_cmd(mds, &mbox_cmd); + if (rc < 0) + return rc; + + data_sent_size += data_in_size; + if (data_sent_size >= feat_data_size) + return 0; + + if ((feat_data_size - data_sent_size) <= (mds->payload_size - hdr_size)) { + data_in_size = feat_data_size - data_sent_size; + pi->hdr.flags = cpu_to_le32(feat_flag | + CXL_SET_FEAT_FLAG_FINISH_DATA_TRANSFER); + } else { + pi->hdr.flags = cpu_to_le32(feat_flag | + CXL_SET_FEAT_FLAG_CONTINUE_DATA_TRANSFER); + } + } while (true); + + return rc; +} +EXPORT_SYMBOL_NS_GPL(cxl_set_feature, CXL); + int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, struct cxl_region *cxlr) { diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index bcfefff062a6..a8d4104afa53 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -531,6 +531,7 @@ enum cxl_opcode { CXL_MBOX_OP_GET_LOG = 0x0401, CXL_MBOX_OP_GET_SUPPORTED_FEATURES = 0x0500, CXL_MBOX_OP_GET_FEATURE = 0x0501, + CXL_MBOX_OP_SET_FEATURE = 0x0502, CXL_MBOX_OP_IDENTIFY = 0x4000, CXL_MBOX_OP_GET_PARTITION_INFO = 0x4100, CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101, @@ -773,6 +774,31 @@ struct cxl_mbox_get_feat_in { u8 selection; } __packed; +/* Set Feature CXL 3.1 Spec 8.2.9.6.3 */ +/* + * Set Feature input payload + * CXL rev 3.1 section 8.2.9.6.3 Table 8-101 + */ +/* Set Feature : Payload in flags */ +#define CXL_SET_FEAT_FLAG_DATA_TRANSFER_MASK GENMASK(2, 0) +enum cxl_set_feat_flag_data_transfer { + CXL_SET_FEAT_FLAG_FULL_DATA_TRANSFER, + CXL_SET_FEAT_FLAG_INITIATE_DATA_TRANSFER, + CXL_SET_FEAT_FLAG_CONTINUE_DATA_TRANSFER, + CXL_SET_FEAT_FLAG_FINISH_DATA_TRANSFER, + CXL_SET_FEAT_FLAG_ABORT_DATA_TRANSFER, + CXL_SET_FEAT_FLAG_DATA_TRANSFER_MAX +}; +#define CXL_SET_FEAT_FLAG_DATA_SAVED_ACROSS_RESET BIT(3) + +struct cxl_mbox_set_feat_hdr { + uuid_t uuid; + __le32 flags; + __le16 offset; + u8 version; + u8 rsvd[9]; +} __packed; + /* Get Poison List CXL 3.0 Spec 8.2.9.8.4.1 */ struct cxl_mbox_poison_in { __le64 offset; @@ -912,6 +938,10 @@ size_t cxl_get_feature(struct cxl_memdev_state *mds, size_t feat_out_size, size_t feat_out_min_size, enum cxl_get_feat_selection selection); +int cxl_set_feature(struct cxl_memdev_state *mds, + const uuid_t feat_uuid, u8 feat_version, + void *feat_data, size_t feat_data_size, + u8 feat_flag); int cxl_poison_state_init(struct cxl_memdev_state *mds); int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, struct cxl_region *cxlr); From patchwork Fri Feb 23 14:37:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 205427 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:a81b:b0:108:e6aa:91d0 with SMTP id bq27csp626178dyb; Fri, 23 Feb 2024 06:40:15 -0800 (PST) X-Forwarded-Encrypted: i=3; 
Subject: [RFC PATCH v7 04/12] cxl/memscrub: Add CXL device patrol scrub control feature
Date: Fri, 23
Feb 2024 22:37:15 +0800 Message-ID: <20240223143723.1574-5-shiju.jose@huawei.com> X-Mailer: git-send-email 2.35.1.windows.2 In-Reply-To: <20240223143723.1574-1-shiju.jose@huawei.com> References: <20240223143723.1574-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500003.china.huawei.com (7.191.162.67) To lhrpeml500006.china.huawei.com (7.191.161.198) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1791700988455542380 X-GMAIL-MSGID: 1791700988455542380 From: Shiju Jose CXL spec 3.1 section 8.2.9.9.11.1 describes the device patrol scrub control feature. The device patrol scrub proactively locates and makes corrections to errors in regular cycle. The patrol scrub control allows the request to configure patrol scrub input configurations. The patrol scrub control allows the requester to specify the number of hours for which the patrol scrub cycles must be completed, provided that the requested number is not less than the minimum number of hours for the patrol scrub cycle that the device is capable of. In addition, the patrol scrub controls allow the host to disable and enable the feature in case disabling of the feature is needed for other purposes such as performance-aware operations which require the background operations to be turned off. Signed-off-by: Shiju Jose --- drivers/cxl/Kconfig | 15 +++ drivers/cxl/core/Makefile | 1 + drivers/cxl/core/memscrub.c | 248 ++++++++++++++++++++++++++++++++++++ drivers/cxl/cxlmem.h | 8 ++ drivers/cxl/pci.c | 4 + 5 files changed, 276 insertions(+) create mode 100644 drivers/cxl/core/memscrub.c diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig index 67998dbd1d46..e61c69fa7bf5 100644 --- a/drivers/cxl/Kconfig +++ b/drivers/cxl/Kconfig @@ -157,4 +157,19 @@ config CXL_PMU monitoring units and provide standard perf based interfaces. If unsure say 'm'. + +config CXL_SCRUB + bool "CXL: Memory scrub feature" + depends on CXL_PCI + depends on CXL_MEM + help + The CXL memory scrub control is an optional feature allows host to + control the scrub configurations of CXL Type 3 devices, which + support patrol scrub and/or DDR5 ECS(Error Check Scrub). + + Say 'y/n' to enable/disable the CXL memory scrub driver that will + attach to CXL.mem devices for memory scrub control feature. See + sections 8.2.9.9.11.1 and 8.2.9.9.11.2 in the CXL 3.1 specification + for a detailed description of CXL memory scrub control features. + endif diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile index 9259bcc6773c..e0fc814c3983 100644 --- a/drivers/cxl/core/Makefile +++ b/drivers/cxl/core/Makefile @@ -16,3 +16,4 @@ cxl_core-y += pmu.o cxl_core-y += cdat.o cxl_core-$(CONFIG_TRACING) += trace.o cxl_core-$(CONFIG_CXL_REGION) += region.o +cxl_core-$(CONFIG_CXL_SCRUB) += memscrub.o diff --git a/drivers/cxl/core/memscrub.c b/drivers/cxl/core/memscrub.c new file mode 100644 index 000000000000..2079498719fe --- /dev/null +++ b/drivers/cxl/core/memscrub.c @@ -0,0 +1,248 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * CXL memory scrub driver. + * + * Copyright (c) 2024 HiSilicon Limited. + * + * - Provides functions to configure patrol scrub feature of the + * CXL memory devices. 
+ */ + +#define pr_fmt(fmt) "CXL_MEM_SCRUB: " fmt + +#include + +/* CXL memory scrub feature common definitions */ +#define CXL_SCRUB_MAX_ATTR_RANGE_LENGTH 128 + +static int cxl_mem_get_supported_feature_entry(struct cxl_memdev *cxlmd, const uuid_t *feat_uuid, + struct cxl_mbox_supp_feat_entry *feat_entry_out) +{ + struct cxl_mbox_supp_feat_entry *feat_entry; + struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); + int feat_index, feats_out_size; + int nentries, count; + int ret; + + feat_index = 0; + feats_out_size = sizeof(struct cxl_mbox_get_supp_feats_out) + + sizeof(struct cxl_mbox_supp_feat_entry); + struct cxl_mbox_get_supp_feats_out *feats_out __free(kfree) = + kmalloc(feats_out_size, GFP_KERNEL); + if (!feats_out) + return -ENOMEM; + + do { + memset(feats_out, 0, feats_out_size); + ret = cxl_get_supported_features(mds, feats_out_size, + feat_index, feats_out); + if (ret) + return ret; + + nentries = feats_out->nr_entries; + if (!nentries) + return -EOPNOTSUPP; + + /* Check CXL memdev supports the feature */ + feat_entry = feats_out->feat_entries; + for (count = 0; count < nentries; count++, feat_entry++) { + if (uuid_equal(&feat_entry->uuid, feat_uuid)) { + memcpy(feat_entry_out, feat_entry, + sizeof(*feat_entry_out)); + return 0; + } + } + feat_index += nentries; + } while (true); + + return -EOPNOTSUPP; +} + +/* CXL memory patrol scrub control definitions */ +#define CXL_MEMDEV_PS_GET_FEAT_VERSION 0x01 +#define CXL_MEMDEV_PS_SET_FEAT_VERSION 0x01 + +static const uuid_t cxl_patrol_scrub_uuid = + UUID_INIT(0x96dad7d6, 0xfde8, 0x482b, 0xa7, 0x33, 0x75, 0x77, 0x4e, \ + 0x06, 0xdb, 0x8a); + +/* CXL memory patrol scrub control functions */ +struct cxl_patrol_scrub_context { + struct device *dev; + u16 get_feat_size; + u16 set_feat_size; + bool scrub_cycle_changeable; +}; + +/** + * struct cxl_memdev_ps_params - CXL memory patrol scrub parameter data structure. + * @enable: [IN & OUT] enable(1)/disable(0) patrol scrub. + * @scrub_cycle_changeable: [OUT] scrub cycle attribute of patrol scrub is changeable. + * @rate: [IN] Requested patrol scrub cycle in hours. + * [OUT] Current patrol scrub cycle in hours. + * @min_rate:[OUT] minimum patrol scrub cycle, in hours, supported. + * @rate_avail:[OUT] Supported patrol scrub cycle in hours. 
+ */ +struct cxl_memdev_ps_params { + bool enable; + bool scrub_cycle_changeable; + u16 rate; + u16 min_rate; + char rate_avail[CXL_SCRUB_MAX_ATTR_RANGE_LENGTH]; +}; + +enum { + CXL_MEMDEV_PS_PARAM_ENABLE, + CXL_MEMDEV_PS_PARAM_RATE, +}; + +#define CXL_MEMDEV_PS_SCRUB_CYCLE_CHANGE_CAP_MASK BIT(0) +#define CXL_MEMDEV_PS_SCRUB_CYCLE_REALTIME_REPORT_CAP_MASK BIT(1) +#define CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK GENMASK(7, 0) +#define CXL_MEMDEV_PS_MIN_SCRUB_CYCLE_MASK GENMASK(15, 8) +#define CXL_MEMDEV_PS_FLAG_ENABLED_MASK BIT(0) + +struct cxl_memdev_ps_rd_attrs { + u8 scrub_cycle_cap; + __le16 scrub_cycle; + u8 scrub_flags; +} __packed; + +struct cxl_memdev_ps_wr_attrs { + u8 scrub_cycle_hr; + u8 scrub_flags; +} __packed; + +static int cxl_mem_ps_get_attrs(struct device *dev, + struct cxl_memdev_ps_params *params) +{ + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); + size_t rd_data_size = sizeof(struct cxl_memdev_ps_rd_attrs); + size_t data_size; + + if (!mds) + return -EFAULT; + + struct cxl_memdev_ps_rd_attrs *rd_attrs __free(kfree) = + kmalloc(rd_data_size, GFP_KERNEL); + if (!rd_attrs) + return -ENOMEM; + + params->scrub_cycle_changeable = 0; + params->enable = 0; + params->rate = 0; + params->min_rate = 0; + data_size = cxl_get_feature(mds, cxl_patrol_scrub_uuid, rd_attrs, + rd_data_size, rd_data_size, + CXL_GET_FEAT_SEL_CURRENT_VALUE); + if (!data_size) { + snprintf(params->rate_avail, CXL_SCRUB_MAX_ATTR_RANGE_LENGTH, + "Unavailable"); + return -EIO; + } + params->scrub_cycle_changeable = FIELD_GET(CXL_MEMDEV_PS_SCRUB_CYCLE_CHANGE_CAP_MASK, + rd_attrs->scrub_cycle_cap); + params->enable = FIELD_GET(CXL_MEMDEV_PS_FLAG_ENABLED_MASK, + rd_attrs->scrub_flags); + params->rate = FIELD_GET(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK, + rd_attrs->scrub_cycle); + params->min_rate = FIELD_GET(CXL_MEMDEV_PS_MIN_SCRUB_CYCLE_MASK, + rd_attrs->scrub_cycle); + snprintf(params->rate_avail, CXL_SCRUB_MAX_ATTR_RANGE_LENGTH, + "Minimum scrub cycle = %d hour", params->min_rate); + + return 0; +} + +static int __maybe_unused +cxl_mem_ps_set_attrs(struct device *dev, struct cxl_memdev_ps_params *params, + u8 param_type) +{ + struct cxl_memdev_ps_wr_attrs wr_attrs; + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); + struct cxl_memdev_ps_params rd_params; + int ret; + + if (!mds) + return -EFAULT; + + ret = cxl_mem_ps_get_attrs(dev, &rd_params); + if (ret) { + dev_err(dev, "Get cxlmemdev patrol scrub params failed ret=%d\n", + ret); + return ret; + } + + switch (param_type) { + case CXL_MEMDEV_PS_PARAM_ENABLE: + wr_attrs.scrub_flags = FIELD_PREP(CXL_MEMDEV_PS_FLAG_ENABLED_MASK, + params->enable); + wr_attrs.scrub_cycle_hr = FIELD_PREP(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK, + rd_params.rate); + break; + case CXL_MEMDEV_PS_PARAM_RATE: + if (params->rate < rd_params.min_rate) { + dev_err(dev, "Invalid CXL patrol scrub cycle(%d) to set\n", + params->rate); + dev_err(dev, "Minimum supported CXL patrol scrub cycle in hour %d\n", + params->min_rate); + return -EINVAL; + } + wr_attrs.scrub_cycle_hr = FIELD_PREP(CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_MASK, + params->rate); + wr_attrs.scrub_flags = FIELD_PREP(CXL_MEMDEV_PS_FLAG_ENABLED_MASK, + rd_params.enable); + break; + default: + dev_err(dev, "Invalid CXL patrol scrub parameter to set\n"); + return -EINVAL; + } + + ret = cxl_set_feature(mds, cxl_patrol_scrub_uuid, 
CXL_MEMDEV_PS_SET_FEAT_VERSION, + &wr_attrs, sizeof(wr_attrs), + CXL_SET_FEAT_FLAG_DATA_SAVED_ACROSS_RESET); + if (ret) { + dev_err(dev, "CXL patrol scrub set feature failed ret=%d\n", + ret); + return ret; + } + + return 0; +} + +int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd) +{ + struct cxl_patrol_scrub_context *cxl_ps_ctx; + struct cxl_mbox_supp_feat_entry feat_entry; + struct cxl_memdev_ps_params params; + int ret; + + ret = cxl_mem_get_supported_feature_entry(cxlmd, &cxl_patrol_scrub_uuid, + &feat_entry); + if (ret < 0) + return ret; + + if (!(feat_entry.attr_flags & CXL_FEAT_ENTRY_FLAG_CHANGABLE)) + return -EOPNOTSUPP; + + ret = cxl_mem_ps_get_attrs(&cxlmd->dev, ¶ms); + if (ret) + return dev_err_probe(&cxlmd->dev, ret, + "Get CXL patrol scrub params failed\n"); + + cxl_ps_ctx = devm_kzalloc(&cxlmd->dev, sizeof(*cxl_ps_ctx), GFP_KERNEL); + if (!cxl_ps_ctx) + return -ENOMEM; + + cxl_ps_ctx->get_feat_size = feat_entry.get_size; + cxl_ps_ctx->set_feat_size = feat_entry.set_size; + cxl_ps_ctx->scrub_cycle_changeable = params.scrub_cycle_changeable; + + return 0; +} +EXPORT_SYMBOL_NS_GPL(cxl_mem_patrol_scrub_init, CXL); diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index a8d4104afa53..e6a709a0e168 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -949,6 +949,14 @@ int cxl_trigger_poison_list(struct cxl_memdev *cxlmd); int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa); int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa); +/* cxl memory scrub functions */ +#ifdef CONFIG_CXL_SCRUB +int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd); +#else +static inline int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd) +{ return -EOPNOTSUPP; } +#endif + #ifdef CONFIG_CXL_SUSPEND void cxl_mem_active_inc(void); void cxl_mem_active_dec(void); diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index 233e7c42c161..371c3abcf2fe 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -886,6 +886,10 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) return rc; + rc = cxl_mem_patrol_scrub_init(cxlmd); + if (rc) + dev_dbg(&pdev->dev, "CXL patrol scrub init failed\n"); + rc = devm_cxl_sanitize_setup_notifier(&pdev->dev, cxlmd); if (rc) return rc; From patchwork Fri Feb 23 14:37:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 205426 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:a81b:b0:108:e6aa:91d0 with SMTP id bq27csp625967dyb; Fri, 23 Feb 2024 06:39:53 -0800 (PST) X-Forwarded-Encrypted: i=3; AJvYcCUSHK9hIiHRgj7/LY/c5p8kUOSetvRYhTjLrHZ+FaUznAFQUjv/CYikmJKZsrA5K9bfRVUWbOxahMRzNC+M6U8L4l4qIQ== X-Google-Smtp-Source: AGHT+IGuzaWTDaBhuhBRH8/jBWd5USsYcT/4AHzW1002dxor7GJi92T9ZWtPq4ksHHexGZ/HBkIk X-Received: by 2002:a50:ef01:0:b0:564:6bf1:e804 with SMTP id m1-20020a50ef01000000b005646bf1e804mr5293eds.42.1708699193631; Fri, 23 Feb 2024 06:39:53 -0800 (PST) ARC-Seal: i=2; a=rsa-sha256; t=1708699193; cv=pass; d=google.com; s=arc-20160816; b=J0TF80yOxdsK8dovsVwg7s7faW8ot34B0AltnqRvD9cwJYmxQKnE3aGMSnRT2EN5rN IIkmdliJWkoie6Z1R9Pbg3BRghxeHlFAcqX1vD4x8MbBj/DFO9a1xGb+y7e2XCnr3tGA 9DzPJfXXz4FuWTDbnA4YqA4TgMB4DAPLRJ9XdZrJ9Hl4ZJek+ctRFepXuwl4env+l7S0 NRtdMgKdXOFO8MxKHGI8/9Nx3aAlP8GQv3ar8v0RbnHcAz8JpHr1YU64jJkqpkN9EvKx 1UMU/Rd49L5lI9kbcSA1+YPHAbqi19pSj3JiR1Ki8OfTyJf/j6TcQiKKLaNZtSbuWOPZ vi+w== ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; 
header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=185.176.79.56 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.31]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4ThCDF0mVBz6JB0k; Fri, 23 Feb 2024 22:33:17 +0800 (CST) Received: from lhrpeml500006.china.huawei.com (unknown [7.191.161.198]) by mail.maildlp.com (Postfix) with ESMTPS id B8D831410B3; Fri, 23 Feb 2024 22:37:42 +0800 (CST) Received: from SecurePC30232.china.huawei.com (10.122.247.234) by lhrpeml500006.china.huawei.com (7.191.161.198) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.1.2507.35; Fri, 23 Feb 2024 14:37:41 +0000 From: To: , , , , , , , , , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [RFC PATCH v7 05/12] cxl/memscrub: Add CXL device ECS control feature Date: Fri, 23 Feb 2024 22:37:16 +0800 Message-ID: <20240223143723.1574-6-shiju.jose@huawei.com> X-Mailer: git-send-email 2.35.1.windows.2 In-Reply-To: <20240223143723.1574-1-shiju.jose@huawei.com> References: <20240223143723.1574-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500003.china.huawei.com (7.191.162.67) To lhrpeml500006.china.huawei.com (7.191.161.198) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1791700965552465014 X-GMAIL-MSGID: 1791700965552465014 From: Shiju Jose CXL spec 3.1 section 8.2.9.9.11.2 describes the DDR5 Error Check Scrub (ECS) control feature. The Error Check Scrub (ECS) is a feature defined in JEDEC DDR5 SDRAM Specification (JESD79-5) and allows the DRAM to internally read, correct single-bit errors, and write back corrected data bits to the DRAM array while providing transparency to error counts. The ECS control feature allows the request to configure ECS input configurations during system boot or at run-time. The ECS control allows the requester to change the log entry type, the ECS threshold count provided that the request is within the definition specified in DDR5 mode registers, change mode between codeword mode and row count mode, and reset the ECS counter. Open Question: Is cxl_mem_ecs_init() invoked in the right function in cxl/core/region.c? Signed-off-by: Shiju Jose --- drivers/cxl/core/memscrub.c | 272 +++++++++++++++++++++++++++++++++++- drivers/cxl/core/region.c | 3 + drivers/cxl/cxlmem.h | 3 + 3 files changed, 276 insertions(+), 2 deletions(-) diff --git a/drivers/cxl/core/memscrub.c b/drivers/cxl/core/memscrub.c index 2079498719fe..61a77fabca13 100644 --- a/drivers/cxl/core/memscrub.c +++ b/drivers/cxl/core/memscrub.c @@ -4,8 +4,8 @@ * * Copyright (c) 2024 HiSilicon Limited. * - * - Provides functions to configure patrol scrub feature of the - * CXL memory devices. + * - Provides functions to configure patrol scrub and DDR5 ECS features + * of the CXL memory devices. 
*/ #define pr_fmt(fmt) "CXL_MEM_SCRUB: " fmt @@ -246,3 +246,271 @@ int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd) return 0; } EXPORT_SYMBOL_NS_GPL(cxl_mem_patrol_scrub_init, CXL); + +/* CXL DDR5 ECS control definitions */ +#define CXL_MEMDEV_ECS_GET_FEAT_VERSION 0x01 +#define CXL_MEMDEV_ECS_SET_FEAT_VERSION 0x01 + +static const uuid_t cxl_ecs_uuid = + UUID_INIT(0xe5b13f22, 0x2328, 0x4a14, 0xb8, 0xba, 0xb9, 0x69, 0x1e, \ + 0x89, 0x33, 0x86); + +struct cxl_ecs_context { + struct device *dev; + u16 nregions; + int region_id; + u16 get_feat_size; + u16 set_feat_size; +}; + +/** + * struct cxl_memdev_ecs_params - CXL memory DDR5 ECS parameter data structure. + * @log_entry_type: ECS log entry type, per DRAM or per memory media FRU. + * @threshold: ECS threshold count per GB of memory cells. + * @mode: codeword/row count mode + * 0 : ECS counts rows with errors + * 1 : ECS counts codeword with errors + * @reset_counter: [IN] reset ECC counter to default value. + */ +struct cxl_memdev_ecs_params { + u8 log_entry_type; + u16 threshold; + u8 mode; + bool reset_counter; +}; + +enum { + CXL_MEMDEV_ECS_PARAM_LOG_ENTRY_TYPE, + CXL_MEMDEV_ECS_PARAM_THRESHOLD, + CXL_MEMDEV_ECS_PARAM_MODE, + CXL_MEMDEV_ECS_PARAM_RESET_COUNTER, +}; + +#define CXL_MEMDEV_ECS_LOG_ENTRY_TYPE_MASK GENMASK(1, 0) +#define CXL_MEMDEV_ECS_REALTIME_REPORT_CAP_MASK BIT(0) +#define CXL_MEMDEV_ECS_THRESHOLD_COUNT_MASK GENMASK(2, 0) +#define CXL_MEMDEV_ECS_MODE_MASK BIT(3) +#define CXL_MEMDEV_ECS_RESET_COUNTER_MASK BIT(4) + +static const u16 ecs_supp_threshold[] = { 0, 0, 0, 256, 1024, 4096 }; + +enum { + ECS_LOG_ENTRY_TYPE_DRAM = 0x0, + ECS_LOG_ENTRY_TYPE_MEM_MEDIA_FRU = 0x1, +}; + +enum { + ECS_THRESHOLD_256 = 3, + ECS_THRESHOLD_1024 = 4, + ECS_THRESHOLD_4096 = 5, +}; + +enum { + ECS_MODE_COUNTS_ROWS = 0, + ECS_MODE_COUNTS_CODEWORDS = 1, +}; + +struct cxl_memdev_ecs_rd_attrs { + u8 ecs_log_cap; + u8 ecs_cap; + __le16 ecs_config; + u8 ecs_flags; +} __packed; + +struct cxl_memdev_ecs_wr_attrs { + u8 ecs_log_cap; + __le16 ecs_config; +} __packed; + +/* CXL DDR5 ECS control functions */ +static int __maybe_unused +cxl_mem_ecs_get_attrs(struct device *scrub_dev, int fru_id, + struct cxl_memdev_ecs_params *params) +{ + struct cxl_memdev *cxlmd = to_cxl_memdev(scrub_dev->parent); + struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); + struct cxl_ecs_context *cxl_ecs_ctx; + size_t rd_data_size; + u8 threshold_index; + size_t data_size; + + if (!mds) + return -EFAULT; + cxl_ecs_ctx = dev_get_drvdata(scrub_dev); + rd_data_size = cxl_ecs_ctx->get_feat_size; + + struct cxl_memdev_ecs_rd_attrs *rd_attrs __free(kfree) = + kmalloc(rd_data_size, GFP_KERNEL); + if (!rd_attrs) + return -ENOMEM; + + params->log_entry_type = 0; + params->threshold = 0; + params->mode = 0; + data_size = cxl_get_feature(mds, cxl_ecs_uuid, rd_attrs, + rd_data_size, rd_data_size, + CXL_GET_FEAT_SEL_CURRENT_VALUE); + if (!data_size) + return -EIO; + + params->log_entry_type = FIELD_GET(CXL_MEMDEV_ECS_LOG_ENTRY_TYPE_MASK, + rd_attrs[fru_id].ecs_log_cap); + threshold_index = FIELD_GET(CXL_MEMDEV_ECS_THRESHOLD_COUNT_MASK, + rd_attrs[fru_id].ecs_config); + params->threshold = ecs_supp_threshold[threshold_index]; + params->mode = FIELD_GET(CXL_MEMDEV_ECS_MODE_MASK, + rd_attrs[fru_id].ecs_config); + + return 0; +} + +static int __maybe_unused +cxl_mem_ecs_set_attrs(struct device *scrub_dev, int fru_id, + struct cxl_memdev_ecs_params *params, u8 param_type) +{ + struct cxl_memdev *cxlmd = 
to_cxl_memdev(scrub_dev->parent); + struct cxl_dev_state *cxlds = cxlmd->cxlds; + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); + struct cxl_ecs_context *cxl_ecs_ctx; + struct device *dev = scrub_dev->parent; + size_t rd_data_size, wr_data_size; + u16 nmedia_frus, count; + size_t data_size; + int ret; + + if (!mds) + return -EFAULT; + + cxl_ecs_ctx = dev_get_drvdata(scrub_dev); + nmedia_frus = cxl_ecs_ctx->nregions; + rd_data_size = cxl_ecs_ctx->get_feat_size; + wr_data_size = cxl_ecs_ctx->set_feat_size; + struct cxl_memdev_ecs_rd_attrs *rd_attrs __free(kfree) = + kmalloc(rd_data_size, GFP_KERNEL); + if (!rd_attrs) + return -ENOMEM; + + data_size = cxl_get_feature(mds, cxl_ecs_uuid, rd_attrs, + rd_data_size, rd_data_size, + CXL_GET_FEAT_SEL_CURRENT_VALUE); + if (!data_size) + return -EIO; + struct cxl_memdev_ecs_wr_attrs *wr_attrs __free(kfree) = + kmalloc(wr_data_size, GFP_KERNEL); + if (!wr_attrs) + return -ENOMEM; + + /* Fill writable attributes from the current attributes read for all the media FRUs */ + for (count = 0; count < nmedia_frus; count++) { + wr_attrs[count].ecs_log_cap = rd_attrs[count].ecs_log_cap; + wr_attrs[count].ecs_config = rd_attrs[count].ecs_config; + } + + /* Fill attribute to be set for the media FRU */ + switch (param_type) { + case CXL_MEMDEV_ECS_PARAM_LOG_ENTRY_TYPE: + if (params->log_entry_type != ECS_LOG_ENTRY_TYPE_DRAM && + params->log_entry_type != ECS_LOG_ENTRY_TYPE_MEM_MEDIA_FRU) { + dev_err(dev, + "Invalid CXL ECS scrub log entry type(%d) to set\n", + params->log_entry_type); + dev_err(dev, + "Log Entry Type 0: per DRAM 1: per Memory Media FRU\n"); + return -EINVAL; + } + wr_attrs[fru_id].ecs_log_cap = FIELD_PREP(CXL_MEMDEV_ECS_LOG_ENTRY_TYPE_MASK, + params->log_entry_type); + break; + case CXL_MEMDEV_ECS_PARAM_THRESHOLD: + wr_attrs[fru_id].ecs_config &= ~CXL_MEMDEV_ECS_THRESHOLD_COUNT_MASK; + switch (params->threshold) { + case 256: + wr_attrs[fru_id].ecs_config |= FIELD_PREP( + CXL_MEMDEV_ECS_THRESHOLD_COUNT_MASK, + ECS_THRESHOLD_256); + break; + case 1024: + wr_attrs[fru_id].ecs_config |= FIELD_PREP( + CXL_MEMDEV_ECS_THRESHOLD_COUNT_MASK, + ECS_THRESHOLD_1024); + break; + case 4096: + wr_attrs[fru_id].ecs_config |= FIELD_PREP( + CXL_MEMDEV_ECS_THRESHOLD_COUNT_MASK, + ECS_THRESHOLD_4096); + break; + default: + dev_err(dev, + "Invalid CXL ECS scrub threshold count(%d) to set\n", + params->threshold); + dev_err(dev, + "Supported scrub threshold count: 256,1024,4096\n"); + return -EINVAL; + } + break; + case CXL_MEMDEV_ECS_PARAM_MODE: + if (params->mode != ECS_MODE_COUNTS_ROWS && + params->mode != ECS_MODE_COUNTS_CODEWORDS) { + dev_err(dev, + "Invalid CXL ECS scrub mode(%d) to set\n", + params->mode); + dev_err(dev, + "Mode 0: ECS counts rows with errors" + " 1: ECS counts codewords with errors\n"); + return -EINVAL; + } + wr_attrs[fru_id].ecs_config &= ~CXL_MEMDEV_ECS_MODE_MASK; + wr_attrs[fru_id].ecs_config |= FIELD_PREP(CXL_MEMDEV_ECS_MODE_MASK, + params->mode); + break; + case CXL_MEMDEV_ECS_PARAM_RESET_COUNTER: + wr_attrs[fru_id].ecs_config &= ~CXL_MEMDEV_ECS_RESET_COUNTER_MASK; + wr_attrs[fru_id].ecs_config |= FIELD_PREP(CXL_MEMDEV_ECS_RESET_COUNTER_MASK, + params->reset_counter); + break; + default: + dev_err(dev, "Invalid CXL ECS parameter to set\n"); + return -EINVAL; + } + ret = cxl_set_feature(mds, cxl_ecs_uuid, CXL_MEMDEV_ECS_SET_FEAT_VERSION, + wr_attrs, wr_data_size, + CXL_SET_FEAT_FLAG_DATA_SAVED_ACROSS_RESET); + if (ret) { + dev_err(dev, "CXL ECS set feature failed ret=%d\n", ret); + return ret; + } + + return 0; +} + +int 
cxl_mem_ecs_init(struct cxl_memdev *cxlmd, int region_id) +{ + struct cxl_mbox_supp_feat_entry feat_entry; + struct cxl_ecs_context *cxl_ecs_ctx; + int nr_media_frus; + int ret; + + ret = cxl_mem_get_supported_feature_entry(cxlmd, &cxl_ecs_uuid, &feat_entry); + if (ret < 0) + return ret; + + if (!(feat_entry.attr_flags & CXL_FEAT_ENTRY_FLAG_CHANGABLE)) + return -EOPNOTSUPP; + nr_media_frus = feat_entry.get_size/ + sizeof(struct cxl_memdev_ecs_rd_attrs); + if (!nr_media_frus) + return -ENODEV; + + cxl_ecs_ctx = devm_kzalloc(&cxlmd->dev, sizeof(*cxl_ecs_ctx), GFP_KERNEL); + if (!cxl_ecs_ctx) + return -ENOMEM; + + cxl_ecs_ctx->nregions = nr_media_frus; + cxl_ecs_ctx->get_feat_size = feat_entry.get_size; + cxl_ecs_ctx->set_feat_size = feat_entry.set_size; + cxl_ecs_ctx->region_id = region_id; + + return 0; +} +EXPORT_SYMBOL_NS_GPL(cxl_mem_ecs_init, CXL); diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index ce0e2d82bb2b..8b81c47801fc 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -2913,6 +2913,9 @@ int cxl_add_to_region(struct cxl_port *root, struct cxl_endpoint_decoder *cxled) dev_err(&cxlr->dev, "failed to enable, range: %pr\n", p->res); } + rc = cxl_mem_ecs_init(cxlmd, atomic_read(&cxlrd->region_id)); + if (rc) + dev_dbg(&cxlr->dev, "CXL memory ECS init failed\n"); put_device(region_dev); out: diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index e6a709a0e168..88a5c21e087e 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -952,9 +952,12 @@ int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa); /* cxl memory scrub functions */ #ifdef CONFIG_CXL_SCRUB int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd); +int cxl_mem_ecs_init(struct cxl_memdev *cxlmd, int region_id); #else static inline int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd) { return -EOPNOTSUPP; } +static inline int cxl_mem_ecs_init(struct cxl_memdev *cxlmd, int region_id) +{ return -EOPNOTSUPP; } #endif #ifdef CONFIG_CXL_SUSPEND From patchwork Fri Feb 23 14:37:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 205428 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:a81b:b0:108:e6aa:91d0 with SMTP id bq27csp626219dyb; Fri, 23 Feb 2024 06:40:21 -0800 (PST) X-Forwarded-Encrypted: i=3; AJvYcCXWC5a9B3vM6+BY2En84oVNxl5dsbdEiABkEOVwcENUH9GfNyDEEBxlTPDWKNd0yAoD9STNAusSdaPGZTaFC7jCueSsgg== X-Google-Smtp-Source: AGHT+IGFSmXB27oKFpMSnLLsUkcNqjv1CY9DvFRT2e0TtBQql7VVXe4DMRf1a5nZk0bj125YFdIT X-Received: by 2002:a05:620a:148c:b0:787:9e72:fdb9 with SMTP id w12-20020a05620a148c00b007879e72fdb9mr2363021qkj.68.1708699221165; Fri, 23 Feb 2024 06:40:21 -0800 (PST) ARC-Seal: i=2; a=rsa-sha256; t=1708699221; cv=pass; d=google.com; s=arc-20160816; b=MpXic5Nym/ihBegF43aTWyK7/gCcY5LBzzb1IV+EzC0Cf6oQT+x90bxJJhTQB/EI/Q pEasT4/aD8SG5V62I0n/pe8yrM0nPbOOXviC81mOmYSvc0uhFP0QWqdVMgIFOGVZQYAL aF/jdM2CHvhhrJBU07mw5+zc2MgFgUPrW5iXXIVEls1cOEXTVMhx0jRRw3YnNMeoDC7t V/q+3le/lhEy+o3leryF34vXHwsJRyAInyOmpfSFxyYSlnWZj0w305eos69/euWkzKB8 NqI+phMO1By4l2VcRv146tuavNQbSyU2o4p7alyPdcaooWclsa8qIFiVzERATqPOZ/k8 3OvQ== ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from; bh=+qXkq46NvRJ+J+IoAKuKTXtBJXeW6IYJykqiZLYPGlw=; fh=HSG1GuUk1qrw7ZagzQBYl1ym3Aq+DwkDC5iYsmK8j3I=; 
smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.31]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4ThCF94lsDz6K8xl; Fri, 23 Feb 2024 22:34:05 +0800 (CST) Received: from lhrpeml500006.china.huawei.com (unknown [7.191.161.198]) by mail.maildlp.com (Postfix) with ESMTPS id B8F98141B80; Fri, 23 Feb 2024 22:37:44 +0800 (CST) Received: from SecurePC30232.china.huawei.com (10.122.247.234) by lhrpeml500006.china.huawei.com (7.191.161.198) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.1.2507.35; Fri, 23 Feb 2024 14:37:43 +0000 From: To: , , , , , , , , , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [RFC PATCH v7 07/12] cxl/memscrub: Register CXL device patrol scrub with scrub subsystem driver Date: Fri, 23 Feb 2024 22:37:18 +0800 Message-ID: <20240223143723.1574-8-shiju.jose@huawei.com> X-Mailer: git-send-email 2.35.1.windows.2 In-Reply-To: <20240223143723.1574-1-shiju.jose@huawei.com> References: <20240223143723.1574-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500003.china.huawei.com (7.191.162.67) To lhrpeml500006.china.huawei.com (7.191.161.198) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1791700994430782698 X-GMAIL-MSGID: 1791700994430782698 From: Shiju Jose Register with the scrub subsystem driver to expose the sysfs attributes to the user for configuring the CXL device memory patrol scrub. Add the callback functions to support configuring the CXL memory device patrol scrub. Signed-off-by: Shiju Jose --- drivers/cxl/Kconfig | 6 ++ drivers/cxl/core/memscrub.c | 199 +++++++++++++++++++++++++++++++++++- 2 files changed, 202 insertions(+), 3 deletions(-) diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig index e61c69fa7bf5..a0fe68b83cd0 100644 --- a/drivers/cxl/Kconfig +++ b/drivers/cxl/Kconfig @@ -162,11 +162,17 @@ config CXL_SCRUB bool "CXL: Memory scrub feature" depends on CXL_PCI depends on CXL_MEM + depends on SCRUB help The CXL memory scrub control is an optional feature allows host to control the scrub configurations of CXL Type 3 devices, which support patrol scrub and/or DDR5 ECS(Error Check Scrub). + Register with the scrub configure driver to expose sysfs attributes + to the user for configuring the CXL device memory patrol and DDR5 ECS + scrubs. Provides the interface functions to support configuring the + CXL memory device patrol and ECS scrubs. + Say 'y/n' to enable/disable the CXL memory scrub driver that will attach to CXL.mem devices for memory scrub control feature. See sections 8.2.9.9.11.1 and 8.2.9.9.11.2 in the CXL 3.1 specification diff --git a/drivers/cxl/core/memscrub.c b/drivers/cxl/core/memscrub.c index 61a77fabca13..b053dcb9197e 100644 --- a/drivers/cxl/core/memscrub.c +++ b/drivers/cxl/core/memscrub.c @@ -6,14 +6,19 @@ * * - Provides functions to configure patrol scrub and DDR5 ECS features * of the CXL memory devices. + * - Registers with the scrub subsystem driver to expose the sysfs attributes + * to the user for configuring the memory patrol scrub and DDR5 ECS features. 
+ */ #define pr_fmt(fmt) "CXL_MEM_SCRUB: " fmt #include +#include /* CXL memory scrub feature common definitions */ #define CXL_SCRUB_MAX_ATTR_RANGE_LENGTH 128 +#define CXL_MEMDEV_MAX_NAME_LENGTH 128 static int cxl_mem_get_supported_feature_entry(struct cxl_memdev *cxlmd, const uuid_t *feat_uuid, struct cxl_mbox_supp_feat_entry *feat_entry_out) @@ -157,9 +162,8 @@ static int cxl_mem_ps_get_attrs(struct device *dev, return 0; } -static int __maybe_unused -cxl_mem_ps_set_attrs(struct device *dev, struct cxl_memdev_ps_params *params, - u8 param_type) +static int cxl_mem_ps_set_attrs(struct device *dev, struct cxl_memdev_ps_params *params, + u8 param_type) { struct cxl_memdev_ps_wr_attrs wr_attrs; struct cxl_memdev *cxlmd = to_cxl_memdev(dev); @@ -215,11 +219,192 @@ cxl_mem_ps_set_attrs(struct device *dev, struct cxl_memdev_ps_params *params, return 0; } +static int cxl_mem_ps_enable_read(struct device *dev, u64 *val) +{ + struct cxl_memdev_ps_params params; + int ret; + + ret = cxl_mem_ps_get_attrs(dev, ¶ms); + if (ret) { + dev_err(dev, "Get CXL patrol scrub params failed ret=%d\n", ret); + return ret; + } + *val = params.enable; + + return 0; +} + +static int cxl_mem_ps_enable_write(struct device *dev, long val) +{ + struct cxl_memdev_ps_params params; + int ret; + + params.enable = val; + ret = cxl_mem_ps_set_attrs(dev, ¶ms, CXL_MEMDEV_PS_PARAM_ENABLE); + if (ret) { + dev_err(dev, "CXL patrol scrub enable failed, enable=%d ret=%d\n", + params.enable, ret); + return ret; + } + + return 0; +} + +static int cxl_mem_ps_rate_read(struct device *dev, u64 *val) +{ + struct cxl_memdev_ps_params params; + int ret; + + ret = cxl_mem_ps_get_attrs(dev, ¶ms); + if (ret) { + dev_err(dev, "Get CXL patrol scrub params failed ret=%d\n", ret); + return ret; + } + *val = params.rate; + + return 0; +} + +static int cxl_mem_ps_rate_write(struct device *dev, long val) +{ + struct cxl_memdev_ps_params params; + int ret; + + params.rate = val; + ret = cxl_mem_ps_set_attrs(dev, ¶ms, CXL_MEMDEV_PS_PARAM_RATE); + if (ret) { + dev_err(dev, "Set CXL patrol scrub params for rate failed ret=%d\n", ret); + return ret; + } + + return 0; +} + +static int cxl_mem_ps_rate_available_read(struct device *dev, char *buf) +{ + struct cxl_memdev_ps_params params; + int ret; + + ret = cxl_mem_ps_get_attrs(dev, ¶ms); + if (ret) { + dev_err(dev, "Get CXL patrol scrub params failed ret=%d\n", ret); + return ret; + } + + sysfs_emit(buf, "%s\n", params.rate_avail); + + return 0; +} + +/** + * cxl_mem_patrol_scrub_is_visible() - Callback to return attribute visibility + * @dev: Pointer to scrub device + * @attr: Scrub attribute + * @mode: attribute's mode + * @region_id: ID of the memory region + * + * Returns: 0 on success, an error otherwise + */ +static umode_t cxl_mem_patrol_scrub_is_visible(struct device *dev, u32 attr_id, + umode_t mode, int region_id) +{ + const struct cxl_patrol_scrub_context *cxl_ps_ctx = dev_get_drvdata(dev); + + if (attr_id == scrub_rate_available || + attr_id == scrub_rate) { + if (!cxl_ps_ctx->scrub_cycle_changeable) + return 0; + } + + switch (attr_id) { + case scrub_rate_available: + case scrub_enable: + case scrub_rate: + return mode; + default: + return 0; + } +} + +/** + * cxl_mem_patrol_scrub_read() - Read callback for data attributes + * @dev: Pointer to scrub device + * @attr: Scrub attribute + * @region_id: ID of the memory region + * @val: Pointer to the returned data + * + * Returns: 0 on success, an error otherwise + */ +static int cxl_mem_patrol_scrub_read(struct device *dev, u32 attr, + int 
region_id, u64 *val) +{ + + switch (attr) { + case scrub_enable: + return cxl_mem_ps_enable_read(dev->parent, val); + case scrub_rate: + return cxl_mem_ps_rate_read(dev->parent, val); + default: + return -EOPNOTSUPP; + } +} + +/** + * cxl_mem_patrol_scrub_write() - Write callback for data attributes + * @dev: Pointer to scrub device + * @attr: Scrub attribute + * @region_id: ID of the memory region + * @val: Value to write + * + * Returns: 0 on success, an error otherwise + */ +static int cxl_mem_patrol_scrub_write(struct device *dev, u32 attr, + int region_id, u64 val) +{ + switch (attr) { + case scrub_enable: + return cxl_mem_ps_enable_write(dev->parent, val); + case scrub_rate: + return cxl_mem_ps_rate_write(dev->parent, val); + default: + return -EOPNOTSUPP; + } +} + +/** + * cxl_mem_patrol_scrub_read_strings() - Read callback for string attributes + * @dev: Pointer to scrub device + * @attr: Scrub attribute + * @region_id: ID of the memory region + * @buf: Pointer to the buffer for copying returned string + * + * Returns: 0 on success, an error otherwise + */ +static int cxl_mem_patrol_scrub_read_strings(struct device *dev, u32 attr, + int region_id, char *buf) +{ + switch (attr) { + case scrub_rate_available: + return cxl_mem_ps_rate_available_read(dev->parent, buf); + default: + return -EOPNOTSUPP; + } +} + +static const struct scrub_ops cxl_ps_scrub_ops = { + .is_visible = cxl_mem_patrol_scrub_is_visible, + .read = cxl_mem_patrol_scrub_read, + .write = cxl_mem_patrol_scrub_write, + .read_string = cxl_mem_patrol_scrub_read_strings, +}; + int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd) { + char scrub_name[CXL_MEMDEV_MAX_NAME_LENGTH]; struct cxl_patrol_scrub_context *cxl_ps_ctx; struct cxl_mbox_supp_feat_entry feat_entry; struct cxl_memdev_ps_params params; + struct device *cxl_scrub_dev; int ret; ret = cxl_mem_get_supported_feature_entry(cxlmd, &cxl_patrol_scrub_uuid, @@ -243,6 +428,14 @@ int cxl_mem_patrol_scrub_init(struct cxl_memdev *cxlmd) cxl_ps_ctx->set_feat_size = feat_entry.set_size; cxl_ps_ctx->scrub_cycle_changeable = params.scrub_cycle_changeable; + snprintf(scrub_name, sizeof(scrub_name), "%s_%s", + "cxl_patrol_scrub", dev_name(&cxlmd->dev)); + cxl_scrub_dev = devm_scrub_device_register(&cxlmd->dev, scrub_name, + cxl_ps_ctx, &cxl_ps_scrub_ops, + 0, NULL); + if (IS_ERR(cxl_scrub_dev)) + return PTR_ERR(cxl_scrub_dev); + return 0; } EXPORT_SYMBOL_NS_GPL(cxl_mem_patrol_scrub_init, CXL); From patchwork Fri Feb 23 14:37:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 205429 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:a81b:b0:108:e6aa:91d0 with SMTP id bq27csp626422dyb; Fri, 23 Feb 2024 06:40:47 -0800 (PST) X-Forwarded-Encrypted: i=3; AJvYcCUdt+Q303ilsX04TmhJsQuC+2WA+ml7Vl9zxdJsS+8IvwvkWQfbG85CwXD54/OuLYZFR5C9TtVM9Iy/FiozV9f90ZXwcA== X-Google-Smtp-Source: AGHT+IHdO9iGx+HRngfGHGq+fe61H6MmJ563l4aPWJ+Ricz3wF8D9ZvaC1XCOVlWmxsNvCDfyKpi X-Received: by 2002:a17:906:e204:b0:a3f:8925:50bb with SMTP id gf4-20020a170906e20400b00a3f892550bbmr1494333ejb.76.1708699246827; Fri, 23 Feb 2024 06:40:46 -0800 (PST) ARC-Seal: i=2; a=rsa-sha256; t=1708699246; cv=pass; d=google.com; s=arc-20160816; b=MdO0uiSzkQZM4xDmgqHn/gl03Qi3QuWjN7MC/OLZYvJveS6G/kEaFBIcoH4B6qvz9Y 0Qw7mLCdE7KHPB0JI8r6Ji0xZnu5RgOpzc59QtTSK8LYitlS6NW9szEg0VKrB6+tCSmd wRENW89V2Da/trHYODRgLlXn+Ffxc8ke9Tyc8IHFon4NL6Ff5x683X5ilOxox56MTm+k 
b=erdQyr8EtObNdY0tFZbh4BPvOgI80x5H8whhA3CzuxRcC8p2xDE+jI0Ytj+HzYcBVj4SLWBEmgGMTszK3Tq9q8pMnZ+i8voxTTzdcipf2DRO9b1ELGBiNKXPibpT3bOvQLK9M3GJ8pc6Ktf8eAlBs8dW4FTlpQwwG2I/iX04pIE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; arc=none smtp.client-ip=185.176.79.56 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Received: from mail.maildlp.com (unknown [172.18.186.231]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4ThCDc2NQZz6K5p5; Fri, 23 Feb 2024 22:33:36 +0800 (CST) Received: from lhrpeml500006.china.huawei.com (unknown [7.191.161.198]) by mail.maildlp.com (Postfix) with ESMTPS id A657C140B33; Fri, 23 Feb 2024 22:37:45 +0800 (CST) Received: from SecurePC30232.china.huawei.com (10.122.247.234) by lhrpeml500006.china.huawei.com (7.191.161.198) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.1.2507.35; Fri, 23 Feb 2024 14:37:44 +0000 From: To: , , , , , , , , , CC: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , Subject: [RFC PATCH v7 08/12] cxl/memscrub: Register CXL device ECS with scrub subsystem driver Date: Fri, 23 Feb 2024 22:37:19 +0800 Message-ID: <20240223143723.1574-9-shiju.jose@huawei.com> X-Mailer: git-send-email 2.35.1.windows.2 In-Reply-To: <20240223143723.1574-1-shiju.jose@huawei.com> References: <20240223143723.1574-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-ClientProxiedBy: lhrpeml500003.china.huawei.com (7.191.162.67) To lhrpeml500006.china.huawei.com (7.191.161.198) X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1791701021694835993 X-GMAIL-MSGID: 1791701021694835993 From: Shiju Jose Register with the scrub subsystem driver to expose the sysfs attributes to the user for configuring the CXL memory device's ECS feature. Add the static CXL ECS specific attributes to support configuring the CXL memory device ECS feature. Signed-off-by: Shiju Jose --- .../ABI/testing/sysfs-class-cxl-ecs-configure | 79 ++++++ drivers/cxl/core/memscrub.c | 251 +++++++++++++++++- 2 files changed, 327 insertions(+), 3 deletions(-) create mode 100644 Documentation/ABI/testing/sysfs-class-cxl-ecs-configure diff --git a/Documentation/ABI/testing/sysfs-class-cxl-ecs-configure b/Documentation/ABI/testing/sysfs-class-cxl-ecs-configure new file mode 100644 index 000000000000..541b150db71c --- /dev/null +++ b/Documentation/ABI/testing/sysfs-class-cxl-ecs-configure @@ -0,0 +1,79 @@ +See `Documentation/ABI/testing/sysfs-class-scrub-configure` for the +documentation of common scrub configure directory layout (/sys/class/scrub/), +including the attributes used for configuring the CXL patrol scrub. +Following are the attributes defined for configuring the CXL ECS. + +What: /sys/class/scrub/scrubX/regionN/ecs_log_entry_type +Date: February 2024 +KernelVersion: 6.8 +Contact: linux-kernel@vger.kernel.org +Description: + (RW) The log entry type of how the DDR5 ECS log is + reported. + 00b - per DRAM. + 01b - per memory media FRU. + +What: /sys/class/scrub/scrubX/regionN/ecs_log_entry_type_per_dram +Date: February 2024 +KernelVersion: 6.8 +Contact: linux-kernel@vger.kernel.org +Description: + (RO) Returns true if current log entry type of DDR5 ECS + region is per DRAM. 
+ +What: /sys/class/scrub/scrubX/regionN/ecs_log_entry_type_per_memory_media +Date: February 2024 +KernelVersion: 6.8 +Contact: linux-kernel@vger.kernel.org +Description: + (RO) Returns true if current log entry type of DDR5 ECS + region is per memory media FRU. + +What: /sys/class/scrub/scrubX/regionN/mode +Date: February 2024 +KernelVersion: 6.8 +Contact: linux-kernel@vger.kernel.org +Description: + (RW) The mode of how the DDR5 ECS counts the errors. + 0 - ECS counts rows with errors. + 1 - ECS counts codewords with errors. + +What: /sys/class/scrub/scrubX/regionN/mode_counts_rows +Date: February 2024 +KernelVersion: 6.8 +Contact: linux-kernel@vger.kernel.org +Description: + (RO) Returns true if current mode of DDR5 ECS region + is counts rows with errors. + +What: /sys/class/scrub/scrubX/regionN/mode_counts_codewords +Date: February 2024 +KernelVersion: 6.8 +Contact: linux-kernel@vger.kernel.org +Description: + (RO) Returns true if current mode of DDR5 ECS region + is counts codewords with errors. + +What: /sys/class/scrub/scrubX/regionN/reset_counter +Date: February 2024 +KernelVersion: 6.8 +Contact: linux-kernel@vger.kernel.org +Description: + (WO) DDR5 ECS reset ECC counter. + 0 - normal, ECC counter running actively. + 1 - reset ECC counter to the default value. + +What: /sys/class/scrub/scrubX/regionN/threshold +Date: February 2024 +KernelVersion: 6.8 +Contact: linux-kernel@vger.kernel.org +Description: + (RW) DDR5 ECS threshold count per GB of memory cells. + +What: /sys/class/scrub/scrubX/regionN/threshold_available +Date: February 2024 +KernelVersion: 6.8 +Contact: linux-kernel@vger.kernel.org +Description: + (RO) Supported list of DDR5 ECS threshold count per GB of + memory cells. diff --git a/drivers/cxl/core/memscrub.c b/drivers/cxl/core/memscrub.c index b053dcb9197e..e227ea2f1508 100644 --- a/drivers/cxl/core/memscrub.c +++ b/drivers/cxl/core/memscrub.c @@ -558,9 +558,9 @@ cxl_mem_ecs_get_attrs(struct device *scrub_dev, int fru_id, return 0; } -static int __maybe_unused -cxl_mem_ecs_set_attrs(struct device *scrub_dev, int fru_id, - struct cxl_memdev_ecs_params *params, u8 param_type) +static int cxl_mem_ecs_set_attrs(struct device *scrub_dev, int fru_id, + struct cxl_memdev_ecs_params *params, + u8 param_type) { struct cxl_memdev *cxlmd = to_cxl_memdev(scrub_dev->parent); struct cxl_dev_state *cxlds = cxlmd->cxlds; @@ -677,8 +677,243 @@ cxl_mem_ecs_set_attrs(struct device *scrub_dev, int fru_id, return 0; } +static int cxl_mem_ecs_log_entry_type_write(struct device *dev, int region_id, long val) +{ + struct cxl_memdev_ecs_params params; + int ret; + + params.log_entry_type = val; + ret = cxl_mem_ecs_set_attrs(dev, region_id, ¶ms, + CXL_MEMDEV_ECS_PARAM_LOG_ENTRY_TYPE); + if (ret) { + dev_err(dev->parent, "Set CXL ECS params for log entry type failed ret=%d\n", + ret); + return ret; + } + + return 0; +} + +static int cxl_mem_ecs_threshold_write(struct device *dev, int region_id, long val) +{ + struct cxl_memdev_ecs_params params; + int ret; + + params.threshold = val; + ret = cxl_mem_ecs_set_attrs(dev, region_id, ¶ms, + CXL_MEMDEV_ECS_PARAM_THRESHOLD); + if (ret) { + dev_err(dev->parent, "Set CXL ECS params for threshold failed ret=%d\n", + ret); + return ret; + } + + return 0; +} + +static int cxl_mem_ecs_mode_write(struct device *dev, int region_id, long val) +{ + struct cxl_memdev_ecs_params params; + int ret; + + params.mode = val; + ret = cxl_mem_ecs_set_attrs(dev, region_id, ¶ms, + CXL_MEMDEV_ECS_PARAM_MODE); + if (ret) { + dev_err(dev->parent, "Set CXL ECS params for 
mode failed ret=%d\n", + ret); + return ret; + } + + return 0; +} + +static int cxl_mem_ecs_reset_counter_write(struct device *dev, int region_id, long val) +{ + struct cxl_memdev_ecs_params params; + int ret; + + params.reset_counter = val; + ret = cxl_mem_ecs_set_attrs(dev, region_id, ¶ms, + CXL_MEMDEV_ECS_PARAM_RESET_COUNTER); + if (ret) { + dev_err(dev->parent, "Set CXL ECS params for reset ECC counter failed ret=%d\n", + ret); + return ret; + } + + return 0; +} + +enum cxl_mem_ecs_scrub_attributes { + cxl_ecs_log_entry_type, + cxl_ecs_log_entry_type_per_dram, + cxl_ecs_log_entry_type_per_memory_media, + cxl_ecs_mode, + cxl_ecs_mode_counts_codewords, + cxl_ecs_mode_counts_rows, + cxl_ecs_reset, + cxl_ecs_threshold, + cxl_ecs_threshold_available, + cxl_ecs_max_attrs +}; + +static ssize_t cxl_mem_ecs_show_scrub_attr(struct device *dev, char *buf, + int attr_id) +{ + struct cxl_ecs_context *cxl_ecs_ctx = dev_get_drvdata(dev); + int region_id = cxl_ecs_ctx->region_id; + struct cxl_memdev_ecs_params params; + int ret; + + if (attr_id == cxl_ecs_log_entry_type || + attr_id == cxl_ecs_log_entry_type_per_dram || + attr_id == cxl_ecs_log_entry_type_per_memory_media || + attr_id == cxl_ecs_mode || + attr_id == cxl_ecs_mode_counts_codewords || + attr_id == cxl_ecs_mode_counts_rows || + attr_id == cxl_ecs_threshold) { + ret = cxl_mem_ecs_get_attrs(dev, region_id, ¶ms); + if (ret) { + dev_err(dev->parent, "Get CXL ECS params failed ret=%d\n", ret); + return ret; + } + } + switch (attr_id) { + case cxl_ecs_log_entry_type: + return sprintf(buf, "%d\n", params.log_entry_type); + case cxl_ecs_log_entry_type_per_dram: + if (params.log_entry_type == ECS_LOG_ENTRY_TYPE_DRAM) + return sysfs_emit(buf, "1\n"); + else + return sysfs_emit(buf, "0\n"); + case cxl_ecs_log_entry_type_per_memory_media: + if (params.log_entry_type == ECS_LOG_ENTRY_TYPE_MEM_MEDIA_FRU) + return sysfs_emit(buf, "1\n"); + else + return sysfs_emit(buf, "0\n"); + case cxl_ecs_mode: + return sprintf(buf, "%d\n", params.mode); + case cxl_ecs_mode_counts_codewords: + if (params.mode == ECS_MODE_COUNTS_CODEWORDS) + return sysfs_emit(buf, "1\n"); + else + return sysfs_emit(buf, "0\n"); + case cxl_ecs_mode_counts_rows: + if (params.mode == ECS_MODE_COUNTS_ROWS) + return sysfs_emit(buf, "1\n"); + else + return sysfs_emit(buf, "0\n"); + case cxl_ecs_threshold: + return sprintf(buf, "%d\n", params.threshold); + case cxl_ecs_threshold_available: + return sysfs_emit(buf, "256,1024,4096\n"); + } + + return -EOPNOTSUPP; +} + +static ssize_t cxl_mem_ecs_store_scrub_attr(struct device *dev, const char *buf, + size_t count, int attr_id) +{ + struct cxl_ecs_context *cxl_ecs_ctx = dev_get_drvdata(dev); + int region_id = cxl_ecs_ctx->region_id; + long val; + int ret; + + ret = kstrtol(buf, 10, &val); + if (ret < 0) + return ret; + + switch (attr_id) { + case cxl_ecs_log_entry_type: + ret = cxl_mem_ecs_log_entry_type_write(dev, region_id, val); + if (ret) + return -EOPNOTSUPP; + break; + case cxl_ecs_mode: + ret = cxl_mem_ecs_mode_write(dev, region_id, val); + if (ret) + return -EOPNOTSUPP; + break; + case cxl_ecs_reset: + ret = cxl_mem_ecs_reset_counter_write(dev, region_id, val); + if (ret) + return -EOPNOTSUPP; + break; + case cxl_ecs_threshold: + ret = cxl_mem_ecs_threshold_write(dev, region_id, val); + if (ret) + return -EOPNOTSUPP; + break; + default: + return -EOPNOTSUPP; + } + + return count; +} + +#define CXL_ECS_SCRUB_ATTR_RW(attr) \ +static ssize_t attr##_show(struct device *dev, \ + struct device_attribute *attr, char *buf) \ +{ \ + return 
cxl_mem_ecs_show_scrub_attr(dev, buf, (cxl_ecs_##attr)); \ +} \ +static ssize_t attr##_store(struct device *dev, \ + struct device_attribute *attr, \ + const char *buf, size_t count) \ +{ \ + return cxl_mem_ecs_store_scrub_attr(dev, buf, count, (cxl_ecs_##attr));\ +} \ +static DEVICE_ATTR_RW(attr) + +#define CXL_ECS_SCRUB_ATTR_RO(attr) \ +static ssize_t attr##_show(struct device *dev, \ + struct device_attribute *attr, char *buf) \ +{ \ + return cxl_mem_ecs_show_scrub_attr(dev, buf, (cxl_ecs_##attr)); \ +} \ +static DEVICE_ATTR_RO(attr) + +#define CXL_ECS_SCRUB_ATTR_WO(attr) \ +static ssize_t attr##_store(struct device *dev, \ + struct device_attribute *attr, \ + const char *buf, size_t count) \ +{ \ + return cxl_mem_ecs_store_scrub_attr(dev, buf, count, (cxl_ecs_##attr));\ +} \ +static DEVICE_ATTR_WO(attr) + +CXL_ECS_SCRUB_ATTR_RW(log_entry_type); +CXL_ECS_SCRUB_ATTR_RO(log_entry_type_per_dram); +CXL_ECS_SCRUB_ATTR_RO(log_entry_type_per_memory_media); +CXL_ECS_SCRUB_ATTR_RW(mode); +CXL_ECS_SCRUB_ATTR_RO(mode_counts_codewords); +CXL_ECS_SCRUB_ATTR_RO(mode_counts_rows); +CXL_ECS_SCRUB_ATTR_WO(reset); +CXL_ECS_SCRUB_ATTR_RW(threshold); +CXL_ECS_SCRUB_ATTR_RO(threshold_available); + +static struct attribute *cxl_mem_ecs_scrub_attrs[] = { + &dev_attr_log_entry_type.attr, + &dev_attr_log_entry_type_per_dram.attr, + &dev_attr_log_entry_type_per_memory_media.attr, + &dev_attr_mode.attr, + &dev_attr_mode_counts_codewords.attr, + &dev_attr_mode_counts_rows.attr, + &dev_attr_reset.attr, + &dev_attr_threshold.attr, + &dev_attr_threshold_available.attr, + NULL +}; + +static struct attribute_group cxl_mem_ecs_attr_group = { + .attrs = cxl_mem_ecs_scrub_attrs +}; + int cxl_mem_ecs_init(struct cxl_memdev *cxlmd, int region_id) { + char scrub_name[CXL_MEMDEV_MAX_NAME_LENGTH]; struct cxl_mbox_supp_feat_entry feat_entry; struct cxl_ecs_context *cxl_ecs_ctx; int nr_media_frus; @@ -704,6 +939,16 @@ int cxl_mem_ecs_init(struct cxl_memdev *cxlmd, int region_id) cxl_ecs_ctx->set_feat_size = feat_entry.set_size; cxl_ecs_ctx->region_id = region_id; + snprintf(scrub_name, sizeof(scrub_name), "%s_%s_region%d", + "cxl_ecs", dev_name(&cxlmd->dev), cxl_ecs_ctx->region_id); + struct device *cxl_scrub_dev = devm_scrub_device_register(&cxlmd->dev, + scrub_name, + cxl_ecs_ctx, NULL, + cxl_ecs_ctx->region_id, + &cxl_mem_ecs_attr_group); + if (IS_ERR(cxl_scrub_dev)) + return PTR_ERR(cxl_scrub_dev); + return 0; } EXPORT_SYMBOL_NS_GPL(cxl_mem_ecs_init, CXL); From patchwork Fri Feb 23 14:37:21 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 205431 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:a81b:b0:108:e6aa:91d0 with SMTP id bq27csp626610dyb; Fri, 23 Feb 2024 06:41:09 -0800 (PST) X-Forwarded-Encrypted: i=3; AJvYcCX65duAu0xuxmlPU33i0WyZ28IXsRaENXLcTAd2ydPsz7iPRHdcjRzjK3tzcj7c2lyN4ubSZlHF5ugbCGzWxaaHF9ApaQ== X-Google-Smtp-Source: AGHT+IGcZebzD3FWRlM/dkbt3pNDhd0dz8l3VQ49kYlUx1+8ccB2Mn49ylqDplOXErspScP6iEM1 X-Received: by 2002:a17:906:cf8b:b0:a3f:3d0d:30bf with SMTP id um11-20020a170906cf8b00b00a3f3d0d30bfmr9661ejb.0.1708699268759; Fri, 23 Feb 2024 06:41:08 -0800 (PST) ARC-Seal: i=2; a=rsa-sha256; t=1708699268; cv=pass; d=google.com; s=arc-20160816; b=MLtXPRJAA/V60hTgu9CsdT+lszt2/ZrOHaMnEEY1KtOdJZ5ATgzh6LtX8rhqmLR2G9 ds87NSM2XOBFNGTjIzHEWN4j55X0KQEurscj6trT3poYNNauc760E6pKCzNX6DQaoh/p W3o7mgLAakonnwjD4kPIV2fZr+f6tKaRa5612m3nXk7i9KYLWcOuYJlfUjP1BN+hosgv 
Subject: [RFC PATCH v7 10/12] ACPI:RAS2: Add common library for RAS2 PCC interfaces
Date: Fri, 23 Feb 2024 22:37:21 +0800
Message-ID: <20240223143723.1574-11-shiju.jose@huawei.com>
X-Mailer: git-send-email 2.35.1.windows.2
In-Reply-To: <20240223143723.1574-1-shiju.jose@huawei.com>
References: <20240223143723.1574-1-shiju.jose@huawei.com>

From: A Somasundaram

The code contains the PCC interfaces for the RAS2 table and the functions to
send RAS2 commands as per ACPI 6.5 and later revisions. References for this
implementation: ACPI specification 6.5, section 5.2.21 for the RAS2 table and
chapter 14 for PCC (Platform Communication Channel).

The driver uses the PCC interfaces to communicate with the ACPI HW. This code
implements the PCC interfaces and the functions to send the RAS2 commands to
be used by OSPM.

Signed-off-by: A Somasundaram
Co-developed-by: Shiju Jose
Signed-off-by: Shiju Jose
---
 drivers/acpi/Kconfig            |  14 ++
 drivers/acpi/Makefile           |   1 +
 drivers/acpi/ras2_acpi_common.c | 272 ++++++++++++++++++++++++++++++++
 include/acpi/ras2_acpi.h        |  59 +++++++
 4 files changed, 346 insertions(+)
 create mode 100755 drivers/acpi/ras2_acpi_common.c
 create mode 100644 include/acpi/ras2_acpi.h

diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
index 3c3f8037ebed..6f69c9976c4f 100644
--- a/drivers/acpi/Kconfig
+++ b/drivers/acpi/Kconfig
@@ -284,6 +284,20 @@ config ACPI_CPPC_LIB
 	  If your platform does not support CPPC in firmware, leave this option disabled.
 
+config ACPI_RAS2
+	bool "ACPI RAS2 driver"
+	depends on ACPI_PROCESSOR
+	select MAILBOX
+	select PCC
+	help
+	  The driver adds support for PCC (platform communication
+	  channel) interfaces to communicate with an ACPI-compliant
+	  hardware platform that supports RAS2 (the RAS2 Feature Table).
+	  The driver adds support for RAS2 (extraction of the RAS2
+	  table from the OS system tables), PCC interfaces and OSPM
+	  interfaces to send RAS2 commands. The driver adds a platform
+	  device which binds to the RAS2 memory driver.
+ config ACPI_PROCESSOR tristate "Processor" depends on X86 || ARM64 || LOONGARCH diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile index 12ef8180d272..b12fba9cff06 100644 --- a/drivers/acpi/Makefile +++ b/drivers/acpi/Makefile @@ -105,6 +105,7 @@ obj-$(CONFIG_ACPI_CUSTOM_METHOD)+= custom_method.o obj-$(CONFIG_ACPI_BGRT) += bgrt.o obj-$(CONFIG_ACPI_CPPC_LIB) += cppc_acpi.o obj-$(CONFIG_ACPI_SPCR_TABLE) += spcr.o +obj-$(CONFIG_ACPI_RAS2) += ras2_acpi_common.o obj-$(CONFIG_ACPI_DEBUGGER_USER) += acpi_dbg.o obj-$(CONFIG_ACPI_PPTT) += pptt.o obj-$(CONFIG_ACPI_PFRUT) += pfr_update.o pfr_telemetry.o diff --git a/drivers/acpi/ras2_acpi_common.c b/drivers/acpi/ras2_acpi_common.c new file mode 100755 index 000000000000..c6e4ed96cd81 --- /dev/null +++ b/drivers/acpi/ras2_acpi_common.c @@ -0,0 +1,272 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * ACPI RAS2 table processing common functions + * + * (C) Copyright 2014, 2015 Hewlett-Packard Enterprises. + * + * Copyright (c) 2024 HiSilicon Limited. + * + * Support for + * RAS2 - ACPI 6.5 Specification, section 5.2.21 + * PCC(Platform Communications Channel) - ACPI 6.5 Specification, + * chapter 14. + * + * Code contains common functions for RAS2. + * PCC(Platform communication channel) interfaces for the RAS2 + * and the functions for sending RAS2 commands to the ACPI HW. + */ + +#include +#include +#include +#include +#include +#include + +static int ras2_check_pcc_chan(struct ras2_context *ras2_ctx) +{ + struct acpi_ras2_shared_memory __iomem *generic_comm_base = ras2_ctx->pcc_comm_addr; + ktime_t next_deadline = ktime_add(ktime_get(), ras2_ctx->deadline); + + while (!ktime_after(ktime_get(), next_deadline)) { + /* + * As per ACPI spec, the PCC space wil be initialized by + * platform and should have set the command completion bit when + * PCC can be used by OSPM + */ + if (readw_relaxed(&generic_comm_base->status) & RAS2_PCC_CMD_COMPLETE) + return 0; + /* + * Reducing the bus traffic in case this loop takes longer than + * a few retries. + */ + udelay(10); + } + + return -EIO; +} + +/** + * ras2_send_pcc_cmd() - Send RAS2 command via PCC channel + * @ras2_ctx: pointer to the ras2 context structure + * @cmd: command to send + * + * Returns: 0 on success, an error otherwise + */ +int ras2_send_pcc_cmd(struct ras2_context *ras2_ctx, u16 cmd) +{ + int ret; + struct acpi_ras2_shared_memory *generic_comm_base = + (struct acpi_ras2_shared_memory *)ras2_ctx->pcc_comm_addr; + static ktime_t last_cmd_cmpl_time, last_mpar_reset; + struct mbox_chan *pcc_channel; + static int mpar_count; + unsigned int time_delta; + + if (cmd == RAS2_PCC_CMD_EXEC) { + ret = ras2_check_pcc_chan(ras2_ctx); + if (ret) + return ret; + } + pcc_channel = ras2_ctx->pcc_chan->mchan; + + /* + * Handle the Minimum Request Turnaround Time(MRTT) + * "The minimum amount of time that OSPM must wait after the completion + * of a command before issuing the next command, in microseconds" + */ + if (ras2_ctx->pcc_mrtt) { + time_delta = ktime_us_delta(ktime_get(), last_cmd_cmpl_time); + if (ras2_ctx->pcc_mrtt > time_delta) + udelay(ras2_ctx->pcc_mrtt - time_delta); + } + + /* + * Handle the non-zero Maximum Periodic Access Rate(MPAR) + * "The maximum number of periodic requests that the subspace channel can + * support, reported in commands per minute. 0 indicates no limitation." + * + * This parameter should be ideally zero or large enough so that it can + * handle maximum number of requests that all the cores in the system can + * collectively generate. 
If it is not, we will follow the spec and just + * not send the request to the platform after hitting the MPAR limit in + * any 60s window + */ + if (ras2_ctx->pcc_mpar) { + if (mpar_count == 0) { + time_delta = ktime_ms_delta(ktime_get(), last_mpar_reset); + if (time_delta < 60 * MSEC_PER_SEC) { + dev_dbg(ras2_ctx->dev, + "PCC cmd not sent due to MPAR limit"); + return -EIO; + } + last_mpar_reset = ktime_get(); + mpar_count = ras2_ctx->pcc_mpar; + } + mpar_count--; + } + + /* Write to the shared comm region. */ + writew_relaxed(cmd, &generic_comm_base->command); + + /* Flip CMD COMPLETE bit */ + writew_relaxed(0, &generic_comm_base->status); + + /* Ring doorbell */ + ret = mbox_send_message(pcc_channel, &cmd); + if (ret < 0) { + dev_err(ras2_ctx->dev, + "Err sending PCC mbox message. cmd:%d, ret:%d\n", + cmd, ret); + return ret; + } + + /* + * For READs we need to ensure the cmd completed to ensure + * the ensuing read()s can proceed. For WRITEs we dont care + * because the actual write()s are done before coming here + * and the next READ or WRITE will check if the channel + * is busy/free at the entry of this call. + * + * If Minimum Request Turnaround Time is non-zero, we need + * to record the completion time of both READ and WRITE + * command for proper handling of MRTT, so we need to check + * for pcc_mrtt in addition to CMD_READ + */ + if (cmd == RAS2_PCC_CMD_EXEC || ras2_ctx->pcc_mrtt) { + ret = ras2_check_pcc_chan(ras2_ctx); + if (ras2_ctx->pcc_mrtt) + last_cmd_cmpl_time = ktime_get(); + } + + if (pcc_channel->mbox->txdone_irq) + mbox_chan_txdone(pcc_channel, ret); + else + mbox_client_txdone(pcc_channel, ret); + + return ret; +} +EXPORT_SYMBOL_GPL(ras2_send_pcc_cmd); + +/** + * ras2_register_pcc_channel() - Register PCC channel + * @ras2_ctx: pointer to the ras2 context structure + * + * Returns: 0 on success, an error otherwise + */ +int ras2_register_pcc_channel(struct ras2_context *ras2_ctx) +{ + u64 usecs_lat; + unsigned int len; + struct pcc_mbox_chan *pcc_chan; + struct mbox_client *ras2_mbox_cl; + struct acpi_pcct_hw_reduced *ras2_ss; + + ras2_mbox_cl = &ras2_ctx->mbox_client; + if (!ras2_mbox_cl || ras2_ctx->pcc_subspace_idx < 0) + return -EINVAL; + + pcc_chan = pcc_mbox_request_channel(ras2_mbox_cl, + ras2_ctx->pcc_subspace_idx); + + if (IS_ERR(pcc_chan)) { + dev_err(ras2_ctx->dev, + "Failed to find PCC channel for subspace %d\n", + ras2_ctx->pcc_subspace_idx); + return -ENODEV; + } + ras2_ctx->pcc_chan = pcc_chan; + /* + * The PCC mailbox controller driver should + * have parsed the PCCT (global table of all + * PCC channels) and stored pointers to the + * subspace communication region in con_priv. + */ + ras2_ss = pcc_chan->mchan->con_priv; + + if (!ras2_ss) { + dev_err(ras2_ctx->dev, "No PCC subspace found for RAS2\n"); + pcc_mbox_free_channel(ras2_ctx->pcc_chan); + return -ENODEV; + } + + /* + * This is the shared communication region + * for the OS and Platform to communicate over. + */ + ras2_ctx->comm_base_addr = ras2_ss->base_address; + len = ras2_ss->length; + dev_dbg(ras2_ctx->dev, "PCC subspace for RAS2=0x%llx len=%d\n", + ras2_ctx->comm_base_addr, len); + + /* + * ras2_ss->latency is just a Nominal value. In reality + * the remote processor could be much slower to reply. + * So add an arbitrary amount of wait on top of Nominal. 
+ */ + usecs_lat = RAS2_NUM_RETRIES * ras2_ss->latency; + ras2_ctx->deadline = ns_to_ktime(usecs_lat * NSEC_PER_USEC); + ras2_ctx->pcc_mrtt = ras2_ss->min_turnaround_time; + ras2_ctx->pcc_mpar = ras2_ss->max_access_rate; + ras2_ctx->pcc_comm_addr = acpi_os_ioremap(ras2_ctx->comm_base_addr, + len); + dev_dbg(ras2_ctx->dev, "pcc_comm_addr=%p\n", + ras2_ctx->pcc_comm_addr); + + /* Set flag so that we dont come here for each CPU. */ + ras2_ctx->pcc_channel_acquired = true; + + return 0; +} +EXPORT_SYMBOL_GPL(ras2_register_pcc_channel); + +/** + * ras2_unregister_pcc_channel() - Unregister PCC channel + * @ras2_ctx: pointer to the ras2 context structure + * + * Returns: 0 on success, an error otherwise + */ +int ras2_unregister_pcc_channel(struct ras2_context *ras2_ctx) +{ + if (!ras2_ctx->pcc_chan) + return -EINVAL; + + pcc_mbox_free_channel(ras2_ctx->pcc_chan); + + return 0; +} +EXPORT_SYMBOL_GPL(ras2_unregister_pcc_channel); + +/** + * ras2_add_platform_device() - Add a platform device for RAS2 + * @name: name of the device we're adding + * @data: platform specific data for this platform device + * @size: size of platform specific data + * + * Returns: pointer to platform device on success, an error otherwise + */ +struct platform_device *ras2_add_platform_device(char *name, const void *data, + size_t size) +{ + int ret; + struct platform_device *pdev; + + pdev = platform_device_alloc(name, PLATFORM_DEVID_AUTO); + if (!pdev) + return NULL; + + ret = platform_device_add_data(pdev, data, size); + if (ret) + goto dev_put; + + ret = platform_device_add(pdev); + if (ret) + goto dev_put; + + return pdev; + +dev_put: + platform_device_put(pdev); + + return ERR_PTR(ret); +} diff --git a/include/acpi/ras2_acpi.h b/include/acpi/ras2_acpi.h new file mode 100644 index 000000000000..5e9ac788670a --- /dev/null +++ b/include/acpi/ras2_acpi.h @@ -0,0 +1,59 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * RAS2 ACPI driver header file + * + * (C) Copyright 2014, 2015 Hewlett-Packard Enterprises + * + * Copyright (c) 2024 HiSilicon Limited + */ + +#ifndef _RAS2_ACPI_H +#define _RAS2_ACPI_H + +#include +#include +#include +#include +#include + +#define RAS2_PCC_CMD_COMPLETE 1 + +/* RAS2 specific PCC commands */ +#define RAS2_PCC_CMD_EXEC 0x01 + +#define RAS2_FAILURE 0 +#define RAS2_SUCCESS 1 + +/* + * Arbitrary Retries for PCC commands because the + * remote processor could be much slower to reply. 
+ */ +#define RAS2_NUM_RETRIES 600 + +/* + * Data structures for PCC communication and RAS2 table + */ +struct ras2_context { + struct device *dev; + int id; + struct mbox_client mbox_client; + struct pcc_mbox_chan *pcc_chan; + void __iomem *pcc_comm_addr; + u64 comm_base_addr; + int pcc_subspace_idx; + bool pcc_channel_acquired; + ktime_t deadline; + unsigned int pcc_mpar; + unsigned int pcc_mrtt; + /* Lock to provide mutually exclusive access to PCC channel */ + spinlock_t spinlock; + struct device *scrub_dev; + const struct ras2_hw_scrub_ops *ops; +}; + +struct platform_device *ras2_add_platform_device(char *name, const void *data, + size_t size); +int ras2_send_pcc_cmd(struct ras2_context *ras2_ctx, u16 cmd); +int ras2_register_pcc_channel(struct ras2_context *ras2_ctx); +int ras2_unregister_pcc_channel(struct ras2_context *ras2_ctx); +#endif /* _RAS2_ACPI_H */ From patchwork Fri Feb 23 14:37:22 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 205430 From: To: CC: Subject: [RFC PATCH v7 11/12] ACPI:RAS2: Add driver for ACPI RAS2 feature table (RAS2) Date: Fri, 23 Feb 2024 22:37:22 +0800 Message-ID:
<20240223143723.1574-12-shiju.jose@huawei.com> X-Mailer: git-send-email 2.35.1.windows.2 In-Reply-To: <20240223143723.1574-1-shiju.jose@huawei.com> References: <20240223143723.1574-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org MIME-Version: 1.0 From: Shiju Jose Add support for the ACPI RAS2 feature table (RAS2) defined in the ACPI 6.5 Specification, section 5.2.21. This driver contains the RAS2 init code, which extracts the RAS2 table. The driver adds a platform device for each memory feature, which binds to the RAS2 memory driver. Signed-off-by: Shiju Jose --- drivers/acpi/Makefile | 2 +- drivers/acpi/ras2_acpi.c | 97 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 98 insertions(+), 1 deletion(-) create mode 100755 drivers/acpi/ras2_acpi.c diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile index b12fba9cff06..e3fd6feb3e54 100644 --- a/drivers/acpi/Makefile +++ b/drivers/acpi/Makefile @@ -105,7 +105,7 @@ obj-$(CONFIG_ACPI_CUSTOM_METHOD)+= custom_method.o obj-$(CONFIG_ACPI_BGRT) += bgrt.o obj-$(CONFIG_ACPI_CPPC_LIB) += cppc_acpi.o obj-$(CONFIG_ACPI_SPCR_TABLE) += spcr.o -obj-$(CONFIG_ACPI_RAS2) += ras2_acpi_common.o +obj-$(CONFIG_ACPI_RAS2) += ras2_acpi_common.o ras2_acpi.o obj-$(CONFIG_ACPI_DEBUGGER_USER) += acpi_dbg.o obj-$(CONFIG_ACPI_PPTT) += pptt.o obj-$(CONFIG_ACPI_PFRUT) += pfr_update.o pfr_telemetry.o diff --git a/drivers/acpi/ras2_acpi.c b/drivers/acpi/ras2_acpi.c new file mode 100755 index 000000000000..cd2e8f5ad253 --- /dev/null +++ b/drivers/acpi/ras2_acpi.c @@ -0,0 +1,97 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * ras2_acpi.c - Implementation of ACPI RAS2 feature table processing + * functions. + * + * Copyright (c) 2023 HiSilicon Limited. + * + * Support for + * RAS2 - ACPI 6.5 Specification, section 5.2.21 + * + * The driver contains the RAS2 init, which extracts the RAS2 table and + * registers the PCC channel for communicating with the ACPI compliant + * platform that contains RAS2 command support in hardware. The driver + * adds a platform device which binds to the RAS2 memory driver.
+ */ + +#define pr_fmt(fmt) "ACPI RAS2: " fmt + +#include +#include +#include +#include +#include +#include + +#define RAS2_FEATURE_TYPE_MEMORY 0x00 + +static int __init ras2_acpi_init(void) +{ + u8 count; + acpi_status status; + acpi_size ras2_size; + int pcc_subspace_idx; + struct platform_device *pdev; + struct acpi_table_ras2 *pRas2Table; + struct acpi_ras2_pcc_desc *pcc_desc_list; + struct platform_device **pdev_list = NULL; + struct acpi_table_header *pAcpiTable = NULL; + + status = acpi_get_table("RAS2", 0, &pAcpiTable); + if (ACPI_FAILURE(status) || !pAcpiTable) { + pr_err("ACPI RAS2 driver failed to initialize, get table failed\n"); + return RAS2_FAILURE; + } + + ras2_size = pAcpiTable->length; + if (ras2_size < sizeof(struct acpi_table_ras2)) { + pr_err("ACPI RAS2 table present but broken (too short #1)\n"); + goto free_ras2_table; + } + + pRas2Table = (struct acpi_table_ras2 *)pAcpiTable; + + if (pRas2Table->num_pcc_descs <= 0) { + pr_err("ACPI RAS2 table does not contain PCC descriptors\n"); + goto free_ras2_table; + } + + pdev_list = kzalloc((pRas2Table->num_pcc_descs * sizeof(struct platform_device *)), + GFP_KERNEL); + if (!pdev_list) + goto free_ras2_table; + + pcc_desc_list = (struct acpi_ras2_pcc_desc *) + ((void *)pRas2Table + sizeof(struct acpi_table_ras2)); + count = 0; + while (count < pRas2Table->num_pcc_descs) { + if (pcc_desc_list->feature_type == RAS2_FEATURE_TYPE_MEMORY) { + pcc_subspace_idx = pcc_desc_list->channel_id; + /* Add the platform device and bind ras2 memory driver */ + pdev = ras2_add_platform_device("ras2", &pcc_subspace_idx, + sizeof(pcc_subspace_idx)); + if (IS_ERR_OR_NULL(pdev)) + goto free_ras2_pdev; + pdev_list[count] = pdev; + } + count++; + pcc_desc_list++; + } + + acpi_put_table(pAcpiTable); + return RAS2_SUCCESS; + +free_ras2_pdev: + while (count > 0) + platform_device_put(pdev_list[--count]); + kfree(pdev_list); + +free_ras2_table: + acpi_put_table(pAcpiTable); + return RAS2_FAILURE; +} +late_initcall(ras2_acpi_init) From patchwork Fri Feb 23 14:37:23 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Shiju Jose X-Patchwork-Id: 205432
From: To: CC: Subject: [RFC PATCH v7 12/12] memory: RAS2: Add memory RAS2 driver Date: Fri, 23 Feb 2024 22:37:23 +0800 Message-ID: <20240223143723.1574-13-shiju.jose@huawei.com> X-Mailer: git-send-email 2.35.1.windows.2 In-Reply-To: <20240223143723.1574-1-shiju.jose@huawei.com> References: <20240223143723.1574-1-shiju.jose@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org MIME-Version: 1.0 From: Shiju Jose The memory RAS2 driver binds to the platform device added by the ACPI RAS2 driver. The driver registers the PCC channel for communicating with the ACPI compliant platform that contains RAS2 command support in the hardware. Add interface functions to support configuring the parameters of the HW patrol scrubbers in the system, which are exposed to the kernel via RAS2 and PCC, using the RAS2 commands. Add support for RAS2 platform devices to register with the scrub subsystem driver. This enables the user to configure the parameters of the HW patrol scrubbers, which are exposed to the kernel via the RAS2 table, through the scrub sysfs attributes. Open Question: Sysfs scrub control attribute "enable_background_scrub" is added for RAS2, based on the feedback from Bill Schwartz. --- drivers/memory/Kconfig | 14 ++ drivers/memory/Makefile | 2 + drivers/memory/ras2.c | 364 +++++++++++++++++++++++++++++++++++ drivers/memory/ras2_common.c | 282 +++++++++++++++++++++++++++ include/memory/ras2.h | 88 +++++++++ 5 files changed, 750 insertions(+) create mode 100644 drivers/memory/ras2.c create mode 100644 drivers/memory/ras2_common.c create mode 100755 include/memory/ras2.h diff --git a/drivers/memory/Kconfig b/drivers/memory/Kconfig index d2e015c09d83..705f346f23de 100644 --- a/drivers/memory/Kconfig +++ b/drivers/memory/Kconfig @@ -225,6 +225,20 @@ config STM32_FMC2_EBI devices (like SRAM, ethernet adapters, FPGAs, LCD displays, ...) on SOCs containing the FMC2 External Bus Interface. +config MEM_RAS2 + bool "Memory RAS2 driver" + depends on ACPI_RAS2 + depends on SCRUB + help + The driver binds to the platform device added by the ACPI RAS2 + driver. It registers the PCC channel for communicating with + the ACPI compliant platform that contains RAS2 command support + in the hardware. + It registers with the scrub configure driver to provide sysfs interfaces + for configuring the hw patrol scrubber in the system, which is exposed + via the ACPI RAS2 table and PCC.
Provides the interface functions + support configuring the HW patrol scrubbers in the system. + source "drivers/memory/samsung/Kconfig" source "drivers/memory/tegra/Kconfig" source "drivers/memory/scrub/Kconfig" diff --git a/drivers/memory/Makefile b/drivers/memory/Makefile index 4b37312cb342..52afd9d2259a 100644 --- a/drivers/memory/Makefile +++ b/drivers/memory/Makefile @@ -7,6 +7,8 @@ obj-$(CONFIG_DDR) += jedec_ddr_data.o ifeq ($(CONFIG_DDR),y) obj-$(CONFIG_OF) += of_memory.o endif +obj-$(CONFIG_MEM_RAS2) += ras2_common.o ras2.o + obj-$(CONFIG_ARM_PL172_MPMC) += pl172.o obj-$(CONFIG_ATMEL_EBI) += atmel-ebi.o obj-$(CONFIG_BRCMSTB_DPFE) += brcmstb_dpfe.o diff --git a/drivers/memory/ras2.c b/drivers/memory/ras2.c new file mode 100644 index 000000000000..12fd1f4580d4 --- /dev/null +++ b/drivers/memory/ras2.c @@ -0,0 +1,364 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * ras2.c - ACPI RAS2 memory driver + * + * Copyright (c) 2023 HiSilicon Limited. + * + * - Registers the PCC channel for communicating with the + * ACPI compliant platform that contains RAS2 command + * support in the hardware. + * - Provides functions to configure HW patrol scrubs + * in the system. + * - Registers with the scrub configure driver for the + * hw patrol scrub in the system, which exposed via + * the ACPI RAS2 table and PCC. + */ + +#define pr_fmt(fmt) "MEMORY RAS2: " fmt + +#include +#include +#include +#include + +#include +#include + +/* RAS2 specific definitions. */ +#define RAS2_SCRUB "ras2_scrub" +#define RAS2_ID_FORMAT RAS2_SCRUB "%d" +#define RAS2_SUPPORT_HW_PARTOL_SCRUB BIT(0) +#define RAS2_TYPE_PATROL_SCRUB 0x0000 + +#define RAS2_GET_PATROL_PARAMETERS 0x01 +#define RAS2_START_PATROL_SCRUBBER 0x02 +#define RAS2_STOP_PATROL_SCRUBBER 0x03 + +#define RAS2_PATROL_SCRUB_RATE_VALID BIT(0) +#define RAS2_PATROL_SCRUB_RATE_IN_MASK GENMASK(15, 8) +#define RAS2_PATROL_SCRUB_EN_BACKGROUND BIT(0) +#define RAS2_PATROL_SCRUB_RATE_OUT_MASK GENMASK(7, 0) +#define RAS2_PATROL_SCRUB_MIN_RATE_OUT_MASK GENMASK(15, 8) +#define RAS2_PATROL_SCRUB_MAX_RATE_OUT_MASK GENMASK(23, 16) + +static void ras2_tx_done(struct mbox_client *cl, void *msg, int ret) +{ + if (ret) { + dev_dbg(cl->dev, "TX did not complete: CMD sent:%x, ret:%d\n", + *(u16 *)msg, ret); + } else { + dev_dbg(cl->dev, "TX completed. CMD sent:%x, ret:%d\n", + *(u16 *)msg, ret); + } +} + +/* + * The below functions are exposed to OSPM, to query, configure and + * initiate memory patrol scrub. 
+ */ +static int ras2_is_patrol_scrub_support(struct ras2_context *ras2_ctx) +{ + int ret; + struct acpi_ras2_shared_memory __iomem *generic_comm_base; + + if (!ras2_ctx || !ras2_ctx->pcc_comm_addr) + return -EFAULT; + + generic_comm_base = ras2_ctx->pcc_comm_addr; + guard(spinlock_irqsave)(&ras2_ctx->spinlock); + generic_comm_base->set_capabilities[0] = 0; + + /* send command for reading RAS2 capabilities */ + ret = ras2_send_pcc_cmd(ras2_ctx, RAS2_PCC_CMD_EXEC); + if (ret) { + dev_err(ras2_ctx->dev, + "%s: ras2_send_pcc_cmd failed\n", __func__); + return ret; + } + + return generic_comm_base->features[0] & RAS2_SUPPORT_HW_PARTOL_SCRUB; +} + +static int ras2_get_patrol_scrub_params(struct ras2_context *ras2_ctx, + struct ras2_scrub_params *params) +{ + int ret = 0; + u8 min_supp_scrub_rate, max_supp_scrub_rate; + struct acpi_ras2_shared_memory __iomem *generic_comm_base; + struct acpi_ras2_patrol_scrub_parameter __iomem *patrol_scrub_params; + + if (!ras2_ctx || !ras2_ctx->pcc_comm_addr) + return -EFAULT; + + generic_comm_base = ras2_ctx->pcc_comm_addr; + patrol_scrub_params = ras2_ctx->pcc_comm_addr + sizeof(*generic_comm_base); + + guard(spinlock_irqsave)(&ras2_ctx->spinlock); + generic_comm_base->set_capabilities[0] = RAS2_SUPPORT_HW_PARTOL_SCRUB; + /* send command for reading RAS2 capabilities */ + ret = ras2_send_pcc_cmd(ras2_ctx, RAS2_PCC_CMD_EXEC); + if (ret) { + dev_err(ras2_ctx->dev, + "%s: ras2_send_pcc_cmd failed\n", __func__); + return ret; + } + + if (!(generic_comm_base->features[0] & RAS2_SUPPORT_HW_PARTOL_SCRUB) || + !(generic_comm_base->num_parameter_blocks)) { + dev_err(ras2_ctx->dev, + "%s: Platform does not support HW Patrol Scrubber\n", __func__); + return -EOPNOTSUPP; + } + + if (!patrol_scrub_params->requested_address_range[1]) { + dev_err(ras2_ctx->dev, + "%s: Invalid requested address range, \ + requested_address_range[0]=0x%llx \ + requested_address_range[1]=0x%llx\n", + __func__, + patrol_scrub_params->requested_address_range[0], + patrol_scrub_params->requested_address_range[1]); + return -EOPNOTSUPP; + } + + generic_comm_base->set_capabilities[0] = RAS2_SUPPORT_HW_PARTOL_SCRUB; + patrol_scrub_params->header.type = RAS2_TYPE_PATROL_SCRUB; + patrol_scrub_params->patrol_scrub_command = RAS2_GET_PATROL_PARAMETERS; + + /* send command for reading the HW patrol scrub parameters */ + ret = ras2_send_pcc_cmd(ras2_ctx, RAS2_PCC_CMD_EXEC); + if (ret) { + dev_err(ras2_ctx->dev, + "%s: failed to read HW patrol scrub parameters\n", + __func__); + return ret; + } + + /* copy output scrub parameters */ + params->addr_base = patrol_scrub_params->actual_address_range[0]; + params->addr_size = patrol_scrub_params->actual_address_range[1]; + params->flags = patrol_scrub_params->flags; + params->rate = FIELD_GET(RAS2_PATROL_SCRUB_RATE_OUT_MASK, + patrol_scrub_params->scrub_params_out); + min_supp_scrub_rate = FIELD_GET(RAS2_PATROL_SCRUB_MIN_RATE_OUT_MASK, + patrol_scrub_params->scrub_params_out); + max_supp_scrub_rate = FIELD_GET(RAS2_PATROL_SCRUB_MAX_RATE_OUT_MASK, + patrol_scrub_params->scrub_params_out); + snprintf(params->rate_avail, RAS2_MAX_RATE_RANGE_LENGTH, + "%d-%d", min_supp_scrub_rate, max_supp_scrub_rate); + + return 0; +} + +static int ras2_enable_patrol_scrub(struct ras2_context *ras2_ctx, bool enable) +{ + int ret = 0; + struct ras2_scrub_params params; + struct acpi_ras2_shared_memory __iomem *generic_comm_base; + u8 scrub_rate_to_set, min_supp_scrub_rate, max_supp_scrub_rate; + struct acpi_ras2_patrol_scrub_parameter __iomem *patrol_scrub_params; + + if 
(!ras2_ctx || !ras2_ctx->pcc_comm_addr) + return -EFAULT; + + generic_comm_base = ras2_ctx->pcc_comm_addr; + patrol_scrub_params = ras2_ctx->pcc_comm_addr + sizeof(*generic_comm_base); + + if (enable) { + ret = ras2_get_patrol_scrub_params(ras2_ctx, ¶ms); + if (ret) + return ret; + } + + guard(spinlock_irqsave)(&ras2_ctx->spinlock); + generic_comm_base->set_capabilities[0] = RAS2_SUPPORT_HW_PARTOL_SCRUB; + patrol_scrub_params->header.type = RAS2_TYPE_PATROL_SCRUB; + + if (enable) { + patrol_scrub_params->patrol_scrub_command = RAS2_START_PATROL_SCRUBBER; + patrol_scrub_params->requested_address_range[0] = params.addr_base; + patrol_scrub_params->requested_address_range[1] = params.addr_size; + + scrub_rate_to_set = FIELD_GET(RAS2_PATROL_SCRUB_RATE_IN_MASK, + patrol_scrub_params->scrub_params_in); + min_supp_scrub_rate = FIELD_GET(RAS2_PATROL_SCRUB_MIN_RATE_OUT_MASK, + patrol_scrub_params->scrub_params_out); + max_supp_scrub_rate = FIELD_GET(RAS2_PATROL_SCRUB_MAX_RATE_OUT_MASK, + patrol_scrub_params->scrub_params_out); + if (scrub_rate_to_set < min_supp_scrub_rate || + scrub_rate_to_set > max_supp_scrub_rate) { + dev_warn(ras2_ctx->dev, + "patrol scrub rate to set is out of the supported range\n"); + dev_warn(ras2_ctx->dev, + "min_supp_scrub_rate=%d max_supp_scrub_rate=%d\n", + min_supp_scrub_rate, max_supp_scrub_rate); + return -EINVAL; + } + } else { + patrol_scrub_params->patrol_scrub_command = RAS2_STOP_PATROL_SCRUBBER; + } + + /* send command for enable/disable HW patrol scrub */ + ret = ras2_send_pcc_cmd(ras2_ctx, RAS2_PCC_CMD_EXEC); + if (ret) { + pr_err("%s: failed to enable/disable the HW patrol scrub\n", __func__); + return ret; + } + + return 0; +} + +static int ras2_enable_background_scrub(struct ras2_context *ras2_ctx, bool enable) +{ + int ret; + struct acpi_ras2_shared_memory __iomem *generic_comm_base; + struct acpi_ras2_patrol_scrub_parameter __iomem *patrol_scrub_params; + + if (!ras2_ctx || !ras2_ctx->pcc_comm_addr) + return -EFAULT; + + generic_comm_base = ras2_ctx->pcc_comm_addr; + patrol_scrub_params = ras2_ctx->pcc_comm_addr + sizeof(*generic_comm_base); + + guard(spinlock_irqsave)(&ras2_ctx->spinlock); + generic_comm_base->set_capabilities[0] = RAS2_SUPPORT_HW_PARTOL_SCRUB; + patrol_scrub_params->header.type = RAS2_TYPE_PATROL_SCRUB; + patrol_scrub_params->patrol_scrub_command = RAS2_START_PATROL_SCRUBBER; + + patrol_scrub_params->scrub_params_in &= ~RAS2_PATROL_SCRUB_EN_BACKGROUND; + patrol_scrub_params->scrub_params_in |= FIELD_PREP(RAS2_PATROL_SCRUB_EN_BACKGROUND, + enable); + + /* send command for enable/disable HW patrol scrub */ + ret = ras2_send_pcc_cmd(ras2_ctx, RAS2_PCC_CMD_EXEC); + if (ret) { + dev_err(ras2_ctx->dev, + "%s: failed to enable/disable background patrol scrubbing\n", + __func__); + return ret; + } + + return 0; +} +static int ras2_set_patrol_scrub_params(struct ras2_context *ras2_ctx, + struct ras2_scrub_params *params, u8 param_type) +{ + struct acpi_ras2_shared_memory __iomem *generic_comm_base; + struct acpi_ras2_patrol_scrub_parameter __iomem *patrol_scrub_params; + + if (!ras2_ctx || !ras2_ctx->pcc_comm_addr) + return -EFAULT; + + generic_comm_base = ras2_ctx->pcc_comm_addr; + patrol_scrub_params = ras2_ctx->pcc_comm_addr + sizeof(*generic_comm_base); + + guard(spinlock_irqsave)(&ras2_ctx->spinlock); + patrol_scrub_params->header.type = RAS2_TYPE_PATROL_SCRUB; + if (param_type == RAS2_MEM_SCRUB_PARAM_ADDR_BASE && params->addr_base) { + patrol_scrub_params->requested_address_range[0] = params->addr_base; + } else if (param_type == 
RAS2_MEM_SCRUB_PARAM_ADDR_SIZE && params->addr_size) { + patrol_scrub_params->requested_address_range[1] = params->addr_size; + } else if (param_type == RAS2_MEM_SCRUB_PARAM_RATE) { + patrol_scrub_params->scrub_params_in &= ~RAS2_PATROL_SCRUB_RATE_IN_MASK; + patrol_scrub_params->scrub_params_in |= FIELD_PREP(RAS2_PATROL_SCRUB_RATE_IN_MASK, + params->rate); + } else { + dev_err(ras2_ctx->dev, "Invalid patrol scrub parameter to set\n"); + return -EINVAL; + } + + return 0; +} + +static const struct ras2_hw_scrub_ops ras2_hw_ops = { + .enable_scrub = ras2_enable_patrol_scrub, + .enable_background_scrub = ras2_enable_background_scrub, + .get_scrub_params = ras2_get_patrol_scrub_params, + .set_scrub_params = ras2_set_patrol_scrub_params, +}; + +static const struct scrub_ops ras2_scrub_ops = { + .is_visible = ras2_hw_scrub_is_visible, + .read = ras2_hw_scrub_read, + .write = ras2_hw_scrub_write, + .read_string = ras2_hw_scrub_read_strings, +}; + +static DEFINE_IDA(ras2_ida); + +static void devm_ras2_release(void *ctx) +{ + struct ras2_context *ras2_ctx = ctx; + + ida_free(&ras2_ida, ras2_ctx->id); + ras2_unregister_pcc_channel(ras2_ctx); +} + +static int ras2_probe(struct platform_device *pdev) +{ + int ret, id; + struct mbox_client *cl; + struct device *hw_scrub_dev; + struct ras2_context *ras2_ctx; + char scrub_name[RAS2_MAX_NAME_LENGTH]; + + ras2_ctx = devm_kzalloc(&pdev->dev, sizeof(*ras2_ctx), GFP_KERNEL); + if (!ras2_ctx) + return -ENOMEM; + + ras2_ctx->dev = &pdev->dev; + ras2_ctx->ops = &ras2_hw_ops; + spin_lock_init(&ras2_ctx->spinlock); + platform_set_drvdata(pdev, ras2_ctx); + + cl = &ras2_ctx->mbox_client; + /* Request mailbox channel */ + cl->dev = &pdev->dev; + cl->tx_done = ras2_tx_done; + cl->knows_txdone = true; + ras2_ctx->pcc_subspace_idx = *((int *)pdev->dev.platform_data); + dev_dbg(&pdev->dev, "pcc-subspace-id=%d\n", ras2_ctx->pcc_subspace_idx); + ret = ras2_register_pcc_channel(ras2_ctx); + if (ret < 0) + return ret; + + ret = devm_add_action_or_reset(&pdev->dev, devm_ras2_release, ras2_ctx); + if (ret < 0) + return ret; + + if (ras2_is_patrol_scrub_support(ras2_ctx)) { + id = ida_alloc(&ras2_ida, GFP_KERNEL); + if (id < 0) + return id; + ras2_ctx->id = id; + snprintf(scrub_name, sizeof(scrub_name), "%s%d", RAS2_SCRUB, id); + dev_set_name(&pdev->dev, RAS2_ID_FORMAT, id); + hw_scrub_dev = devm_scrub_device_register(&pdev->dev, scrub_name, + ras2_ctx, &ras2_scrub_ops, + 0, NULL); + if (PTR_ERR_OR_ZERO(hw_scrub_dev)) + return PTR_ERR_OR_ZERO(hw_scrub_dev); + } + ras2_ctx->scrub_dev = hw_scrub_dev; + + return 0; +} + +static const struct platform_device_id ras2_id_table[] = { + { .name = "ras2", }, + { } +}; +MODULE_DEVICE_TABLE(platform, ras2_id_table); + +static struct platform_driver ras2_driver = { + .probe = ras2_probe, + .driver = { + .name = "ras2", + .suppress_bind_attrs = true, + }, + .id_table = ras2_id_table, +}; +module_driver(ras2_driver, platform_driver_register, platform_driver_unregister); + +MODULE_DESCRIPTION("ras2 memory driver"); +MODULE_LICENSE("GPL"); diff --git a/drivers/memory/ras2_common.c b/drivers/memory/ras2_common.c new file mode 100644 index 000000000000..97e1852e9fd7 --- /dev/null +++ b/drivers/memory/ras2_common.c @@ -0,0 +1,282 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Common functions for memory RAS2 driver + * + * Copyright (c) 2024 HiSilicon Limited. 
+ * + * This driver implements call back functions for the scrub + * configure driver to configure the parameters of the hw patrol + * scrubbers in the system, which exposed via the ACPI AS2 + * table and PCC. + */ + +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +static int enable_write(struct ras2_context *ras2_ctx, long val) +{ + int ret; + bool enable = val; + + ret = ras2_ctx->ops->enable_scrub(ras2_ctx, enable); + if (ret) { + dev_err(ras2_ctx->dev, + "enable patrol scrub fail, enable=%d ret=%d\n", + enable, ret); + return ret; + } + + return 0; +} + +static int enable_background_scrub_write(struct ras2_context *ras2_ctx, long val) +{ + int ret; + bool enable = val; + + ret = ras2_ctx->ops->enable_background_scrub(ras2_ctx, enable); + if (ret) { + dev_err(ras2_ctx->dev, + "enable background patrol scrub fail, enable=%d ret=%d\n", + enable, ret); + return ret; + } + + return 0; +} + +static int addr_base_read(struct ras2_context *ras2_ctx, u64 *val) +{ + int ret; + struct ras2_scrub_params params; + + ret = ras2_ctx->ops->get_scrub_params(ras2_ctx, ¶ms); + if (ret) { + dev_err(ras2_ctx->dev, + "get patrol scrub params fail ret=%d\n", ret); + return ret; + } + *val = params.addr_base; + + return 0; +} + +static int addr_base_write(struct ras2_context *ras2_ctx, u64 val) +{ + int ret; + struct ras2_scrub_params params; + + params.addr_base = val; + ret = ras2_ctx->ops->set_scrub_params(ras2_ctx, ¶ms, + RAS2_MEM_SCRUB_PARAM_ADDR_BASE); + if (ret) { + dev_err(ras2_ctx->dev, + "set patrol scrub params for addr_base fail ret=%d\n", + ret); + return ret; + } + + return 0; +} + +static int addr_size_read(struct ras2_context *ras2_ctx, u64 *val) +{ + int ret; + struct ras2_scrub_params params; + + ret = ras2_ctx->ops->get_scrub_params(ras2_ctx, ¶ms); + if (ret) { + dev_err(ras2_ctx->dev, + "get patrol scrub params fail ret=%d\n", ret); + return ret; + } + *val = params.addr_size; + + return 0; +} + +static int addr_size_write(struct ras2_context *ras2_ctx, u64 val) +{ + int ret; + struct ras2_scrub_params params; + + params.addr_size = val; + ret = ras2_ctx->ops->set_scrub_params(ras2_ctx, ¶ms, + RAS2_MEM_SCRUB_PARAM_ADDR_SIZE); + if (ret) { + dev_err(ras2_ctx->dev, + "set patrol scrub params for addr_size fail ret=%d\n", + ret); + return ret; + } + + return 0; +} + +static int rate_read(struct ras2_context *ras2_ctx, u64 *val) +{ + int ret; + struct ras2_scrub_params params; + + ret = ras2_ctx->ops->get_scrub_params(ras2_ctx, ¶ms); + if (ret) { + dev_err(ras2_ctx->dev, "get patrol scrub params fail ret=%d\n", + ret); + return ret; + } + *val = params.rate; + + return 0; +} + +static int rate_write(struct ras2_context *ras2_ctx, long val) +{ + int ret; + struct ras2_scrub_params params; + + params.rate = val; + ret = ras2_ctx->ops->set_scrub_params(ras2_ctx, ¶ms, + RAS2_MEM_SCRUB_PARAM_RATE); + if (ret) { + dev_err(ras2_ctx->dev, + "set patrol scrub params for rate fail ret=%d\n", ret); + return ret; + } + + return 0; +} + +static int rate_available_read(struct ras2_context *ras2_ctx, char *buf) +{ + int ret; + struct ras2_scrub_params params; + + ret = ras2_ctx->ops->get_scrub_params(ras2_ctx, ¶ms); + if (ret) { + dev_err(ras2_ctx->dev, + "get patrol scrub params fail ret=%d\n", ret); + return ret; + } + + sprintf(buf, "%s\n", params.rate_avail); + + return 0; +} + +/** + * ras2_hw_scrub_is_visible() - Callback to return attribute visibility + * @drv_data: Pointer to driver-private data structure passed + * as argument to 
devm_scrub_device_register(). + * @attr_id: Scrub attribute + * @mode: attribute's mode + * @region_id: ID of the memory region + * + * Returns: 0 on success, an error otherwise + */ +umode_t ras2_hw_scrub_is_visible(struct device *dev, u32 attr_id, + umode_t mode, int region_id) +{ + switch (attr_id) { + case scrub_rate_available: + case scrub_enable: + case scrub_enable_background_scrub: + case scrub_addr_base: + case scrub_addr_size: + case scrub_rate: + return mode; + default: + return 0; + } +} + +/** + * ras2_hw_scrub_read() - Read callback for data attributes + * @device: Pointer to scrub device + * @attr_id: Scrub attribute + * @region_id: ID of the memory region + * @val: Pointer to the returned data + * + * Returns: 0 on success, an error otherwise + */ +int ras2_hw_scrub_read(struct device *device, u32 attr_id, + int region_id, u64 *val) +{ + struct ras2_context *ras2_ctx; + + ras2_ctx = dev_get_drvdata(device); + + switch (attr_id) { + case scrub_addr_base: + return addr_base_read(ras2_ctx, val); + case scrub_addr_size: + return addr_size_read(ras2_ctx, val); + case scrub_rate: + return rate_read(ras2_ctx, val); + default: + return -EOPNOTSUPP; + } +} + +/** + * ras2_hw_scrub_write() - Write callback for data attributes + * @device: Pointer to scrub device + * @attr_id: Scrub attribute + * @region_id: ID of the memory region + * @val: Value to write + * + * Returns: 0 on success, an error otherwise + */ +int ras2_hw_scrub_write(struct device *device, u32 attr_id, + int region_id, u64 val) +{ + struct ras2_context *ras2_ctx; + + ras2_ctx = dev_get_drvdata(device); + + switch (attr_id) { + case scrub_addr_base: + return addr_base_write(ras2_ctx, val); + case scrub_addr_size: + return addr_size_write(ras2_ctx, val); + case scrub_enable: + return enable_write(ras2_ctx, val); + case scrub_enable_background_scrub: + return enable_background_scrub_write(ras2_ctx, val); + case scrub_rate: + return rate_write(ras2_ctx, val); + default: + return -EOPNOTSUPP; + } +} + +/** + * ras2_hw_scrub_read_strings() - Read callback for string attributes + * @device: Pointer to scrub device + * @attr_id: Scrub attribute + * @region_id: ID of the memory region + * @buf: Pointer to the buffer for copying returned string + * + * Returns: 0 on success, an error otherwise + */ +int ras2_hw_scrub_read_strings(struct device *dev, u32 attr_id, + int region_id, char *buf) +{ + struct ras2_context *ras2_ctx; + + ras2_ctx = dev_get_drvdata(dev); + + switch (attr_id) { + case scrub_rate_available: + return rate_available_read(ras2_ctx, buf); + default: + return -EOPNOTSUPP; + } +} diff --git a/include/memory/ras2.h b/include/memory/ras2.h new file mode 100755 index 000000000000..3db1dce5dd34 --- /dev/null +++ b/include/memory/ras2.h @@ -0,0 +1,88 @@ +/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 */ +/* + * Memory RAS2 driver header file + * + * Copyright (c) 2024 HiSilicon Limited + */ + +#ifndef _RAS2_H +#define _RAS2_H + +#include + +#define RAS2_MAX_NAME_LENGTH 64 +#define RAS2_MAX_RATE_RANGE_LENGTH 64 + +/* + * Data structures RAS2 + */ + +/** + * struct ras2_scrub_params- RAS2 scrub parameter data structure. + * @addr_base: [IN] Base address of the address range to be patrol scrubbed. + * [OUT] Base address of the actual address range. + * @addr_size: [IN] Size of the address range to be patrol scrubbed. + * [OUT] Size of the actual address range. + * @flags: [OUT] The platform returns this value in response to + * GET_PATROL_PARAMETERS. 
+ * For RAS2: + * Bit [0]: Will be set if memory scrubber is already + * running for address range specified in “Actual Address Range”. + * @rate: [IN] Requested patrol scrub rate. + * [OUT] Current patrol scrub rate. + * @rate_avail:[OUT] Supported patrol rates. + */ +struct ras2_scrub_params { + u64 addr_base; + u64 addr_size; + u16 flags; + u32 rate; + char rate_avail[RAS2_MAX_RATE_RANGE_LENGTH]; +}; + +enum { + RAS2_MEM_SCRUB_PARAM_ADDR_BASE = 0, + RAS2_MEM_SCRUB_PARAM_ADDR_SIZE, + RAS2_MEM_SCRUB_PARAM_RATE, +}; + +/** + * struct ras2_hw_scrub_ops - ras2 hw scrub device operations + * @enable_scrub: Function to enable/disable RAS2 scrubber. + * Parameters are: + * @ras2_ctx: Pointer to RAS2 context structure. + * @enable: enable/disable RAS2 patrol scrubber. + * The function returns 0 on success or a negative error number. + * @enable_background_scrub: Function to enable/disable background scrubbing. + * Parameters are: + * @ras2_ctx: Pointer to RAS2 context structure. + * @enable: enable/disable background patrol scrubbing. + * The function returns 0 on success or a negative error number. + * @get_scrub_params: Read scrubber parameters. Mandatory + * Parameters are: + * @ras2_ctx: Pointer to RAS2 context structure. + * @params: Pointer to scrub params data structure. + * The function returns 0 on success or a negative error number. + * @set_scrub_params: Set scrubber parameters. Mandatory. + * Parameters are: + * @ras2_ctx: Pointer to RAS2 context structure. + * @params: Pointer to scrub params data structure. + * @param_type: Scrub parameter type to set. + * The function returns 0 on success or a negative error number. + */ +struct ras2_hw_scrub_ops { + int (*enable_scrub)(struct ras2_context *ras2_ctx, bool enable); + int (*enable_background_scrub)(struct ras2_context *ras2_ctx, bool enable); + int (*get_scrub_params)(struct ras2_context *ras2_ctx, + struct ras2_scrub_params *params); + int (*set_scrub_params)(struct ras2_context *ras2_ctx, + struct ras2_scrub_params *params, u8 param_type); +}; + +umode_t ras2_hw_scrub_is_visible(struct device *dev, u32 attr_id, + umode_t mode, int region_id); +int ras2_hw_scrub_read(struct device *dev, u32 attr_id, int region_id, u64 *val); +int ras2_hw_scrub_write(struct device *dev, u32 attr_id, int region_id, u64 val); +int ras2_hw_scrub_read_strings(struct device *dev, u32 attr_id, + int region_id, char *buf); +#endif /* _RAS2_H */
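As an aside on the register layout used by ras2.c above: the requested scrub rate travels in bits [15:8] of scrub_params_in with the background-enable flag in bit [0], while the platform reports the current, minimum and maximum rates in bits [7:0], [15:8] and [23:16] of scrub_params_out. The following is a minimal user-space sketch of that packing and unpacking; it is illustrative only (the GENMASK32/FIELD_* helpers stand in for the kernel's GENMASK()/FIELD_PREP()/FIELD_GET(), and the sample scrub_params_out value is invented).

/* Minimal user-space sketch of the RAS2 patrol scrub parameter encoding.
 * The mask layout mirrors the driver's defines; the sample value is made up.
 */
#include <stdint.h>
#include <stdio.h>

#define GENMASK32(h, l)    (((~0u) << (l)) & (~0u >> (31 - (h))))
#define FIELD_GET32(m, v)  (((v) & (m)) >> __builtin_ctz(m))
#define FIELD_PREP32(m, v) (((uint32_t)(v) << __builtin_ctz(m)) & (m))

#define SCRUB_EN_BACKGROUND  (1u << 0)          /* scrub_params_in bit [0]  */
#define SCRUB_RATE_IN_MASK   GENMASK32(15, 8)   /* requested rate           */
#define SCRUB_RATE_OUT_MASK  GENMASK32(7, 0)    /* current rate             */
#define SCRUB_MIN_RATE_MASK  GENMASK32(15, 8)   /* minimum supported rate   */
#define SCRUB_MAX_RATE_MASK  GENMASK32(23, 16)  /* maximum supported rate   */

int main(void)
{
        uint32_t params_in = 0;
        uint32_t params_out = 0x00180C10;       /* made-up platform reply */

        /* Request a scrub rate of 12 and enable background scrubbing. */
        params_in &= ~SCRUB_RATE_IN_MASK;
        params_in |= FIELD_PREP32(SCRUB_RATE_IN_MASK, 12) | SCRUB_EN_BACKGROUND;

        printf("scrub_params_in = 0x%08x\n", params_in);
        printf("current rate = %u, min = %u, max = %u\n",
               FIELD_GET32(SCRUB_RATE_OUT_MASK, params_out),
               FIELD_GET32(SCRUB_MIN_RATE_MASK, params_out),
               FIELD_GET32(SCRUB_MAX_RATE_MASK, params_out));
        return 0;
}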
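The MPAR throttling comment in ras2_send_pcc_cmd() boils down to: ring the PCC doorbell at most pcc_mpar times in any 60 second window, and reject further requests until the window expires. A rough user-space model of that bookkeeping, assuming a coarse wall-clock instead of ktime, with all names invented for illustration:

/* Rough user-space model of the MPAR throttling around the PCC doorbell:
 * allow at most 'mpar' commands per 60 s window, reject the rest.
 * Names and clock source are illustrative, not kernel API.
 */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

struct mpar_limiter {
        unsigned int mpar;      /* max commands per window; 0 = unlimited */
        unsigned int left;      /* commands left in the current window    */
        time_t window_start;
};

static bool mpar_try_send(struct mpar_limiter *l)
{
        time_t now = time(NULL);

        if (!l->mpar)
                return true;            /* no MPAR limit advertised */

        if (l->left == 0) {
                if (now - l->window_start < 60)
                        return false;   /* still throttled */
                l->window_start = now;  /* start a new 60 s window */
                l->left = l->mpar;
        }
        l->left--;
        return true;
}

int main(void)
{
        struct mpar_limiter l = { .mpar = 2, .left = 0, .window_start = 0 };

        for (int i = 0; i < 4; i++)
                printf("cmd %d: %s\n", i, mpar_try_send(&l) ? "sent" : "throttled");
        return 0;
}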
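Similarly, the command-completion deadline is built from RAS2_NUM_RETRIES times the nominal PCCT latency, and the status word in the shared communication region is polled for the command-complete bit until that deadline passes. A self-contained sketch of that pattern, with a fake status variable standing in for the PCC shared memory and all values invented:

/* Self-contained sketch of "poll for CMD_COMPLETE until a deadline built from
 * retries * nominal latency". fake_status stands in for the status word in
 * the PCC shared memory region; everything here is illustrative.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define CMD_COMPLETE_BIT        0x1
#define NUM_RETRIES             600
#define NOMINAL_LATENCY_US      100     /* example value; the driver reads this from the PCCT */

static volatile uint16_t fake_status;

static uint64_t now_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

static bool wait_cmd_complete(void)
{
        uint64_t deadline = now_ns() + (uint64_t)NUM_RETRIES * NOMINAL_LATENCY_US * 1000;

        while (now_ns() < deadline) {
                if (fake_status & CMD_COMPLETE_BIT)
                        return true;
        }
        return false;
}

int main(void)
{
        fake_status = CMD_COMPLETE_BIT; /* pretend the platform already replied */
        printf("command %s\n", wait_cmd_complete() ? "completed" : "timed out");
        return 0;
}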