Message ID | 20221212070627.1372402-3-ira.weiny@intel.com |
---|---|
State | New |
Headers | From: ira.weiny@intel.com; To: Dan Williams <dan.j.williams@intel.com>; Subject: [PATCH V4 2/9] cxl/mem: Read, trace, and clear events on driver load; Date: Sun, 11 Dec 2022 23:06:20 -0800 |
Series | CXL: Process event logs |
Commit Message
Ira Weiny
Dec. 12, 2022, 7:06 a.m. UTC
From: Ira Weiny <ira.weiny@intel.com>

CXL devices have multiple event logs which can be queried for CXL event records. Devices are required to support the storage of at least one event record in each event log type.

Devices track event log overflow by incrementing a counter and tracking the time of the first and last overflow event seen.

Software queries events via the Get Event Record mailbox command (CXL rev 3.0 section 8.2.9.2.2) and clears events via the Clear Event Records mailbox command (CXL rev 3.0 section 8.2.9.2.3).

If the result of negotiating CXL Error Reporting Control is OS control, read and clear all event logs on driver load. This ensures the driver starts from a clean slate of events.

The status register is not used because a device may continue to trigger events, and the only requirement is to empty the log at least once. This allows for the required transition from empty to non-empty for interrupt generation. Handling of interrupts is in a follow-on patch.

The device can return up to 1MB worth of event records per query. Allocate a shared buffer large enough for the maximum number of records, based on the mailbox payload size.

This patch traces a raw event record and leaves specific event record type tracing to subsequent patches. Macros are created to aid in tracing the common CXL event header fields.

Each record is cleared explicitly. A clear-all bit is specified but is only valid when the log overflows.
Signed-off-by: Ira Weiny <ira.weiny@intel.com>

---
Changes from V3:
Dan
	Split off _OSC pcie bits
	Use existing style for host bridge flag in that patch
	Clean up event processing loop
	Use dev_err_ratelimited()
	Clean up version change log
	Delete 'EVENT LOG OVERFLOW'
	Remove cxl_clear_event_logs()
	Add comment for native cxl control
	Fail driver load on event buf allocation failure
	Comment why events are not processed without _OSC flag
---
 drivers/cxl/core/mbox.c  | 136 +++++++++++++++++++++++++++++++++++++++
 drivers/cxl/core/trace.h | 120 ++++++++++++++++++++++++++++++++++
 drivers/cxl/cxl.h        |  12 ++++
 drivers/cxl/cxlmem.h     |  84 ++++++++++++++++++++++++
 drivers/cxl/pci.c        |  40 ++++++++++++
 5 files changed, 392 insertions(+)
Comments
On Sun, Dec 11, 2022 at 11:06:20PM -0800, ira.weiny@intel.com wrote: > From: Ira Weiny <ira.weiny@intel.com> > > CXL devices have multiple event logs which can be queried for CXL event > records. Devices are required to support the storage of at least one > event record in each event log type. > > Devices track event log overflow by incrementing a counter and tracking > the time of the first and last overflow event seen. > > Software queries events via the Get Event Record mailbox command; CXL > rev 3.0 section 8.2.9.2.2 and clears events via CXL rev 3.0 section > 8.2.9.2.3 Clear Event Records mailbox command. > > If the result of negotiating CXL Error Reporting Control is OS control, > read and clear all event logs on driver load. > > Ensure a clean slate of events by reading and clearing the events on > driver load. > > The status register is not used because a device may continue to trigger > events and the only requirement is to empty the log at least once. This > allows for the required transition from empty to non-empty for interrupt > generation. Handling of interrupts is in a follow on patch. > > The device can return up to 1MB worth of event records per query. > Allocate a shared large buffer to handle the max number of records based > on the mailbox payload size. > > This patch traces a raw event record and leaves specific event record > type tracing to subsequent patches. Macros are created to aid in > tracing the common CXL Event header fields. > > Each record is cleared explicitly. A clear all bit is specified but is > only valid when the log overflows. 
> > Signed-off-by: Ira Weiny <ira.weiny@intel.com> > > --- > Changes from V3: > Dan > Split off _OSC pcie bits > Use existing style for host bridge flag in that > patch > Clean up event processing loop > Use dev_err_ratelimited() > Clean up version change log > Delete 'EVENT LOG OVERFLOW' > Remove cxl_clear_event_logs() > Add comment for native cxl control > Fail driver load on event buf allocation failure > Comment why events are not processed without _OSC flag > --- > drivers/cxl/core/mbox.c | 136 +++++++++++++++++++++++++++++++++++++++ > drivers/cxl/core/trace.h | 120 ++++++++++++++++++++++++++++++++++ > drivers/cxl/cxl.h | 12 ++++ > drivers/cxl/cxlmem.h | 84 ++++++++++++++++++++++++ > drivers/cxl/pci.c | 40 ++++++++++++ > 5 files changed, 392 insertions(+) > > diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c > index b03fba212799..9fb327370e08 100644 > --- a/drivers/cxl/core/mbox.c > +++ b/drivers/cxl/core/mbox.c > @@ -8,6 +8,7 @@ > #include <cxl.h> > > #include "core.h" > +#include "trace.h" > > static bool cxl_raw_allow_all; > > @@ -717,6 +718,140 @@ int cxl_enumerate_cmds(struct cxl_dev_state *cxlds) > } > EXPORT_SYMBOL_NS_GPL(cxl_enumerate_cmds, CXL); > > +static int cxl_clear_event_record(struct cxl_dev_state *cxlds, > + enum cxl_event_log_type log, > + struct cxl_get_event_payload *get_pl) > +{ > + struct cxl_mbox_clear_event_payload payload = { > + .event_log = log, > + }; > + u16 total = le16_to_cpu(get_pl->record_count); > + u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES; > + size_t pl_size = sizeof(payload); > + struct cxl_mbox_cmd mbox_cmd; > + u16 cnt; > + int rc; > + int i; > + > + /* Payload size may limit the max handles */ > + if (pl_size > cxlds->payload_size) { > + max_handles = CXL_CLEAR_EVENT_LIMIT_HANDLES(cxlds->payload_size); > + pl_size = cxlds->payload_size; > + } > + > + mbox_cmd = (struct cxl_mbox_cmd) { > + .opcode = CXL_MBOX_OP_CLEAR_EVENT_RECORD, > + .payload_in = &payload, > + .size_in = pl_size, > + }; > + > + /* > + * 
Clear Event Records uses u8 for the handle cnt while Get Event > + * Record can return up to 0xffff records. > + */ > + i = 0; > + for (cnt = 0; cnt < total; cnt++) { > + payload.handle[i++] = get_pl->records[cnt].hdr.handle; > + dev_dbg(cxlds->dev, "Event log '%d': Clearing %u\n", > + log, le16_to_cpu(payload.handle[i])); > + > + if (i == max_handles) { > + payload.nr_recs = i; > + rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); > + if (rc) > + return rc; > + i = 0; > + } > + } > + > + /* Clear what is left if any */ > + if (i) { > + payload.nr_recs = i; > + rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); > + if (rc) > + return rc; > + } > + > + return 0; > +} > + > +static void cxl_mem_get_records_log(struct cxl_dev_state *cxlds, > + enum cxl_event_log_type type) > +{ > + struct cxl_get_event_payload *payload; > + struct cxl_mbox_cmd mbox_cmd; > + u8 log_type = type; > + u16 nr_rec; > + > + mutex_lock(&cxlds->event.log_lock); > + payload = cxlds->event.buf; > + > + mbox_cmd = (struct cxl_mbox_cmd) { > + .opcode = CXL_MBOX_OP_GET_EVENT_RECORD, > + .payload_in = &log_type, > + .size_in = sizeof(log_type), > + .payload_out = payload, > + .size_out = cxlds->payload_size, > + .min_out = struct_size(payload, records, 0), > + }; > + > + do { > + int rc, i; > + > + rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); > + if (rc) { > + dev_err_ratelimited(cxlds->dev, "Event log '%d': Failed to query event records : %d", > + type, rc); > + break; > + } > + > + nr_rec = le16_to_cpu(payload->record_count); > + if (!nr_rec) > + break; > + > + for (i = 0; i < nr_rec; i++) > + trace_cxl_generic_event(cxlds->dev, type, > + &payload->records[i]); > + > + if (payload->flags & CXL_GET_EVENT_FLAG_OVERFLOW) > + trace_cxl_overflow(cxlds->dev, type, payload); > + > + rc = cxl_clear_event_record(cxlds, type, payload); > + if (rc) { > + dev_err_ratelimited(cxlds->dev, "Event log '%d': Failed to clear events : %d", > + type, rc); > + break; > + } > + } while (nr_rec); > + > + 
mutex_unlock(&cxlds->event.log_lock); > +} > + > +/** > + * cxl_mem_get_event_records - Get Event Records from the device > + * @cxlds: The device data for the operation > + * > + * Retrieve all event records available on the device, report them as trace > + * events, and clear them. > + * > + * See CXL rev 3.0 @8.2.9.2.2 Get Event Records > + * See CXL rev 3.0 @8.2.9.2.3 Clear Event Records > + */ > +void cxl_mem_get_event_records(struct cxl_dev_state *cxlds, u32 status) > +{ > + dev_dbg(cxlds->dev, "Reading event logs: %x\n", status); > + > + if (status & CXLDEV_EVENT_STATUS_FATAL) > + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_FATAL); > + if (status & CXLDEV_EVENT_STATUS_FAIL) > + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_FAIL); > + if (status & CXLDEV_EVENT_STATUS_WARN) > + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_WARN); > + if (status & CXLDEV_EVENT_STATUS_INFO) > + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_INFO); > +} > +EXPORT_SYMBOL_NS_GPL(cxl_mem_get_event_records, CXL); > + > /** > * cxl_mem_get_partition_info - Get partition info > * @cxlds: The device data for the operation > @@ -868,6 +1003,7 @@ struct cxl_dev_state *cxl_dev_state_create(struct device *dev) > } > > mutex_init(&cxlds->mbox_mutex); > + mutex_init(&cxlds->event.log_lock); > cxlds->dev = dev; > > return cxlds; > diff --git a/drivers/cxl/core/trace.h b/drivers/cxl/core/trace.h > index 20ca2fe2ca8e..6898212fcb47 100644 > --- a/drivers/cxl/core/trace.h > +++ b/drivers/cxl/core/trace.h > @@ -6,7 +6,9 @@ > #if !defined(_CXL_EVENTS_H) || defined(TRACE_HEADER_MULTI_READ) > #define _CXL_EVENTS_H > > +#include <asm-generic/unaligned.h> > #include <cxl.h> > +#include <cxlmem.h> > #include <linux/tracepoint.h> > > #define CXL_RAS_UC_CACHE_DATA_PARITY BIT(0) > @@ -103,6 +105,124 @@ TRACE_EVENT(cxl_aer_correctable_error, > ) > ); > > +#include <linux/tracepoint.h> > + > +#define cxl_event_log_type_str(type) \ > + __print_symbolic(type, \ > + { CXL_EVENT_TYPE_INFO, "Informational" }, \ > 
+ { CXL_EVENT_TYPE_WARN, "Warning" }, \ > + { CXL_EVENT_TYPE_FAIL, "Failure" }, \ > + { CXL_EVENT_TYPE_FATAL, "Fatal" }) > + > +TRACE_EVENT(cxl_overflow, > + > + TP_PROTO(const struct device *dev, enum cxl_event_log_type log, > + struct cxl_get_event_payload *payload), > + > + TP_ARGS(dev, log, payload), > + > + TP_STRUCT__entry( > + __string(dev_name, dev_name(dev)) > + __field(int, log) > + __field(u64, first_ts) > + __field(u64, last_ts) > + __field(u16, count) > + ), > + > + TP_fast_assign( > + __assign_str(dev_name, dev_name(dev)); > + __entry->log = log; > + __entry->count = le16_to_cpu(payload->overflow_err_count); > + __entry->first_ts = le64_to_cpu(payload->first_overflow_timestamp); > + __entry->last_ts = le64_to_cpu(payload->last_overflow_timestamp); > + ), > + > + TP_printk("%s: log=%s : %u records from %llu to %llu", > + __get_str(dev_name), cxl_event_log_type_str(__entry->log), > + __entry->count, __entry->first_ts, __entry->last_ts) > + > +); > + > +/* > + * Common Event Record Format > + * CXL 3.0 section 8.2.9.2.1; Table 8-42 > + */ > +#define CXL_EVENT_RECORD_FLAG_PERMANENT BIT(2) > +#define CXL_EVENT_RECORD_FLAG_MAINT_NEEDED BIT(3) > +#define CXL_EVENT_RECORD_FLAG_PERF_DEGRADED BIT(4) > +#define CXL_EVENT_RECORD_FLAG_HW_REPLACE BIT(5) > +#define show_hdr_flags(flags) __print_flags(flags, " | ", \ > + { CXL_EVENT_RECORD_FLAG_PERMANENT, "PERMANENT_CONDITION" }, \ > + { CXL_EVENT_RECORD_FLAG_MAINT_NEEDED, "MAINTENANCE_NEEDED" }, \ > + { CXL_EVENT_RECORD_FLAG_PERF_DEGRADED, "PERFORMANCE_DEGRADED" }, \ > + { CXL_EVENT_RECORD_FLAG_HW_REPLACE, "HARDWARE_REPLACEMENT_NEEDED" } \ > +) > + > +/* > + * Define macros for the common header of each CXL event. 
> + * > + * Tracepoints using these macros must do 3 things: > + * > + * 1) Add CXL_EVT_TP_entry to TP_STRUCT__entry > + * 2) Use CXL_EVT_TP_fast_assign within TP_fast_assign; > + * pass the dev, log, and CXL event header > + * 3) Use CXL_EVT_TP_printk() instead of TP_printk() > + * > + * See the generic_event tracepoint as an example. > + */ > +#define CXL_EVT_TP_entry \ > + __string(dev_name, dev_name(dev)) \ > + __field(int, log) \ > + __field_struct(uuid_t, hdr_uuid) \ > + __field(u32, hdr_flags) \ > + __field(u16, hdr_handle) \ > + __field(u16, hdr_related_handle) \ > + __field(u64, hdr_timestamp) \ > + __field(u8, hdr_length) \ > + __field(u8, hdr_maint_op_class) > + > +#define CXL_EVT_TP_fast_assign(dev, l, hdr) \ > + __assign_str(dev_name, dev_name(dev)); \ > + __entry->log = (l); \ > + memcpy(&__entry->hdr_uuid, &(hdr).id, sizeof(uuid_t)); \ > + __entry->hdr_length = (hdr).length; \ > + __entry->hdr_flags = get_unaligned_le24((hdr).flags); \ > + __entry->hdr_handle = le16_to_cpu((hdr).handle); \ > + __entry->hdr_related_handle = le16_to_cpu((hdr).related_handle); \ > + __entry->hdr_timestamp = le64_to_cpu((hdr).timestamp); \ > + __entry->hdr_maint_op_class = (hdr).maint_op_class > + > +#define CXL_EVT_TP_printk(fmt, ...) 
\ > + TP_printk("%s log=%s : time=%llu uuid=%pUb len=%d flags='%s' " \ > + "handle=%x related_handle=%x maint_op_class=%u" \ > + " : " fmt, \ > + __get_str(dev_name), cxl_event_log_type_str(__entry->log), \ > + __entry->hdr_timestamp, &__entry->hdr_uuid, __entry->hdr_length,\ > + show_hdr_flags(__entry->hdr_flags), __entry->hdr_handle, \ > + __entry->hdr_related_handle, __entry->hdr_maint_op_class, \ > + ##__VA_ARGS__) > + > +TRACE_EVENT(cxl_generic_event, > + > + TP_PROTO(const struct device *dev, enum cxl_event_log_type log, > + struct cxl_event_record_raw *rec), > + > + TP_ARGS(dev, log, rec), > + > + TP_STRUCT__entry( > + CXL_EVT_TP_entry > + __array(u8, data, CXL_EVENT_RECORD_DATA_LENGTH) > + ), > + > + TP_fast_assign( > + CXL_EVT_TP_fast_assign(dev, log, rec->hdr); > + memcpy(__entry->data, &rec->data, CXL_EVENT_RECORD_DATA_LENGTH); > + ), > + > + CXL_EVT_TP_printk("%s", > + __print_hex(__entry->data, CXL_EVENT_RECORD_DATA_LENGTH)) > +); > + > #endif /* _CXL_EVENTS_H */ > > #define TRACE_INCLUDE_FILE trace > diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h > index aa3af3bb73b2..5974d1082210 100644 > --- a/drivers/cxl/cxl.h > +++ b/drivers/cxl/cxl.h > @@ -156,6 +156,18 @@ static inline int ways_to_eiw(unsigned int ways, u8 *eiw) > #define CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX 0x3 > #define CXLDEV_CAP_CAP_ID_MEMDEV 0x4000 > > +/* CXL 3.0 8.2.8.3.1 Event Status Register */ > +#define CXLDEV_DEV_EVENT_STATUS_OFFSET 0x00 > +#define CXLDEV_EVENT_STATUS_INFO BIT(0) > +#define CXLDEV_EVENT_STATUS_WARN BIT(1) > +#define CXLDEV_EVENT_STATUS_FAIL BIT(2) > +#define CXLDEV_EVENT_STATUS_FATAL BIT(3) > + > +#define CXLDEV_EVENT_STATUS_ALL (CXLDEV_EVENT_STATUS_INFO | \ > + CXLDEV_EVENT_STATUS_WARN | \ > + CXLDEV_EVENT_STATUS_FAIL | \ > + CXLDEV_EVENT_STATUS_FATAL) > + > /* CXL 2.0 8.2.8.4 Mailbox Registers */ > #define CXLDEV_MBOX_CAPS_OFFSET 0x00 > #define CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK GENMASK(4, 0) > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h > index 
ab138004f644..dd9aa3dd738e 100644 > --- a/drivers/cxl/cxlmem.h > +++ b/drivers/cxl/cxlmem.h > @@ -4,6 +4,7 @@ > #define __CXL_MEM_H__ > #include <uapi/linux/cxl_mem.h> > #include <linux/cdev.h> > +#include <linux/uuid.h> > #include "cxl.h" > > /* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */ > @@ -193,6 +194,17 @@ struct cxl_endpoint_dvsec_info { > struct range dvsec_range[2]; > }; > > +/** > + * struct cxl_event_state - Event log driver state > + * > + * @event_buf: Buffer to receive event data > + * @event_log_lock: Serialize event_buf and log use > + */ > +struct cxl_event_state { > + struct cxl_get_event_payload *buf; > + struct mutex log_lock; > +}; > + > /** > * struct cxl_dev_state - The driver device state > * > @@ -266,12 +278,16 @@ struct cxl_dev_state { > > struct xarray doe_mbs; > > + struct cxl_event_state event; > + > int (*mbox_send)(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd); > }; > > enum cxl_opcode { > CXL_MBOX_OP_INVALID = 0x0000, > CXL_MBOX_OP_RAW = CXL_MBOX_OP_INVALID, > + CXL_MBOX_OP_GET_EVENT_RECORD = 0x0100, > + CXL_MBOX_OP_CLEAR_EVENT_RECORD = 0x0101, > CXL_MBOX_OP_GET_FW_INFO = 0x0200, > CXL_MBOX_OP_ACTIVATE_FW = 0x0202, > CXL_MBOX_OP_GET_SUPPORTED_LOGS = 0x0400, > @@ -347,6 +363,73 @@ struct cxl_mbox_identify { > u8 qos_telemetry_caps; > } __packed; > > +/* > + * Common Event Record Format > + * CXL rev 3.0 section 8.2.9.2.1; Table 8-42 > + */ > +struct cxl_event_record_hdr { > + uuid_t id; > + u8 length; > + u8 flags[3]; > + __le16 handle; > + __le16 related_handle; > + __le64 timestamp; > + u8 maint_op_class; > + u8 reserved[15]; > +} __packed; > + > +#define CXL_EVENT_RECORD_DATA_LENGTH 0x50 > +struct cxl_event_record_raw { > + struct cxl_event_record_hdr hdr; > + u8 data[CXL_EVENT_RECORD_DATA_LENGTH]; > +} __packed; > + > +/* > + * Get Event Records output payload > + * CXL rev 3.0 section 8.2.9.2.2; Table 8-50 > + */ > +#define CXL_GET_EVENT_FLAG_OVERFLOW BIT(0) > +#define CXL_GET_EVENT_FLAG_MORE_RECORDS BIT(1) I 
don't see any code that consumes this MORE flag; am I missing something? The device shall set this flag when a single output payload cannot fit all of the records.

> +struct cxl_get_event_payload { > + u8 flags; > + u8 reserved1; > + __le16 overflow_err_count; > + __le64 first_overflow_timestamp; > + __le64 last_overflow_timestamp; > + __le16 record_count; > + u8 reserved2[10]; > + struct cxl_event_record_raw records[]; > +} __packed; > + > +/* > + * CXL rev 3.0 section 8.2.9.2.2; Table 8-49 > + */ > +enum cxl_event_log_type { > + CXL_EVENT_TYPE_INFO = 0x00, > + CXL_EVENT_TYPE_WARN, > + CXL_EVENT_TYPE_FAIL, > + CXL_EVENT_TYPE_FATAL, > + CXL_EVENT_TYPE_MAX > +}; > + > +/* > + * Clear Event Records input payload > + * CXL rev 3.0 section 8.2.9.2.3; Table 8-51 > + */ > +#define CXL_CLEAR_EVENT_MAX_HANDLES (0xff) > +struct cxl_mbox_clear_event_payload { > + u8 event_log; /* enum cxl_event_log_type */ > + u8 clear_flags; > + u8 nr_recs; > + u8 reserved[3]; > + __le16 handle[CXL_CLEAR_EVENT_MAX_HANDLES]; > +} __packed; > +#define CXL_CLEAR_EVENT_LIMIT_HANDLES(payload_size) \ > + (((payload_size) - \ > + (sizeof(struct cxl_mbox_clear_event_payload) - \ > + (sizeof(__le16) * CXL_CLEAR_EVENT_MAX_HANDLES))) / \ > + sizeof(__le16)) > + > struct cxl_mbox_get_partition_info { > __le64 active_volatile_cap; > __le64 active_persistent_cap; > @@ -441,6 +524,7 @@ int cxl_mem_create_range_info(struct cxl_dev_state *cxlds); > struct cxl_dev_state *cxl_dev_state_create(struct device *dev); > void set_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds); > void clear_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds); > +void cxl_mem_get_event_records(struct cxl_dev_state *cxlds, u32 status); > #ifdef CONFIG_CXL_SUSPEND > void cxl_mem_active_inc(void); > void cxl_mem_active_dec(void); > diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c > index 3a66aadb4df0..a2d8382bc593 100644 > --- a/drivers/cxl/pci.c > +++ b/drivers/cxl/pci.c > @@ -417,8 +417,37 @@ static
void disable_aer(void *pdev) > pci_disable_pcie_error_reporting(pdev); > } > > +static void cxl_mem_free_event_buffer(void *buf) > +{ > + kvfree(buf); > +} > + > +/* > + * There is a single buffer for reading event logs from the mailbox. All logs > + * share this buffer protected by the cxlds->event_log_lock. > + */ > +static int cxl_mem_alloc_event_buf(struct cxl_dev_state *cxlds) > +{ > + struct cxl_get_event_payload *buf; > + > + dev_dbg(cxlds->dev, "Allocating event buffer size %zu\n", > + cxlds->payload_size); > + > + buf = kvmalloc(cxlds->payload_size, GFP_KERNEL); > + if (!buf) > + return -ENOMEM; > + > + if (devm_add_action_or_reset(cxlds->dev, cxl_mem_free_event_buffer, > + buf)) > + return -ENOMEM; > + > + cxlds->event.buf = buf; > + return 0; > +} > + > static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) > { > + struct pci_host_bridge *host_bridge = pci_find_host_bridge(pdev->bus); > struct cxl_register_map map; > struct cxl_memdev *cxlmd; > struct cxl_dev_state *cxlds; > @@ -494,6 +523,17 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) > if (IS_ERR(cxlmd)) > return PTR_ERR(cxlmd); > > + rc = cxl_mem_alloc_event_buf(cxlds); > + if (rc) > + return rc; > + > + /* > + * When BIOS maintains CXL error reporting control, it will process > + * event records. Only one agent can do so. > + */ > + if (host_bridge->native_cxl_error) > + cxl_mem_get_event_records(cxlds, CXLDEV_EVENT_STATUS_ALL); > + > if (cxlds->regs.ras) { > pci_enable_pcie_error_reporting(pdev); > rc = devm_add_action_or_reset(&pdev->dev, disable_aer, pdev); > -- > 2.37.2 > >
On Tue, Dec 13, 2022 at 02:49:02PM +0800, johnny wrote: > On Sun, Dec 11, 2022 at 11:06:20PM -0800, ira.weiny@intel.com wrote: > > From: Ira Weiny <ira.weiny@intel.com> > > [snip] > > + > > +#define CXL_EVENT_RECORD_DATA_LENGTH 0x50 > > +struct cxl_event_record_raw { > > + struct cxl_event_record_hdr hdr; > > + u8 data[CXL_EVENT_RECORD_DATA_LENGTH]; > > +} __packed; > > + > > +/* > > + * Get Event Records output payload > > + * CXL rev 3.0 section 8.2.9.2.2; Table 8-50 > > + */ > > +#define CXL_GET_EVENT_FLAG_OVERFLOW BIT(0) > > +#define CXL_GET_EVENT_FLAG_MORE_RECORDS BIT(1) > I don't see any code consumes this more flag, is anything I miss? > Device shall set this more flag when single output payload can not fit in all records I should have removed this flag and put something in the cover letter. I left it in for completeness but you are correct it is unused. We determined back in V1 that the more bit was useless in this particular looping of Get Events Records.[1] The net-net is that if the driver does not see the number of records go to 0 it can't be sure it will get an interrupt for the next set of events. Therefore it loops until it sees the number of records go to 0. 
Ira [1] https://lore.kernel.org/all/Y4blpk%2FesXJMe79Y@iweiny-desk3/ > > +struct cxl_get_event_payload { > > + u8 flags; > > + u8 reserved1; > > + __le16 overflow_err_count; > > + __le64 first_overflow_timestamp; > > + __le64 last_overflow_timestamp; > > + __le16 record_count; > > + u8 reserved2[10]; > > + struct cxl_event_record_raw records[]; > > +} __packed; > > + > > +/* > > + * CXL rev 3.0 section 8.2.9.2.2; Table 8-49 > > + */ > > +enum cxl_event_log_type { > > + CXL_EVENT_TYPE_INFO = 0x00, > > + CXL_EVENT_TYPE_WARN, > > + CXL_EVENT_TYPE_FAIL, > > + CXL_EVENT_TYPE_FATAL, > > + CXL_EVENT_TYPE_MAX > > +}; > > + > > +/* > > + * Clear Event Records input payload > > + * CXL rev 3.0 section 8.2.9.2.3; Table 8-51 > > + */ > > +#define CXL_CLEAR_EVENT_MAX_HANDLES (0xff) > > +struct cxl_mbox_clear_event_payload { > > + u8 event_log; /* enum cxl_event_log_type */ > > + u8 clear_flags; > > + u8 nr_recs; > > + u8 reserved[3]; > > + __le16 handle[CXL_CLEAR_EVENT_MAX_HANDLES]; > > +} __packed; > > +#define CXL_CLEAR_EVENT_LIMIT_HANDLES(payload_size) \ > > + (((payload_size) - \ > > + (sizeof(struct cxl_mbox_clear_event_payload) - \ > > + (sizeof(__le16) * CXL_CLEAR_EVENT_MAX_HANDLES))) / \ > > + sizeof(__le16)) > > + > > struct cxl_mbox_get_partition_info { > > __le64 active_volatile_cap; > > __le64 active_persistent_cap; > > @@ -441,6 +524,7 @@ int cxl_mem_create_range_info(struct cxl_dev_state *cxlds); > > struct cxl_dev_state *cxl_dev_state_create(struct device *dev); > > void set_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds); > > void clear_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds); > > +void cxl_mem_get_event_records(struct cxl_dev_state *cxlds, u32 status); > > #ifdef CONFIG_CXL_SUSPEND > > void cxl_mem_active_inc(void); > > void cxl_mem_active_dec(void); > > diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c > > index 3a66aadb4df0..a2d8382bc593 100644 > > --- a/drivers/cxl/pci.c > > +++ b/drivers/cxl/pci.c > 
> @@ -417,8 +417,37 @@ static void disable_aer(void *pdev) > > pci_disable_pcie_error_reporting(pdev); > > } > > > > +static void cxl_mem_free_event_buffer(void *buf) > > +{ > > + kvfree(buf); > > +} > > + > > +/* > > + * There is a single buffer for reading event logs from the mailbox. All logs > > + * share this buffer protected by the cxlds->event_log_lock. > > + */ > > +static int cxl_mem_alloc_event_buf(struct cxl_dev_state *cxlds) > > +{ > > + struct cxl_get_event_payload *buf; > > + > > + dev_dbg(cxlds->dev, "Allocating event buffer size %zu\n", > > + cxlds->payload_size); > > + > > + buf = kvmalloc(cxlds->payload_size, GFP_KERNEL); > > + if (!buf) > > + return -ENOMEM; > > + > > + if (devm_add_action_or_reset(cxlds->dev, cxl_mem_free_event_buffer, > > + buf)) > > + return -ENOMEM; > > + > > + cxlds->event.buf = buf; > > + return 0; > > +} > > + > > static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) > > { > > + struct pci_host_bridge *host_bridge = pci_find_host_bridge(pdev->bus); > > struct cxl_register_map map; > > struct cxl_memdev *cxlmd; > > struct cxl_dev_state *cxlds; > > @@ -494,6 +523,17 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) > > if (IS_ERR(cxlmd)) > > return PTR_ERR(cxlmd); > > > > + rc = cxl_mem_alloc_event_buf(cxlds); > > + if (rc) > > + return rc; > > + > > + /* > > + * When BIOS maintains CXL error reporting control, it will process > > + * event records. Only one agent can do so. > > + */ > > + if (host_bridge->native_cxl_error) > > + cxl_mem_get_event_records(cxlds, CXLDEV_EVENT_STATUS_ALL); > > + > > if (cxlds->regs.ras) { > > pci_enable_pcie_error_reporting(pdev); > > rc = devm_add_action_or_reset(&pdev->dev, disable_aer, pdev); > > -- > > 2.37.2 > > > > >
On Sun, 11 Dec 2022 23:06:20 -0800 ira.weiny@intel.com wrote: > From: Ira Weiny <ira.weiny@intel.com> > > CXL devices have multiple event logs which can be queried for CXL event > records. Devices are required to support the storage of at least one > event record in each event log type. > > Devices track event log overflow by incrementing a counter and tracking > the time of the first and last overflow event seen. > > Software queries events via the Get Event Record mailbox command; CXL > rev 3.0 section 8.2.9.2.2 and clears events via CXL rev 3.0 section > 8.2.9.2.3 Clear Event Records mailbox command. > > If the result of negotiating CXL Error Reporting Control is OS control, > read and clear all event logs on driver load. > > Ensure a clean slate of events by reading and clearing the events on > driver load. > > The status register is not used because a device may continue to trigger > events and the only requirement is to empty the log at least once. This > allows for the required transition from empty to non-empty for interrupt > generation. Handling of interrupts is in a follow on patch. > > The device can return up to 1MB worth of event records per query. > Allocate a shared large buffer to handle the max number of records based > on the mailbox payload size. > > This patch traces a raw event record and leaves specific event record > type tracing to subsequent patches. Macros are created to aid in > tracing the common CXL Event header fields. > > Each record is cleared explicitly. A clear all bit is specified but is > only valid when the log overflows. > > Signed-off-by: Ira Weiny <ira.weiny@intel.com>

A few things noticed inline. I've tightened the QEMU code to reject the case where the input payload claims to be bigger than the mailbox size, and hacked the size down to 256 bytes so it triggers the problem highlighted below.
> 
> ---
> Changes from V3:
> 	Dan
> 		Split off _OSC pcie bits
> 		Use existing style for host bridge flag in that
> 			patch
> 		Clean up event processing loop
> 		Use dev_err_ratelimited()
> 		Clean up version change log
> 		Delete 'EVENT LOG OVERFLOW'
> 		Remove cxl_clear_event_logs()
> 		Add comment for native cxl control
> 		Fail driver load on event buf allocation failure
> 		Comment why events are not processed without _OSC flag
> ---
>  drivers/cxl/core/mbox.c  | 136 +++++++++++++++++++++++++++++++++++++++
>  drivers/cxl/core/trace.h | 120 ++++++++++++++++++++++++++++++++++
>  drivers/cxl/cxl.h        |  12 ++++
>  drivers/cxl/cxlmem.h     |  84 ++++++++++++++++++++++++
>  drivers/cxl/pci.c        |  40 ++++++++++++
>  5 files changed, 392 insertions(+)
> 
> diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
> index b03fba212799..9fb327370e08 100644
> --- a/drivers/cxl/core/mbox.c
> +++ b/drivers/cxl/core/mbox.c
> 
> +static int cxl_clear_event_record(struct cxl_dev_state *cxlds,
> +				  enum cxl_event_log_type log,
> +				  struct cxl_get_event_payload *get_pl)
> +{
> +	struct cxl_mbox_clear_event_payload payload = {
> +		.event_log = log,
> +	};
> +	u16 total = le16_to_cpu(get_pl->record_count);
> +	u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES;
> +	size_t pl_size = sizeof(payload);
> +	struct cxl_mbox_cmd mbox_cmd;
> +	u16 cnt;
> +	int rc;
> +	int i;
> +
> +	/* Payload size may limit the max handles */
> +	if (pl_size > cxlds->payload_size) {
> +		max_handles = CXL_CLEAR_EVENT_LIMIT_HANDLES(cxlds->payload_size);
> +		pl_size = cxlds->payload_size;
> +	}
> +
> +	mbox_cmd = (struct cxl_mbox_cmd) {
> +		.opcode = CXL_MBOX_OP_CLEAR_EVENT_RECORD,
> +		.payload_in = &payload,
> +		.size_in = pl_size,

This payload size should be whatever we need to store the records,
not the max size possible.  Particularly as that size is currently
bigger than the mailbox might be.

It shouldn't fail (I think) simply because a later version of the spec
might add more to this message and things should still work, but
definitely not good practice to tell the hardware this is much longer
than it actually is.

> +	};
> +
> +	/*
> +	 * Clear Event Records uses u8 for the handle cnt while Get Event
> +	 * Record can return up to 0xffff records.
> +	 */
> +	i = 0;
> +	for (cnt = 0; cnt < total; cnt++) {
> +		payload.handle[i++] = get_pl->records[cnt].hdr.handle;
> +		dev_dbg(cxlds->dev, "Event log '%d': Clearing %u\n",
> +			log, le16_to_cpu(payload.handle[i]));
> +
> +		if (i == max_handles) {
> +			payload.nr_recs = i;
> +			rc = cxl_internal_send_cmd(cxlds, &mbox_cmd);
> +			if (rc)
> +				return rc;
> +			i = 0;
> +		}
> +	}
> +
> +	/* Clear what is left if any */
> +	if (i) {
> +		payload.nr_recs = i;
> +		rc = cxl_internal_send_cmd(cxlds, &mbox_cmd);
> +		if (rc)
> +			return rc;
> +	}
> +
> +	return 0;
> +}

...

> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> index ab138004f644..dd9aa3dd738e 100644
> --- a/drivers/cxl/cxlmem.h
> +++ b/drivers/cxl/cxlmem.h

...

> +
> +/*
> + * Clear Event Records input payload
> + * CXL rev 3.0 section 8.2.9.2.3; Table 8-51
> + */
> +#define CXL_CLEAR_EVENT_MAX_HANDLES (0xff)
> +struct cxl_mbox_clear_event_payload {
> +	u8 event_log;		/* enum cxl_event_log_type */
> +	u8 clear_flags;
> +	u8 nr_recs;
> +	u8 reserved[3];
> +	__le16 handle[CXL_CLEAR_EVENT_MAX_HANDLES];

Doesn't fit in the smallest possible payload buffer.
It's 526 bytes long.  Payload buffer might be 256 bytes in total.
(8.2.8.4.3 Mailbox capabilities)

Lazy approach, make this smaller and do more loops when clearing.
If we want to optimize this later can expand it to this size.

> +} __packed;
> +#define CXL_CLEAR_EVENT_LIMIT_HANDLES(payload_size)			\
> +	(((payload_size) -						\
> +	  (sizeof(struct cxl_mbox_clear_event_payload) -		\
> +	   (sizeof(__le16) * CXL_CLEAR_EVENT_MAX_HANDLES))) /		\
> +	 sizeof(__le16))
> +

...
On Fri, Dec 16, 2022 at 03:39:39PM +0000, Jonathan Cameron wrote: > On Sun, 11 Dec 2022 23:06:20 -0800 > ira.weiny@intel.com wrote: > > > From: Ira Weiny <ira.weiny@intel.com> > > > > CXL devices have multiple event logs which can be queried for CXL event > > records. Devices are required to support the storage of at least one > > event record in each event log type. > > > > Devices track event log overflow by incrementing a counter and tracking > > the time of the first and last overflow event seen. > > > > Software queries events via the Get Event Record mailbox command; CXL > > rev 3.0 section 8.2.9.2.2 and clears events via CXL rev 3.0 section > > 8.2.9.2.3 Clear Event Records mailbox command. > > > > If the result of negotiating CXL Error Reporting Control is OS control, > > read and clear all event logs on driver load. > > > > Ensure a clean slate of events by reading and clearing the events on > > driver load. > > > > The status register is not used because a device may continue to trigger > > events and the only requirement is to empty the log at least once. This > > allows for the required transition from empty to non-empty for interrupt > > generation. Handling of interrupts is in a follow on patch. > > > > The device can return up to 1MB worth of event records per query. > > Allocate a shared large buffer to handle the max number of records based > > on the mailbox payload size. > > > > This patch traces a raw event record and leaves specific event record > > type tracing to subsequent patches. Macros are created to aid in > > tracing the common CXL Event header fields. > > > > Each record is cleared explicitly. A clear all bit is specified but is > > only valid when the log overflows. > > > > Signed-off-by: Ira Weiny <ira.weiny@intel.com> > > A few things noticed inline. 
I've tightened the QEMU code to reject the > case of the input payload claims to be bigger than the mailbox size > and hacked the size down to 256 bytes so it triggers the problem > highlighted below. I'm not sure what you did here. > > > > > --- > > Changes from V3: > > Dan > > Split off _OSC pcie bits > > Use existing style for host bridge flag in that > > patch > > Clean up event processing loop > > Use dev_err_ratelimited() > > Clean up version change log > > Delete 'EVENT LOG OVERFLOW' > > Remove cxl_clear_event_logs() > > Add comment for native cxl control > > Fail driver load on event buf allocation failure > > Comment why events are not processed without _OSC flag > > --- > > drivers/cxl/core/mbox.c | 136 +++++++++++++++++++++++++++++++++++++++ > > drivers/cxl/core/trace.h | 120 ++++++++++++++++++++++++++++++++++ > > drivers/cxl/cxl.h | 12 ++++ > > drivers/cxl/cxlmem.h | 84 ++++++++++++++++++++++++ > > drivers/cxl/pci.c | 40 ++++++++++++ > > 5 files changed, 392 insertions(+) > > > > diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c > > index b03fba212799..9fb327370e08 100644 > > --- a/drivers/cxl/core/mbox.c > > +++ b/drivers/cxl/core/mbox.c > > > +static int cxl_clear_event_record(struct cxl_dev_state *cxlds, > > + enum cxl_event_log_type log, > > + struct cxl_get_event_payload *get_pl) > > +{ > > + struct cxl_mbox_clear_event_payload payload = { > > + .event_log = log, > > + }; > > + u16 total = le16_to_cpu(get_pl->record_count); > > + u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES; > > + size_t pl_size = sizeof(payload); > > + struct cxl_mbox_cmd mbox_cmd; > > + u16 cnt; > > + int rc; > > + int i; > > + > > + /* Payload size may limit the max handles */ > > + if (pl_size > cxlds->payload_size) { > > + max_handles = CXL_CLEAR_EVENT_LIMIT_HANDLES(cxlds->payload_size); > > + pl_size = cxlds->payload_size; pl_size is only the max size possible if that size was smaller than the size of the record [sizeof(payload) above]. 
> > + } > > + > > + mbox_cmd = (struct cxl_mbox_cmd) { > > + .opcode = CXL_MBOX_OP_CLEAR_EVENT_RECORD, > > + .payload_in = &payload, > > + .size_in = pl_size, > > This payload size should be whatever we need to store the records, > not the max size possible. Particularly as that size is currently > bigger than the mailbox might be. But the above check and set ensures that does not happen. > > It shouldn't fail (I think) simply because a later version of the spec might > add more to this message and things should still work, but definitely not > good practice to tell the hardware this is much longer than it actually is. I don't follow. The full payload is going to be sent even if we are just clearing 1 record which is inefficient but it should never overflow the hardware because it is limited by the check above. So why would this be a problem? > > > > + }; > > + > > + /* > > + * Clear Event Records uses u8 for the handle cnt while Get Event > > + * Record can return up to 0xffff records. > > + */ > > + i = 0; > > + for (cnt = 0; cnt < total; cnt++) { > > + payload.handle[i++] = get_pl->records[cnt].hdr.handle; > > + dev_dbg(cxlds->dev, "Event log '%d': Clearing %u\n", > > + log, le16_to_cpu(payload.handle[i])); > > + > > + if (i == max_handles) { > > + payload.nr_recs = i; > > + rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); > > + if (rc) > > + return rc; > > + i = 0; > > + } > > + } > > + > > + /* Clear what is left if any */ > > + if (i) { > > + payload.nr_recs = i; > > + rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); > > + if (rc) > > + return rc; > > + } > > + > > + return 0; > > +} > > > ... > > > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h > > index ab138004f644..dd9aa3dd738e 100644 > > --- a/drivers/cxl/cxlmem.h > > +++ b/drivers/cxl/cxlmem.h > > ... 
> > > + > > +/* > > + * Clear Event Records input payload > > + * CXL rev 3.0 section 8.2.9.2.3; Table 8-51 > > + */ > > +#define CXL_CLEAR_EVENT_MAX_HANDLES (0xff) > > +struct cxl_mbox_clear_event_payload { > > + u8 event_log; /* enum cxl_event_log_type */ > > + u8 clear_flags; > > + u8 nr_recs; > > + u8 reserved[3]; > > + __le16 handle[CXL_CLEAR_EVENT_MAX_HANDLES]; > > Doesn't fit in the smallest possible payload buffer. > It's 526 bytes long. Payload buffer might be 256 bytes in total. > (8.2.8.4.3 Mailbox capabilities) > > Lazy approach, make this smaller and do more loops when clearing. > If we want to optimize this later can expand it to this size. I agree but the code already checks for and adjusts this on the fly based on cxlds->payload_size? + /* Payload size may limit the max handles */ + if (pl_size > cxlds->payload_size) { + max_handles = CXL_CLEAR_EVENT_LIMIT_HANDLES(cxlds->payload_size); + pl_size = cxlds->payload_size; + } Why is this not ok? [Other than being potentially inefficient.] Do you have a patch to qemu which causes this? Ira > > +} __packed; > > +#define CXL_CLEAR_EVENT_LIMIT_HANDLES(payload_size) \ > > + (((payload_size) - \ > > + (sizeof(struct cxl_mbox_clear_event_payload) - \ > > + (sizeof(__le16) * CXL_CLEAR_EVENT_MAX_HANDLES))) / \ > > + sizeof(__le16)) > > + > > ... >
On Fri, 16 Dec 2022 13:54:01 -0800 Ira Weiny <ira.weiny@intel.com> wrote: > On Fri, Dec 16, 2022 at 03:39:39PM +0000, Jonathan Cameron wrote: > > On Sun, 11 Dec 2022 23:06:20 -0800 > > ira.weiny@intel.com wrote: > > > > > From: Ira Weiny <ira.weiny@intel.com> > > > > > > CXL devices have multiple event logs which can be queried for CXL event > > > records. Devices are required to support the storage of at least one > > > event record in each event log type. > > > > > > Devices track event log overflow by incrementing a counter and tracking > > > the time of the first and last overflow event seen. > > > > > > Software queries events via the Get Event Record mailbox command; CXL > > > rev 3.0 section 8.2.9.2.2 and clears events via CXL rev 3.0 section > > > 8.2.9.2.3 Clear Event Records mailbox command. > > > > > > If the result of negotiating CXL Error Reporting Control is OS control, > > > read and clear all event logs on driver load. > > > > > > Ensure a clean slate of events by reading and clearing the events on > > > driver load. > > > > > > The status register is not used because a device may continue to trigger > > > events and the only requirement is to empty the log at least once. This > > > allows for the required transition from empty to non-empty for interrupt > > > generation. Handling of interrupts is in a follow on patch. > > > > > > The device can return up to 1MB worth of event records per query. > > > Allocate a shared large buffer to handle the max number of records based > > > on the mailbox payload size. > > > > > > This patch traces a raw event record and leaves specific event record > > > type tracing to subsequent patches. Macros are created to aid in > > > tracing the common CXL Event header fields. > > > > > > Each record is cleared explicitly. A clear all bit is specified but is > > > only valid when the log overflows. > > > > > > Signed-off-by: Ira Weiny <ira.weiny@intel.com> > > > > A few things noticed inline. 
I've tightened the QEMU code to reject the > > case of the input payload claims to be bigger than the mailbox size > > and hacked the size down to 256 bytes so it triggers the problem > > highlighted below. > > I'm not sure what you did here. Nor am I. I think this might have been a case of chasing the undersized length bug in QEMU because it was the CXL 3.0 issue and misunderstanding one of the debug prints I got. Friday silliness. Sorry about that! However, the over sized payload communicated to the hardware is still a potential problem. See below. > > > > > > > > > --- > > > Changes from V3: > > > Dan > > > Split off _OSC pcie bits > > > Use existing style for host bridge flag in that > > > patch > > > Clean up event processing loop > > > Use dev_err_ratelimited() > > > Clean up version change log > > > Delete 'EVENT LOG OVERFLOW' > > > Remove cxl_clear_event_logs() > > > Add comment for native cxl control > > > Fail driver load on event buf allocation failure > > > Comment why events are not processed without _OSC flag > > > --- > > > drivers/cxl/core/mbox.c | 136 +++++++++++++++++++++++++++++++++++++++ > > > drivers/cxl/core/trace.h | 120 ++++++++++++++++++++++++++++++++++ > > > drivers/cxl/cxl.h | 12 ++++ > > > drivers/cxl/cxlmem.h | 84 ++++++++++++++++++++++++ > > > drivers/cxl/pci.c | 40 ++++++++++++ > > > 5 files changed, 392 insertions(+) > > > > > > diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c > > > index b03fba212799..9fb327370e08 100644 > > > --- a/drivers/cxl/core/mbox.c > > > +++ b/drivers/cxl/core/mbox.c > > > > > +static int cxl_clear_event_record(struct cxl_dev_state *cxlds, > > > + enum cxl_event_log_type log, > > > + struct cxl_get_event_payload *get_pl) > > > +{ > > > + struct cxl_mbox_clear_event_payload payload = { > > > + .event_log = log, > > > + }; > > > + u16 total = le16_to_cpu(get_pl->record_count); > > > + u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES; > > > + size_t pl_size = sizeof(payload); > > > + struct 
cxl_mbox_cmd mbox_cmd; > > > + u16 cnt; > > > + int rc; > > > + int i; > > > + > > > + /* Payload size may limit the max handles */ > > > + if (pl_size > cxlds->payload_size) { > > > + max_handles = CXL_CLEAR_EVENT_LIMIT_HANDLES(cxlds->payload_size); Definition of that is more complex than it needs to be - see below. > > > + pl_size = cxlds->payload_size; > > pl_size is only the max size possible if that size was smaller than the size of > the record [sizeof(payload) above]. Sorry. For some reason my eyes skipped over this completely. So we are fine for all my comments on overflowing. On plus side will now check if that happens in QEMU and return an error which we weren't doing before. > > > > + } > > > + > > > + mbox_cmd = (struct cxl_mbox_cmd) { > > > + .opcode = CXL_MBOX_OP_CLEAR_EVENT_RECORD, > > > + .payload_in = &payload, > > > + .size_in = pl_size, > > > > This payload size should be whatever we need to store the records, > > not the max size possible. Particularly as that size is currently > > bigger than the mailbox might be. > > But the above check and set ensures that does not happen. > > > > > It shouldn't fail (I think) simply because a later version of the spec might > > add more to this message and things should still work, but definitely not > > good practice to tell the hardware this is much longer than it actually is. > > I don't follow. > > The full payload is going to be sent even if we are just clearing 1 record > which is inefficient but it should never overflow the hardware because it is > limited by the check above. > > So why would this be a problem? I'm struggling to find a clear spec statement on if this allowed, so the following is a thought experiment. There is language in definition of the "invalid payload length" error code "The input payload length is not valid for the specified command", but it doesn't go into what counts as valid. 
What you have looks fine because a device can't fail on the basis it's
told the payload is longer than it expects, because you might be sending
a CXL 4.0 spec payload that is backwards compatible with CXL 3.0 - hence
the fact the sizes don't match up with those expected can't be
considered an error.  So far so good...

However, we may have a situation not dissimilar to the change in record
length for the set event interrupt policy payload between CXL 2.0 and
CXL 3.0.  The only way the endpoint knows what version of message it got
is because the record is 4 bytes or 5 bytes.  If we have extra stuff on
the end of this record in future, the endpoint can assume that it is a
new version of the spec and interpret what is in that payload space.

Say the future structure looks like

struct cxl_mbox_clear_event_payload_future {
	u8 event_log;		/* enum cxl_event_log_type */
	u8 clear_flags;
	u8 nr_recs;
	u8 reserved[3];
	__le16 handle[nr_recs];
	__le16 otherdata[nr_recs];
}

An endpoint receiving your 'overly long payload' will assume all those
otherdata fields are 0, which is not necessarily the same as them not
being present.

For the set event interrupt policy, if we sent an overlong payload like
you've done here with the assumption of the CXL 2.0 spec, we would be
turning off the DCD interrupt rather than doing nothing (unlikely to be
a problem in that particular case as that one doesn't have a FW
Interrupt option - but that's more luck than design).

I'm not sure why we'd have extra stuff for this payload, but it 'might'
happen.

> > > +	};
> > > +
> > > +	/*
> > > +	 * Clear Event Records uses u8 for the handle cnt while Get Event
> > > +	 * Record can return up to 0xffff records.
> > > + */ > > > + i = 0; > > > + for (cnt = 0; cnt < total; cnt++) { > > > + payload.handle[i++] = get_pl->records[cnt].hdr.handle; > > > + dev_dbg(cxlds->dev, "Event log '%d': Clearing %u\n", > > > + log, le16_to_cpu(payload.handle[i])); > > > + > > > + if (i == max_handles) { > > > + payload.nr_recs = i; > > > + rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); > > > + if (rc) > > > + return rc; > > > + i = 0; > > > + } > > > + } > > > + > > > + /* Clear what is left if any */ > > > + if (i) { > > > + payload.nr_recs = i; > > > + rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); > > > + if (rc) > > > + return rc; > > > + } > > > + > > > + return 0; > > > +} > > > > > > ... > > > > > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h > > > index ab138004f644..dd9aa3dd738e 100644 > > > --- a/drivers/cxl/cxlmem.h > > > +++ b/drivers/cxl/cxlmem.h > > > > ... > > > > > + > > > +/* > > > + * Clear Event Records input payload > > > + * CXL rev 3.0 section 8.2.9.2.3; Table 8-51 > > > + */ > > > +#define CXL_CLEAR_EVENT_MAX_HANDLES (0xff) > > > +struct cxl_mbox_clear_event_payload { > > > + u8 event_log; /* enum cxl_event_log_type */ > > > + u8 clear_flags; > > > + u8 nr_recs; > > > + u8 reserved[3]; > > > + __le16 handle[CXL_CLEAR_EVENT_MAX_HANDLES]; > > > > Doesn't fit in the smallest possible payload buffer. > > It's 526 bytes long. Payload buffer might be 256 bytes in total. > > (8.2.8.4.3 Mailbox capabilities) > > > > Lazy approach, make this smaller and do more loops when clearing. > > If we want to optimize this later can expand it to this size. > > I agree but the code already checks for and adjusts this on the fly based on > cxlds->payload_size? > > + /* Payload size may limit the max handles */ > + if (pl_size > cxlds->payload_size) { > + max_handles = CXL_CLEAR_EVENT_LIMIT_HANDLES(cxlds->payload_size); > + pl_size = cxlds->payload_size; > + } > > Why is this not ok? [Other than being potentially inefficient.] > > Do you have a patch to qemu which causes this? 
Two issues crossing I think on my side and me thinking this one was obviously the problem when it wasn't. > > Ira > > > > +} __packed; > > > +#define CXL_CLEAR_EVENT_LIMIT_HANDLES(payload_size) \ > > > + (((payload_size) - \ > > > + (sizeof(struct cxl_mbox_clear_event_payload) - \ > > > + (sizeof(__le16) * CXL_CLEAR_EVENT_MAX_HANDLES))) / \ Could use offsetof() to simplify this > > > + sizeof(__le16)) > > > + > > > > ... > >
On Sat, Dec 17, 2022 at 04:38:50PM +0000, Jonathan Cameron wrote: > On Fri, 16 Dec 2022 13:54:01 -0800 > Ira Weiny <ira.weiny@intel.com> wrote: > > > On Fri, Dec 16, 2022 at 03:39:39PM +0000, Jonathan Cameron wrote: > > > On Sun, 11 Dec 2022 23:06:20 -0800 > > > ira.weiny@intel.com wrote: > > > > > > > From: Ira Weiny <ira.weiny@intel.com> > > > > > > > > CXL devices have multiple event logs which can be queried for CXL event > > > > records. Devices are required to support the storage of at least one > > > > event record in each event log type. > > > > > > > > Devices track event log overflow by incrementing a counter and tracking > > > > the time of the first and last overflow event seen. > > > > > > > > Software queries events via the Get Event Record mailbox command; CXL > > > > rev 3.0 section 8.2.9.2.2 and clears events via CXL rev 3.0 section > > > > 8.2.9.2.3 Clear Event Records mailbox command. > > > > > > > > If the result of negotiating CXL Error Reporting Control is OS control, > > > > read and clear all event logs on driver load. > > > > > > > > Ensure a clean slate of events by reading and clearing the events on > > > > driver load. > > > > > > > > The status register is not used because a device may continue to trigger > > > > events and the only requirement is to empty the log at least once. This > > > > allows for the required transition from empty to non-empty for interrupt > > > > generation. Handling of interrupts is in a follow on patch. > > > > > > > > The device can return up to 1MB worth of event records per query. > > > > Allocate a shared large buffer to handle the max number of records based > > > > on the mailbox payload size. > > > > > > > > This patch traces a raw event record and leaves specific event record > > > > type tracing to subsequent patches. Macros are created to aid in > > > > tracing the common CXL Event header fields. > > > > > > > > Each record is cleared explicitly. 
A clear all bit is specified but is > > > > only valid when the log overflows. > > > > > > > > Signed-off-by: Ira Weiny <ira.weiny@intel.com> > > > > > > A few things noticed inline. I've tightened the QEMU code to reject the > > > case of the input payload claims to be bigger than the mailbox size > > > and hacked the size down to 256 bytes so it triggers the problem > > > highlighted below. > > > > I'm not sure what you did here. > > Nor am I. I think this might have been a case of chasing the undersized > length bug in QEMU because it was the CXL 3.0 issue and misunderstanding > one of the debug prints I got. > > Friday silliness. Sorry about that! NP but you did have me going. I've vowed to actually understand the spec better going forward! :-D > > However, the over sized payload communicated to the hardware is still > a potential problem. See below. I don't see where there is an oversized payload used either... > > > > > > > > > > > > > > --- > > > > Changes from V3: > > > > Dan > > > > Split off _OSC pcie bits > > > > Use existing style for host bridge flag in that > > > > patch > > > > Clean up event processing loop > > > > Use dev_err_ratelimited() > > > > Clean up version change log > > > > Delete 'EVENT LOG OVERFLOW' > > > > Remove cxl_clear_event_logs() > > > > Add comment for native cxl control > > > > Fail driver load on event buf allocation failure > > > > Comment why events are not processed without _OSC flag > > > > --- > > > > drivers/cxl/core/mbox.c | 136 +++++++++++++++++++++++++++++++++++++++ > > > > drivers/cxl/core/trace.h | 120 ++++++++++++++++++++++++++++++++++ > > > > drivers/cxl/cxl.h | 12 ++++ > > > > drivers/cxl/cxlmem.h | 84 ++++++++++++++++++++++++ > > > > drivers/cxl/pci.c | 40 ++++++++++++ > > > > 5 files changed, 392 insertions(+) > > > > > > > > diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c > > > > index b03fba212799..9fb327370e08 100644 > > > > --- a/drivers/cxl/core/mbox.c > > > > +++ b/drivers/cxl/core/mbox.c > 
> > > > > > +static int cxl_clear_event_record(struct cxl_dev_state *cxlds, > > > > + enum cxl_event_log_type log, > > > > + struct cxl_get_event_payload *get_pl) > > > > +{ > > > > + struct cxl_mbox_clear_event_payload payload = { > > > > + .event_log = log, > > > > + }; > > > > + u16 total = le16_to_cpu(get_pl->record_count); > > > > + u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES; > > > > + size_t pl_size = sizeof(payload); This line ensures the payload is only ever the size of the definition per the 3.0 spec. > > > > + struct cxl_mbox_cmd mbox_cmd; > > > > + u16 cnt; > > > > + int rc; > > > > + int i; > > > > + > > > > + /* Payload size may limit the max handles */ > > > > + if (pl_size > cxlds->payload_size) { > > > > + max_handles = CXL_CLEAR_EVENT_LIMIT_HANDLES(cxlds->payload_size); > > Definition of that is more complex than it needs to be - see below. Then this ensures it is truncated if needed. > > > > > + pl_size = cxlds->payload_size; > > > > pl_size is only the max size possible if that size was smaller than the size of > > the record [sizeof(payload) above]. > > Sorry. For some reason my eyes skipped over this completely. > So we are fine for all my comments on overflowing. On plus side > will now check if that happens in QEMU and return an error which we > weren't doing before. > > > > > > > + } > > > > + > > > > + mbox_cmd = (struct cxl_mbox_cmd) { > > > > + .opcode = CXL_MBOX_OP_CLEAR_EVENT_RECORD, > > > > + .payload_in = &payload, > > > > + .size_in = pl_size, > > > > > > This payload size should be whatever we need to store the records, > > > not the max size possible. Particularly as that size is currently > > > bigger than the mailbox might be. > > > > But the above check and set ensures that does not happen. 
> > > > > > > > It shouldn't fail (I think) simply because a later version of the spec might > > > add more to this message and things should still work, but definitely not > > > good practice to tell the hardware this is much longer than it actually is. > > > > I don't follow. > > > > The full payload is going to be sent even if we are just clearing 1 record > > which is inefficient but it should never overflow the hardware because it is > > limited by the check above. > > > > So why would this be a problem? > I'm struggling to find a clear spec statement on if this allowed, so the following > is a thought experiment. There is language in definition of the "invalid payload length" > error code "The input payload length is not valid for the specified command", but it > doesn't go into what counts as valid. I think the only thing which makes sense is if the payload length is smaller than: Header + nr_recs * 2 Anything up to header + (0xff * 2) should be fine per the 3.0 spec. > > What you have looks fine because a device can't fail on the basis it's told the > payload is longer than it expects, because you might be sending a CXL 4.0 spec > payload that is backwards compatible with CXL 3.0 - hence the fact the sizes > don't match up with that expected can't be considered an error. > So far so good... However, we may have a situation not dissimilar to the > change in record length for the set event interrupt policy payload between CXL 2.0 > and CXL 3.0. The only way the endpoint knows what version of message it got is because the > record is 4 bytes or 5 bytes. If we have extra stuff on the end of this record > in future the end point can assume that it is a new version of the spec and interpret > what is in that payload space. 
> > Say the future structure looks like > > struct cxl_mbox_clear_event_payload_future { > u8 event_log; /* enum cxl_event_log_type */ > u8 clear_flags; > u8 nr_recs; > u8 reserved[3]; > __le16 handle[nr_recs]; > __le16 otherdata[nr_recs]; otherdata should be ignored by a 3.0 device. a theoretical 4.0 device should handle otherdata not being there per some flag in the flags field I would suppose... That would have to be determined if this payload were extended. Otherwise this software will fail no matter what. Other mailbox commands do not 0 out from the command size to 1M either. > } > > Endpoint receiving your 'overly long payload' will assume all those otherdata fields > are 0, not necessarily the same as non present. But it is not 'overly long'. It is only the length of the current spec. See above. > For the set event interrupt policy, if we sent an overlong payload like you've done here > with assumption of the CXL 2.0 spec we would be turning off the DCD interrupt rather > that doing nothing (unlikely to be a problem in that particularly case as that one > doesn't have a FW Interrupt option - but that's more luck than design). > > I'm not sure why we'd have extra stuff for this payload, but it 'might' happen'. I'll have to check but I don't think I set the payload long in that message. It too should be sizeof(<set event int policy>) > > > > > > > > > > > > > > > > + }; > > > > + > > > > + /* > > > > + * Clear Event Records uses u8 for the handle cnt while Get Event > > > > + * Record can return up to 0xffff records. 
> > > > + */ > > > > + i = 0; > > > > + for (cnt = 0; cnt < total; cnt++) { > > > > + payload.handle[i++] = get_pl->records[cnt].hdr.handle; > > > > + dev_dbg(cxlds->dev, "Event log '%d': Clearing %u\n", > > > > + log, le16_to_cpu(payload.handle[i])); > > > > + > > > > + if (i == max_handles) { > > > > + payload.nr_recs = i; > > > > + rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); > > > > + if (rc) > > > > + return rc; > > > > + i = 0; > > > > + } > > > > + } > > > > + > > > > + /* Clear what is left if any */ > > > > + if (i) { > > > > + payload.nr_recs = i; > > > > + rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); > > > > + if (rc) > > > > + return rc; > > > > + } > > > > + > > > > + return 0; > > > > +} > > > > > > > > > ... > > > > > > > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h > > > > index ab138004f644..dd9aa3dd738e 100644 > > > > --- a/drivers/cxl/cxlmem.h > > > > +++ b/drivers/cxl/cxlmem.h > > > > > > ... > > > > > > > + > > > > +/* > > > > + * Clear Event Records input payload > > > > + * CXL rev 3.0 section 8.2.9.2.3; Table 8-51 > > > > + */ > > > > +#define CXL_CLEAR_EVENT_MAX_HANDLES (0xff) > > > > +struct cxl_mbox_clear_event_payload { > > > > + u8 event_log; /* enum cxl_event_log_type */ > > > > + u8 clear_flags; > > > > + u8 nr_recs; > > > > + u8 reserved[3]; > > > > + __le16 handle[CXL_CLEAR_EVENT_MAX_HANDLES]; > > > > > > Doesn't fit in the smallest possible payload buffer. > > > It's 526 bytes long. Payload buffer might be 256 bytes in total. > > > (8.2.8.4.3 Mailbox capabilities) > > > > > > Lazy approach, make this smaller and do more loops when clearing. > > > If we want to optimize this later can expand it to this size. > > > > I agree but the code already checks for and adjusts this on the fly based on > > cxlds->payload_size? 
> > > > + /* Payload size may limit the max handles */ > > + if (pl_size > cxlds->payload_size) { > > + max_handles = CXL_CLEAR_EVENT_LIMIT_HANDLES(cxlds->payload_size); > > + pl_size = cxlds->payload_size; > > + } > > > > Why is this not ok? [Other than being potentially inefficient.] > > > > Do you have a patch to qemu which causes this? > > Two issues crossing I think on my side and me thinking this one was obviously > the problem when it wasn't. My fault also for not at least throwing my Qemu test code out there. I've been busy with some things today. I'll try and get those changes cleaned up and at least another RFC set out ASAP. > > > > > Ira > > > > > > +} __packed; > > > > +#define CXL_CLEAR_EVENT_LIMIT_HANDLES(payload_size) \ > > > > + (((payload_size) - \ > > > > + (sizeof(struct cxl_mbox_clear_event_payload) - \ > > > > + (sizeof(__le16) * CXL_CLEAR_EVENT_MAX_HANDLES))) / \ > > Could use offsetof() to simplify this True. How about I submit a clean up patch to follow? I don't think this is broken. Ira > > > > > + sizeof(__le16)) > > > > + > > > > > > ... > > > >
On Fri, Dec 16, 2022 at 01:54:01PM -0800, Ira Weiny (ira.weiny@intel.com) wrote: > On Fri, Dec 16, 2022 at 03:39:39PM +0000, Jonathan Cameron wrote: > > On Sun, 11 Dec 2022 23:06:20 -0800 > > ira.weiny@intel.com wrote: > > > > > From: Ira Weiny <ira.weiny@intel.com> > > > > > > CXL devices have multiple event logs which can be queried for CXL event > > > records. Devices are required to support the storage of at least one > > > event record in each event log type. > > > > > > Devices track event log overflow by incrementing a counter and tracking > > > the time of the first and last overflow event seen. > > > > > > Software queries events via the Get Event Record mailbox command; CXL > > > rev 3.0 section 8.2.9.2.2 and clears events via CXL rev 3.0 section > > > 8.2.9.2.3 Clear Event Records mailbox command. > > > > > > If the result of negotiating CXL Error Reporting Control is OS control, > > > read and clear all event logs on driver load. > > > > > > Ensure a clean slate of events by reading and clearing the events on > > > driver load. > > > > > > The status register is not used because a device may continue to trigger > > > events and the only requirement is to empty the log at least once. This > > > allows for the required transition from empty to non-empty for interrupt > > > generation. Handling of interrupts is in a follow on patch. > > > > > > The device can return up to 1MB worth of event records per query. > > > Allocate a shared large buffer to handle the max number of records based > > > on the mailbox payload size. > > > > > > This patch traces a raw event record and leaves specific event record > > > type tracing to subsequent patches. Macros are created to aid in > > > tracing the common CXL Event header fields. > > > > > > Each record is cleared explicitly. A clear all bit is specified but is > > > only valid when the log overflows. > > > > > > Signed-off-by: Ira Weiny <ira.weiny@intel.com> > > > > A few things noticed inline. 
I've tightened the QEMU code to reject the > > case of the input payload claims to be bigger than the mailbox size > > and hacked the size down to 256 bytes so it triggers the problem > > highlighted below. > > I'm not sure what you did here. > > > > > > > > > --- > > > Changes from V3: > > > Dan > > > Split off _OSC pcie bits > > > Use existing style for host bridge flag in that > > > patch > > > Clean up event processing loop > > > Use dev_err_ratelimited() > > > Clean up version change log > > > Delete 'EVENT LOG OVERFLOW' > > > Remove cxl_clear_event_logs() > > > Add comment for native cxl control > > > Fail driver load on event buf allocation failure > > > Comment why events are not processed without _OSC flag > > > --- > > > drivers/cxl/core/mbox.c | 136 +++++++++++++++++++++++++++++++++++++++ > > > drivers/cxl/core/trace.h | 120 ++++++++++++++++++++++++++++++++++ > > > drivers/cxl/cxl.h | 12 ++++ > > > drivers/cxl/cxlmem.h | 84 ++++++++++++++++++++++++ > > > drivers/cxl/pci.c | 40 ++++++++++++ > > > 5 files changed, 392 insertions(+) > > > > > > diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c > > > index b03fba212799..9fb327370e08 100644 > > > --- a/drivers/cxl/core/mbox.c > > > +++ b/drivers/cxl/core/mbox.c > > > > > +static int cxl_clear_event_record(struct cxl_dev_state *cxlds, > > > + enum cxl_event_log_type log, > > > + struct cxl_get_event_payload *get_pl) > > > +{ > > > + struct cxl_mbox_clear_event_payload payload = { > > > + .event_log = log, > > > + }; > > > + u16 total = le16_to_cpu(get_pl->record_count); > > > + u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES; > > > + size_t pl_size = sizeof(payload); > > > + struct cxl_mbox_cmd mbox_cmd; > > > + u16 cnt; > > > + int rc; > > > + int i; > > > + > > > + /* Payload size may limit the max handles */ > > > + if (pl_size > cxlds->payload_size) { > > > + max_handles = CXL_CLEAR_EVENT_LIMIT_HANDLES(cxlds->payload_size); > > > + pl_size = cxlds->payload_size; > > pl_size is only the max size 
possible if that size was smaller than the size of > the record [sizeof(payload) above]. > > > > + } > > > + > > > + mbox_cmd = (struct cxl_mbox_cmd) { > > > + .opcode = CXL_MBOX_OP_CLEAR_EVENT_RECORD, > > > + .payload_in = &payload, > > > + .size_in = pl_size, > > > > This payload size should be whatever we need to store the records, > > not the max size possible. Particularly as that size is currently > > bigger than the mailbox might be. > > But the above check and set ensures that does not happen. > > > > > It shouldn't fail (I think) simply because a later version of the spec might > > add more to this message and things should still work, but definitely not > > good practice to tell the hardware this is much longer than it actually is. > > I don't follow. > > The full payload is going to be sent even if we are just clearing 1 record > which is inefficient but it should never overflow the hardware because it is > limited by the check above. > > So why would this be a problem? > Per spec 3.0, the Event Record Handles field is "A list of Event Record Handles the host has consumed and the device shall now remove from its internal Event Log store.". An extra unused handle list does not follow that description. The spec also says "All event record handles shall be nonzero value. A value of 0 shall be treated by the device as an invalid handle.". So if there is a value of 0 in the extra unused handles, the device shall return an invalid handle error code. > > > > > > > > + }; > > > > + > > > > + /* > > > > + * Clear Event Records uses u8 for the handle cnt while Get Event > > > > + * Record can return up to 0xffff records.
> > > + */ > > > + i = 0; > > > + for (cnt = 0; cnt < total; cnt++) { > > > + payload.handle[i++] = get_pl->records[cnt].hdr.handle; > > > + dev_dbg(cxlds->dev, "Event log '%d': Clearing %u\n", > > > + log, le16_to_cpu(payload.handle[i])); > > > + > > > + if (i == max_handles) { > > > + payload.nr_recs = i; > > > + rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); > > > + if (rc) > > > + return rc; > > > + i = 0; > > > + } > > > + } > > > + > > > + /* Clear what is left if any */ > > > + if (i) { > > > + payload.nr_recs = i; > > > + rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); > > > + if (rc) > > > + return rc; > > > + } > > > + > > > + return 0; > > > +} > > > > > > ... > > > > > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h > > > index ab138004f644..dd9aa3dd738e 100644 > > > --- a/drivers/cxl/cxlmem.h > > > +++ b/drivers/cxl/cxlmem.h > > > > ... > > > > > + > > > +/* > > > + * Clear Event Records input payload > > > + * CXL rev 3.0 section 8.2.9.2.3; Table 8-51 > > > + */ > > > +#define CXL_CLEAR_EVENT_MAX_HANDLES (0xff) > > > +struct cxl_mbox_clear_event_payload { > > > + u8 event_log; /* enum cxl_event_log_type */ > > > + u8 clear_flags; > > > + u8 nr_recs; > > > + u8 reserved[3]; > > > + __le16 handle[CXL_CLEAR_EVENT_MAX_HANDLES]; > > > > Doesn't fit in the smallest possible payload buffer. > > It's 526 bytes long. Payload buffer might be 256 bytes in total. > > (8.2.8.4.3 Mailbox capabilities) > > > > Lazy approach, make this smaller and do more loops when clearing. > > If we want to optimize this later can expand it to this size. > > I agree but the code already checks for and adjusts this on the fly based on > cxlds->payload_size? > > + /* Payload size may limit the max handles */ > + if (pl_size > cxlds->payload_size) { > + max_handles = CXL_CLEAR_EVENT_LIMIT_HANDLES(cxlds->payload_size); > + pl_size = cxlds->payload_size; > + } > > Why is this not ok? [Other than being potentially inefficient.] > > Do you have a patch to qemu which causes this? 
> > Ira > > > > +} __packed; > > > +#define CXL_CLEAR_EVENT_LIMIT_HANDLES(payload_size) \ > > > + (((payload_size) - \ > > > + (sizeof(struct cxl_mbox_clear_event_payload) - \ > > > + (sizeof(__le16) * CXL_CLEAR_EVENT_MAX_HANDLES))) / \ > > > + sizeof(__le16)) > > > + > > > > ... > > >
On Sat, 17 Dec 2022 16:21:05 -0800 Ira Weiny <ira.weiny@intel.com> wrote: > On Sat, Dec 17, 2022 at 04:38:50PM +0000, Jonathan Cameron wrote: > > On Fri, 16 Dec 2022 13:54:01 -0800 > > Ira Weiny <ira.weiny@intel.com> wrote: > > > > > On Fri, Dec 16, 2022 at 03:39:39PM +0000, Jonathan Cameron wrote: > > > > On Sun, 11 Dec 2022 23:06:20 -0800 > > > > ira.weiny@intel.com wrote: > > > > > > > > > From: Ira Weiny <ira.weiny@intel.com> > > > > > > > > > > CXL devices have multiple event logs which can be queried for CXL event > > > > > records. Devices are required to support the storage of at least one > > > > > event record in each event log type. > > > > > > > > > > Devices track event log overflow by incrementing a counter and tracking > > > > > the time of the first and last overflow event seen. > > > > > > > > > > Software queries events via the Get Event Record mailbox command; CXL > > > > > rev 3.0 section 8.2.9.2.2 and clears events via CXL rev 3.0 section > > > > > 8.2.9.2.3 Clear Event Records mailbox command. > > > > > > > > > > If the result of negotiating CXL Error Reporting Control is OS control, > > > > > read and clear all event logs on driver load. > > > > > > > > > > Ensure a clean slate of events by reading and clearing the events on > > > > > driver load. > > > > > > > > > > The status register is not used because a device may continue to trigger > > > > > events and the only requirement is to empty the log at least once. This > > > > > allows for the required transition from empty to non-empty for interrupt > > > > > generation. Handling of interrupts is in a follow on patch. > > > > > > > > > > The device can return up to 1MB worth of event records per query. > > > > > Allocate a shared large buffer to handle the max number of records based > > > > > on the mailbox payload size. > > > > > > > > > > This patch traces a raw event record and leaves specific event record > > > > > type tracing to subsequent patches. 
Macros are created to aid in > > > > > tracing the common CXL Event header fields. > > > > > > > > > > Each record is cleared explicitly. A clear all bit is specified but is > > > > > only valid when the log overflows. > > > > > > > > > > Signed-off-by: Ira Weiny <ira.weiny@intel.com> > > > > > > > > A few things noticed inline. I've tightened the QEMU code to reject the > > > > case of the input payload claims to be bigger than the mailbox size > > > > and hacked the size down to 256 bytes so it triggers the problem > > > > highlighted below. > > > > > > I'm not sure what you did here. > > > > Nor am I. I think this might have been a case of chasing the undersized > > length bug in QEMU because it was the CXL 3.0 issue and misunderstanding > > one of the debug prints I got. > > > > Friday silliness. Sorry about that! > > NP but you did have me going. I've vowed to actually understand the spec > better going forward! :-D > > > > > However, the over sized payload communicated to the hardware is still > > a potential problem. See below. > > I don't see where there is an oversized payload used either... We write a payload size into the mailbox register that includes much more than the payload we are sending. 
> > > > > > > > > > > > > > > > > > > > > --- > > > > > Changes from V3: > > > > > Dan > > > > > Split off _OSC pcie bits > > > > > Use existing style for host bridge flag in that > > > > > patch > > > > > Clean up event processing loop > > > > > Use dev_err_ratelimited() > > > > > Clean up version change log > > > > > Delete 'EVENT LOG OVERFLOW' > > > > > Remove cxl_clear_event_logs() > > > > > Add comment for native cxl control > > > > > Fail driver load on event buf allocation failure > > > > > Comment why events are not processed without _OSC flag > > > > > --- > > > > > drivers/cxl/core/mbox.c | 136 +++++++++++++++++++++++++++++++++++++++ > > > > > drivers/cxl/core/trace.h | 120 ++++++++++++++++++++++++++++++++++ > > > > > drivers/cxl/cxl.h | 12 ++++ > > > > > drivers/cxl/cxlmem.h | 84 ++++++++++++++++++++++++ > > > > > drivers/cxl/pci.c | 40 ++++++++++++ > > > > > 5 files changed, 392 insertions(+) > > > > > > > > > > diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c > > > > > index b03fba212799..9fb327370e08 100644 > > > > > --- a/drivers/cxl/core/mbox.c > > > > > +++ b/drivers/cxl/core/mbox.c > > > > > > > > > +static int cxl_clear_event_record(struct cxl_dev_state *cxlds, + enum cxl_event_log_type log, + struct cxl_get_event_payload *get_pl) > > > > > +{ > > > > > + struct cxl_mbox_clear_event_payload payload = { > > > > > + .event_log = log, > > > > > + }; > > > > > + u16 total = le16_to_cpu(get_pl->record_count); > > > > > + u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES; > > > > > + size_t pl_size = sizeof(payload); > > This line ensures the payload is only ever the size of the definition per the > 3.0 spec. It doesn't (see later). It ensures it is the size of the maximum payload. The size should be 6 + 2 * total (truncated as necessary) as that's the actual payload size.
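[Editor's sketch] Jonathan's point about the actual payload length can be modelled in plain C. This is illustrative, not the driver code: the structure mirrors the header layout in the patch, and the helper computes the 6-byte header plus two bytes per handle actually being cleared.

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Illustrative model of the Clear Event Records input payload header
 * (CXL 3.0 Table 8-51). Layout mirrors the patch; names are stand-ins
 * for the kernel structure, not the real thing.
 */
struct clear_event_hdr {
	uint8_t event_log;	/* enum cxl_event_log_type */
	uint8_t clear_flags;
	uint8_t nr_recs;
	uint8_t reserved[3];
	/* __le16 handle[] follows the 6-byte header */
};

/* The size Jonathan argues for: header + 2 bytes per handle sent. */
static inline size_t clear_payload_len(unsigned int nr_recs)
{
	return sizeof(struct clear_event_hdr) + nr_recs * sizeof(uint16_t);
}
```

Clearing a single record then takes 8 bytes (matching the "8+" input payload size in Table 8-36), while the full 0xff-handle payload is 6 + 510 = 516 bytes — larger than a minimal 256-byte mailbox, which is the problem the thread circles around.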
> > > > > > + struct cxl_mbox_cmd mbox_cmd; > > > > > + u16 cnt; > > > > > + int rc; > > > > > + int i; > > > > > + > > > > > + /* Payload size may limit the max handles */ > > > > > + if (pl_size > cxlds->payload_size) { > > > > > + max_handles = CXL_CLEAR_EVENT_LIMIT_HANDLES(cxlds->payload_size); > > > > Definition of that is more complex than it needs to be - see below. > > Then this ensures it is truncated if needed. > > > > > > > > + pl_size = cxlds->payload_size; > > > > > > pl_size is only the max size possible if that size was smaller than the size of > > > the record [sizeof(payload) above]. > > > > Sorry. For some reason my eyes skipped over this completely. > > So we are fine for all my comments on overflowing. On plus side > > will now check if that happens in QEMU and return an error which we > > weren't doing before. > > > > > > > > > > + } > > > > > + > > > > > + mbox_cmd = (struct cxl_mbox_cmd) { > > > > > + .opcode = CXL_MBOX_OP_CLEAR_EVENT_RECORD, > > > > > + .payload_in = &payload, > > > > > + .size_in = pl_size, > > > > > > > > This payload size should be whatever we need to store the records, > > > > not the max size possible. Particularly as that size is currently > > > > bigger than the mailbox might be. > > > > > > But the above check and set ensures that does not happen. > > > > > > > > > > > It shouldn't fail (I think) simply because a later version of the spec might > > > > add more to this message and things should still work, but definitely not > > > > good practice to tell the hardware this is much longer than it actually is. > > > > > > I don't follow. > > > > > > The full payload is going to be sent even if we are just clearing 1 record > > > which is inefficient but it should never overflow the hardware because it is > > > limited by the check above. > > > > > > So why would this be a problem? > > I'm struggling to find a clear spec statement on if this allowed, so the following > > is a thought experiment. 
There is language in definition of the "invalid payload length" > > error code "The input payload length is not valid for the specified command", but it > > doesn't go into what counts as valid. > > I think the only thing which makes sense is if the payload length is smaller > than: > > Header + nr_recs * 2 > > Anything up to > > header + (0xff * 2) should be fine per the 3.0 spec. nr_recs is in the structure, so it could check that (the whole argument about future specs is to say that the device shouldn't enforce that limit even though it can know the structure is longer than expected). > > > > > What you have looks fine because a device can't fail on the basis it's told the > > payload is longer than it expects, because you might be sending a CXL 4.0 spec > > payload that is backwards compatible with CXL 3.0 - hence the fact the sizes > > don't match up with that expected can't be considered an error. > > So far so good... However, we may have a situation not dissimilar to the > > change in record length for the set event interrupt policy payload between CXL 2.0 > > and CXL 3.0. The only way the endpoint knows what version of message it got is because the > > record is 4 bytes or 5 bytes. If we have extra stuff on the end of this record > > in future the end point can assume that it is a new version of the spec and interpret > > what is in that payload space. > > > > Say the future structure looks like > > > > struct cxl_mbox_clear_event_payload_future { > > u8 event_log; /* enum cxl_event_log_type */ > > u8 clear_flags; > > u8 nr_recs; > > u8 reserved[3]; > > __le16 handle[nr_recs]; > > __le16 otherdata[nr_recs]; > > otherdata should be ignored by a 3.0 device. > > A theoretical 4.0 device should handle otherdata not being there per some flag > in the flags field I would suppose... That would have to be determined if this > payload were extended. Otherwise this software will fail no matter what. A flag isn't required, though obviously nice to have.
Length should be enough. We already have a version of this with you sending a CXL 2.0 command to a potentially CXL 3.0 device and not checking if DCD is supported before doing so (which I think is fine). > > Other mailbox commands do not 0 out from the command size to 1M either. > > > } > > > > Endpoint receiving your 'overly long payload' will assume all those otherdata fields > > are 0, not necessarily the same as non present. > > But it is not 'overly long'. It is only the length of the current spec. See > above. This is where we disagree. The current spec says: Table 8-36: Input payload size for Clear Event Records is 8+ (the size for when you are clearing one record) - Side note this is potentially wrong as a clear all wouldn't have event record handles, so I think it should be 6+. Table 8-51 has * Number of event record handles * Event record handles and no defined reserved space after that Event Record handles. Hence the size is precisely Header + nr_records * 2, not more. Not directly relevant but any record handles that are 0 are treated as invalid handles (error returned). (this also aligns with what Jonny raised) > > > For the set event interrupt policy, if we sent an overlong payload like you've done here > > with assumption of the CXL 2.0 spec we would be turning off the DCD interrupt rather > > that doing nothing (unlikely to be a problem in that particularly case as that one > > doesn't have a FW Interrupt option - but that's more luck than design). > > > > I'm not sure why we'd have extra stuff for this payload, but it 'might' happen'. > > I'll have to check but I don't think I set the payload long in that message. > It too should be sizeof(<set event int policy>) You haven't. the illustration there was about the fact that it is 4 bytes in your implementation (which is fine as that is the CXL 2.0 message) and 5 bytes in CXL 3.0 which you'll upgrade to when you add DCD support. 
> > > > > > > > > > > > > > > > > > > > > > > + }; > > > > > + > > > > > + /* > > > > > + * Clear Event Records uses u8 for the handle cnt while Get Event > > > > > + * Record can return up to 0xffff records. > > > > > + */ > > > > > + i = 0; > > > > > + for (cnt = 0; cnt < total; cnt++) { > > > > > + payload.handle[i++] = get_pl->records[cnt].hdr.handle; > > > > > + dev_dbg(cxlds->dev, "Event log '%d': Clearing %u\n", > > > > > + log, le16_to_cpu(payload.handle[i])); > > > > > + > > > > > + if (i == max_handles) { > > > > > + payload.nr_recs = i; > > > > > + rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); > > > > > + if (rc) > > > > > + return rc; > > > > > + i = 0; > > > > > + } > > > > > + } > > > > > + > > > > > + /* Clear what is left if any */ > > > > > + if (i) { > > > > > + payload.nr_recs = i; > > > > > + rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); > > > > > + if (rc) > > > > > + return rc; > > > > > + } > > > > > + > > > > > + return 0; > > > > > +} > > > > > > > > > > > > ... > > > > > > > > > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h > > > > > index ab138004f644..dd9aa3dd738e 100644 > > > > > --- a/drivers/cxl/cxlmem.h > > > > > +++ b/drivers/cxl/cxlmem.h > > > > > > > > ... > > > > > > > > > + > > > > > +/* > > > > > + * Clear Event Records input payload > > > > > + * CXL rev 3.0 section 8.2.9.2.3; Table 8-51 > > > > > + */ > > > > > +#define CXL_CLEAR_EVENT_MAX_HANDLES (0xff) > > > > > +struct cxl_mbox_clear_event_payload { > > > > > + u8 event_log; /* enum cxl_event_log_type */ > > > > > + u8 clear_flags; > > > > > + u8 nr_recs; > > > > > + u8 reserved[3]; > > > > > + __le16 handle[CXL_CLEAR_EVENT_MAX_HANDLES]; > > > > > > > > Doesn't fit in the smallest possible payload buffer. > > > > It's 526 bytes long. Payload buffer might be 256 bytes in total. > > > > (8.2.8.4.3 Mailbox capabilities) > > > > > > > > Lazy approach, make this smaller and do more loops when clearing. 
> > > > If we want to optimize this later can expand it to this size. > > > > > > I agree but the code already checks for and adjusts this on the fly based on > > > cxlds->payload_size? > > > > > > + /* Payload size may limit the max handles */ > > > + if (pl_size > cxlds->payload_size) { > > > + max_handles = CXL_CLEAR_EVENT_LIMIT_HANDLES(cxlds->payload_size); > > > + pl_size = cxlds->payload_size; > > > + } > > > > > > Why is this not ok? [Other than being potentially inefficient.] > > > > > > Do you have a patch to qemu which causes this? > > > > Two issues crossing I think on my side and me thinking this one was obviously > > the problem when it wasn't. > > My fault also for not at least throwing my Qemu test code out there. I've been > busy with some things today. I'll try and get those changes cleaned up and at > least another RFC set out ASAP. > > > > > > > > > Ira > > > > > > > > +} __packed; > > > > > +#define CXL_CLEAR_EVENT_LIMIT_HANDLES(payload_size) \ > > > > > + (((payload_size) - \ > > > > > + (sizeof(struct cxl_mbox_clear_event_payload) - \ > > > > > + (sizeof(__le16) * CXL_CLEAR_EVENT_MAX_HANDLES))) / \ > > > > Could use offsetof() to simplify this > > True. How about I submit a clean up patch to follow? I don't think this is > broken. I think you'll be changing the patch anyway - so might as well fix this up too :) Jonathan > > Ira > > > > > > > > + sizeof(__le16)) > > > > > + > > > > > > > > ... > > > > > >
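[Editor's sketch] The offsetof() cleanup Jonathan suggests can be sketched as below. The structure and limit mirror the patch, but the macro name and userspace types are illustrative only. With offsetof() the fixed header length falls out directly, instead of being derived by subtracting the full handle array from sizeof() of the whole structure.

```c
#include <stddef.h>
#include <stdint.h>

#define CLEAR_EVENT_MAX_HANDLES 0xff

/* Userspace stand-in for struct cxl_mbox_clear_event_payload. */
struct clear_event_payload {
	uint8_t event_log;
	uint8_t clear_flags;
	uint8_t nr_recs;
	uint8_t reserved[3];
	uint16_t handle[CLEAR_EVENT_MAX_HANDLES];	/* __le16 in the driver */
};

/*
 * How many handles fit in a given mailbox payload size: subtract the
 * fixed header (the offset of the handle array) and divide by the
 * per-handle size.
 */
#define CLEAR_EVENT_LIMIT_HANDLES(payload_size)				\
	(((payload_size) - offsetof(struct clear_event_payload, handle)) / \
	 sizeof(uint16_t))
```

A 256-byte mailbox then limits a batch to (256 - 6) / 2 = 125 handles, the same result the original sizeof()-based expression produces, just with less arithmetic to read.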
On Sun, 18 Dec 2022 08:25:34 +0800 johnny <johnny.li@montage-tech.com> wrote: > On Fri, Dec 16, 2022 at 01:54:01PM -0800, Ira Weiny (ira.weiny@intel.com) wrote: > > On Fri, Dec 16, 2022 at 03:39:39PM +0000, Jonathan Cameron wrote: > > > On Sun, 11 Dec 2022 23:06:20 -0800 > > > ira.weiny@intel.com wrote: > > > > > > > From: Ira Weiny <ira.weiny@intel.com> > > > > > > > > CXL devices have multiple event logs which can be queried for CXL event > > > > records. Devices are required to support the storage of at least one > > > > event record in each event log type. > > > > > > > > Devices track event log overflow by incrementing a counter and tracking > > > > the time of the first and last overflow event seen. > > > > > > > > Software queries events via the Get Event Record mailbox command; CXL > > > > rev 3.0 section 8.2.9.2.2 and clears events via CXL rev 3.0 section > > > > 8.2.9.2.3 Clear Event Records mailbox command. > > > > > > > > If the result of negotiating CXL Error Reporting Control is OS control, > > > > read and clear all event logs on driver load. > > > > > > > > Ensure a clean slate of events by reading and clearing the events on > > > > driver load. > > > > > > > > The status register is not used because a device may continue to trigger > > > > events and the only requirement is to empty the log at least once. This > > > > allows for the required transition from empty to non-empty for interrupt > > > > generation. Handling of interrupts is in a follow on patch. > > > > > > > > The device can return up to 1MB worth of event records per query. > > > > Allocate a shared large buffer to handle the max number of records based > > > > on the mailbox payload size. > > > > > > > > This patch traces a raw event record and leaves specific event record > > > > type tracing to subsequent patches. Macros are created to aid in > > > > tracing the common CXL Event header fields. > > > > > > > > Each record is cleared explicitly. 
A clear all bit is specified but is > > > > only valid when the log overflows. > > > > > > > > Signed-off-by: Ira Weiny <ira.weiny@intel.com> > > > > > > A few things noticed inline. I've tightened the QEMU code to reject the > > > case of the input payload claims to be bigger than the mailbox size > > > and hacked the size down to 256 bytes so it triggers the problem > > > highlighted below. > > > > I'm not sure what you did here. > > > > > > > > > > > > > --- > > > > Changes from V3: > > > > Dan > > > > Split off _OSC pcie bits > > > > Use existing style for host bridge flag in that > > > > patch > > > > Clean up event processing loop > > > > Use dev_err_ratelimited() > > > > Clean up version change log > > > > Delete 'EVENT LOG OVERFLOW' > > > > Remove cxl_clear_event_logs() > > > > Add comment for native cxl control > > > > Fail driver load on event buf allocation failure > > > > Comment why events are not processed without _OSC flag > > > > --- > > > > drivers/cxl/core/mbox.c | 136 +++++++++++++++++++++++++++++++++++++++ > > > > drivers/cxl/core/trace.h | 120 ++++++++++++++++++++++++++++++++++ > > > > drivers/cxl/cxl.h | 12 ++++ > > > > drivers/cxl/cxlmem.h | 84 ++++++++++++++++++++++++ > > > > drivers/cxl/pci.c | 40 ++++++++++++ > > > > 5 files changed, 392 insertions(+) > > > > > > > > diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c > > > > index b03fba212799..9fb327370e08 100644 > > > > --- a/drivers/cxl/core/mbox.c > > > > +++ b/drivers/cxl/core/mbox.c > > > > > > > +static int cxl_clear_event_record(struct cxl_dev_state *cxlds, > > > > + enum cxl_event_log_type log, > > > > + struct cxl_get_event_payload *get_pl) > > > > +{ > > > > + struct cxl_mbox_clear_event_payload payload = { > > > > + .event_log = log, > > > > + }; > > > > + u16 total = le16_to_cpu(get_pl->record_count); > > > > + u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES; > > > > + size_t pl_size = sizeof(payload); > > > > + struct cxl_mbox_cmd mbox_cmd; > > > > + u16 cnt; > > > > 
+ int rc; > > > > + int i; > > > > + > > > > + /* Payload size may limit the max handles */ > > > > + if (pl_size > cxlds->payload_size) { > > > > + max_handles = CXL_CLEAR_EVENT_LIMIT_HANDLES(cxlds->payload_size); > > > > + pl_size = cxlds->payload_size; > > > > pl_size is only the max size possible if that size was smaller than the size of > > the record [sizeof(payload) above]. > > > > > > + } > > > > + > > > > + mbox_cmd = (struct cxl_mbox_cmd) { > > > > + .opcode = CXL_MBOX_OP_CLEAR_EVENT_RECORD, > > > > + .payload_in = &payload, > > > > + .size_in = pl_size, > > > > > > This payload size should be whatever we need to store the records, > > > not the max size possible. Particularly as that size is currently > > > bigger than the mailbox might be. > > > > But the above check and set ensures that does not happen. > > > > > > > > It shouldn't fail (I think) simply because a later version of the spec might > > > add more to this message and things should still work, but definitely not > > > good practice to tell the hardware this is much longer than it actually is. > > > > I don't follow. > > > > The full payload is going to be sent even if we are just clearing 1 record > > which is inefficient but it should never overflow the hardware because it is > > limited by the check above. > > > > So why would this be a problem? > > > > per spec3.0, Event Record Handles field is "A list of Event Record Handles the > host has consumed and the device shall now remove from its internal Event Log > store.". Extra unused handle list does not folow above description. And also > spec mentions "All event record handles shall be nonzero value. A value of 0 > shall be treated by the device as an invalid handle.". So if there is value 0 > in extra unused handles, device shall return invalid handle error code I don't think we call into that particular corner as the number of event record handles is set correctly. 
Otherwise I agree this isn't following the spec - though I think the key here is that it won't be broken against CXL 3.0 devices (with that rather roundabout argument that a CXL 3.0 device should handle later spec messages as those should be backwards compatible) but it might be broken against CXL 3.0+ ones that interpret the 0s at the end as having meaning. Thanks, Jonathan > > > > > > > > > > > > + }; > > > > + > > > > + /* > > > > + * Clear Event Records uses u8 for the handle cnt while Get Event > > > > + * Record can return up to 0xffff records. > > > > + */ > > > > + i = 0; > > > > + for (cnt = 0; cnt < total; cnt++) { > > > > + payload.handle[i++] = get_pl->records[cnt].hdr.handle; > > > > + dev_dbg(cxlds->dev, "Event log '%d': Clearing %u\n", > > > > + log, le16_to_cpu(payload.handle[i])); > > > > + > > > > + if (i == max_handles) { > > > > + payload.nr_recs = i; > > > > + rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); > > > > + if (rc) > > > > + return rc; > > > > + i = 0; > > > > + } > > > > + } > > > > + > > > > + /* Clear what is left if any */ > > > > + if (i) { > > > > + payload.nr_recs = i; > > > > + rc = cxl_internal_send_cmd(cxlds, &mbox_cmd); > > > > + if (rc) > > > > + return rc; > > > > + } > > > > + > > > > + return 0; > > > > +} > > > > > > > > > ... > > > > > > > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h > > > > index ab138004f644..dd9aa3dd738e 100644 > > > > --- a/drivers/cxl/cxlmem.h > > > > +++ b/drivers/cxl/cxlmem.h > > > > > > ... > > > > > > > + > > > > +/* > > > > + * Clear Event Records input payload > > > > + * CXL rev 3.0 section 8.2.9.2.3; Table 8-51 > > > > + */ > > > > +#define CXL_CLEAR_EVENT_MAX_HANDLES (0xff) > > > > +struct cxl_mbox_clear_event_payload { > > > > + u8 event_log; /* enum cxl_event_log_type */ > > > > + u8 clear_flags; > > > > + u8 nr_recs; > > > > + u8 reserved[3]; > > > > + __le16 handle[CXL_CLEAR_EVENT_MAX_HANDLES]; > > > > > > Doesn't fit in the smallest possible payload buffer.
> > > It's 526 bytes long. Payload buffer might be 256 bytes in total. > > > (8.2.8.4.3 Mailbox capabilities) > > > > > > Lazy approach, make this smaller and do more loops when clearing. > > > If we want to optimize this later can expand it to this size. > > > > I agree but the code already checks for and adjusts this on the fly based on > > cxlds->payload_size? > > > > + /* Payload size may limit the max handles */ > > + if (pl_size > cxlds->payload_size) { > > + max_handles = CXL_CLEAR_EVENT_LIMIT_HANDLES(cxlds->payload_size); > > + pl_size = cxlds->payload_size; > > + } > > > > Why is this not ok? [Other than being potentially inefficient.] > > > > Do you have a patch to qemu which causes this? > > > > Ira > > > > > > +} __packed; > > > > +#define CXL_CLEAR_EVENT_LIMIT_HANDLES(payload_size) \ > > > > + (((payload_size) - \ > > > > + (sizeof(struct cxl_mbox_clear_event_payload) - \ > > > > + (sizeof(__le16) * CXL_CLEAR_EVENT_MAX_HANDLES))) / \ > > > > + sizeof(__le16)) > > > > + > > > > > > ... > > > > > >
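[Editor's sketch] The spec language johnny quotes — a handle of 0 is invalid — combined with Jonathan's observation that nr_recs is set correctly can be modelled with a hypothetical device-side check (not QEMU or kernel code): a conforming device only walks the first nr_recs entries, so zero padding past that count is never inspected.

```c
#include <stdint.h>

/*
 * Hypothetical device-side validation of a Clear Event Records handle
 * list: only the first nr_recs handles are examined, and a handle of 0
 * inside that range draws an "invalid handle" error.
 */
static int device_check_handles(const uint16_t *handle, uint8_t nr_recs)
{
	for (unsigned int n = 0; n < nr_recs; n++) {
		if (handle[n] == 0)
			return -1;	/* invalid handle error code */
	}
	return 0;	/* padding past nr_recs is never looked at */
}
```

With handles {1, 2, 0, 0} and nr_recs = 2 the trailing zero padding passes, while nr_recs = 3 rejects the zero entry — exactly the distinction the thread turns on.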
On Sun, Dec 18, 2022 at 03:55:53PM +0000, Jonathan Cameron wrote:
> On Sun, 18 Dec 2022 08:25:34 +0800
> johnny <johnny.li@montage-tech.com> wrote:
> 

[snip]

> > >
> > > > > +	}
> > > > > +
> > > > > +	mbox_cmd = (struct cxl_mbox_cmd) {
> > > > > +		.opcode = CXL_MBOX_OP_CLEAR_EVENT_RECORD,
> > > > > +		.payload_in = &payload,
> > > > > +		.size_in = pl_size,
> > > >
> > > > This payload size should be whatever we need to store the records,
> > > > not the max size possible.  Particularly as that size is currently
> > > > bigger than the mailbox might be.
> > >
> > > But the above check and set ensures that does not happen.
> > >
> > > > It shouldn't fail (I think) simply because a later version of the spec might
> > > > add more to this message and things should still work, but definitely not
> > > > good practice to tell the hardware this is much longer than it actually is.
> > >
> > > I don't follow.
> > >
> > > The full payload is going to be sent even if we are just clearing 1 record
> > > which is inefficient but it should never overflow the hardware because it is
> > > limited by the check above.
> > >
> > > So why would this be a problem?
> > >
> >
> > per spec 3.0, the Event Record Handles field is "A list of Event Record
> > Handles the host has consumed and the device shall now remove from its
> > internal Event Log store.".  An extra unused handle list does not follow
> > the above description.  The spec also mentions "All event record handles
> > shall be nonzero value. A value of 0 shall be treated by the device as an
> > invalid handle.".  So if there is a value of 0 in the extra unused
> > handles, the device shall return an invalid handle error code.
>
> I don't think we fall into that particular corner as the number of event
> record handles is set correctly.  Otherwise I agree this isn't following
> the spec - though I think the key here is that it won't be broken against
> CXL 3.0 devices (with the rather roundabout argument that a CXL 3.0 device
> should handle later spec messages as those should be backwards compatible)
> but it might be broken against CXL 3.0+ ones that interpret the 0s at the
> end as having meaning.

I'm respinning this to add the pci_set_master() anyway.  So I'm going to
change this as well.

I really don't see how hardware would go off anything but the number of
records to process the handles.  I could see some overly strict firmware
wanting to validate the size being exactly equal to the number specified
rather than just less than (which is what I would anticipate an issue
with).

Dan has agreed to land the 'move the trace point definitions to
drivers/cxl' patch I need in cxl/next.  After that I will rebase and send
out.

Ira

>
> Thanks,
>
> Jonathan
>
diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index b03fba212799..9fb327370e08 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -8,6 +8,7 @@
 #include <cxl.h>
 
 #include "core.h"
+#include "trace.h"
 
 static bool cxl_raw_allow_all;
@@ -717,6 +718,140 @@ int cxl_enumerate_cmds(struct cxl_dev_state *cxlds)
 }
 EXPORT_SYMBOL_NS_GPL(cxl_enumerate_cmds, CXL);
 
+static int cxl_clear_event_record(struct cxl_dev_state *cxlds,
+				  enum cxl_event_log_type log,
+				  struct cxl_get_event_payload *get_pl)
+{
+	struct cxl_mbox_clear_event_payload payload = {
+		.event_log = log,
+	};
+	u16 total = le16_to_cpu(get_pl->record_count);
+	u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES;
+	size_t pl_size = sizeof(payload);
+	struct cxl_mbox_cmd mbox_cmd;
+	u16 cnt;
+	int rc;
+	int i;
+
+	/* Payload size may limit the max handles */
+	if (pl_size > cxlds->payload_size) {
+		max_handles = CXL_CLEAR_EVENT_LIMIT_HANDLES(cxlds->payload_size);
+		pl_size = cxlds->payload_size;
+	}
+
+	mbox_cmd = (struct cxl_mbox_cmd) {
+		.opcode = CXL_MBOX_OP_CLEAR_EVENT_RECORD,
+		.payload_in = &payload,
+		.size_in = pl_size,
+	};
+
+	/*
+	 * Clear Event Records uses u8 for the handle cnt while Get Event
+	 * Record can return up to 0xffff records.
+	 */
+	i = 0;
+	for (cnt = 0; cnt < total; cnt++) {
+		payload.handle[i++] = get_pl->records[cnt].hdr.handle;
+		dev_dbg(cxlds->dev, "Event log '%d': Clearing %u\n",
+			log, le16_to_cpu(payload.handle[i - 1]));
+
+		if (i == max_handles) {
+			payload.nr_recs = i;
+			rc = cxl_internal_send_cmd(cxlds, &mbox_cmd);
+			if (rc)
+				return rc;
+			i = 0;
+		}
+	}
+
+	/* Clear what is left if any */
+	if (i) {
+		payload.nr_recs = i;
+		rc = cxl_internal_send_cmd(cxlds, &mbox_cmd);
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
+
+static void cxl_mem_get_records_log(struct cxl_dev_state *cxlds,
+				    enum cxl_event_log_type type)
+{
+	struct cxl_get_event_payload *payload;
+	struct cxl_mbox_cmd mbox_cmd;
+	u8 log_type = type;
+	u16 nr_rec;
+
+	mutex_lock(&cxlds->event.log_lock);
+	payload = cxlds->event.buf;
+
+	mbox_cmd = (struct cxl_mbox_cmd) {
+		.opcode = CXL_MBOX_OP_GET_EVENT_RECORD,
+		.payload_in = &log_type,
+		.size_in = sizeof(log_type),
+		.payload_out = payload,
+		.size_out = cxlds->payload_size,
+		.min_out = struct_size(payload, records, 0),
+	};
+
+	do {
+		int rc, i;
+
+		rc = cxl_internal_send_cmd(cxlds, &mbox_cmd);
+		if (rc) {
+			dev_err_ratelimited(cxlds->dev, "Event log '%d': Failed to query event records : %d",
+					    type, rc);
+			break;
+		}
+
+		nr_rec = le16_to_cpu(payload->record_count);
+		if (!nr_rec)
+			break;
+
+		for (i = 0; i < nr_rec; i++)
+			trace_cxl_generic_event(cxlds->dev, type,
+						&payload->records[i]);
+
+		if (payload->flags & CXL_GET_EVENT_FLAG_OVERFLOW)
+			trace_cxl_overflow(cxlds->dev, type, payload);
+
+		rc = cxl_clear_event_record(cxlds, type, payload);
+		if (rc) {
+			dev_err_ratelimited(cxlds->dev, "Event log '%d': Failed to clear events : %d",
+					    type, rc);
+			break;
+		}
+	} while (nr_rec);
+
+	mutex_unlock(&cxlds->event.log_lock);
+}
+
+/**
+ * cxl_mem_get_event_records - Get Event Records from the device
+ * @cxlds: The device data for the operation
+ * @status: Event Status register value identifying which events are available
+ *
+ * Retrieve all event records available on the device, report them as trace
+ * events, and clear them.
+ *
+ * See CXL rev 3.0 @8.2.9.2.2 Get Event Records
+ * See CXL rev 3.0 @8.2.9.2.3 Clear Event Records
+ */
+void cxl_mem_get_event_records(struct cxl_dev_state *cxlds, u32 status)
+{
+	dev_dbg(cxlds->dev, "Reading event logs: %x\n", status);
+
+	if (status & CXLDEV_EVENT_STATUS_FATAL)
+		cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_FATAL);
+	if (status & CXLDEV_EVENT_STATUS_FAIL)
+		cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_FAIL);
+	if (status & CXLDEV_EVENT_STATUS_WARN)
+		cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_WARN);
+	if (status & CXLDEV_EVENT_STATUS_INFO)
+		cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_INFO);
+}
+EXPORT_SYMBOL_NS_GPL(cxl_mem_get_event_records, CXL);
+
 /**
  * cxl_mem_get_partition_info - Get partition info
  * @cxlds: The device data for the operation
@@ -868,6 +1003,7 @@ struct cxl_dev_state *cxl_dev_state_create(struct device *dev)
 	}
 
 	mutex_init(&cxlds->mbox_mutex);
+	mutex_init(&cxlds->event.log_lock);
 	cxlds->dev = dev;
 
 	return cxlds;
diff --git a/drivers/cxl/core/trace.h b/drivers/cxl/core/trace.h
index 20ca2fe2ca8e..6898212fcb47 100644
--- a/drivers/cxl/core/trace.h
+++ b/drivers/cxl/core/trace.h
@@ -6,7 +6,9 @@
 #if !defined(_CXL_EVENTS_H) || defined(TRACE_HEADER_MULTI_READ)
 #define _CXL_EVENTS_H
 
+#include <asm-generic/unaligned.h>
 #include <cxl.h>
+#include <cxlmem.h>
 #include <linux/tracepoint.h>
 
 #define CXL_RAS_UC_CACHE_DATA_PARITY	BIT(0)
@@ -103,6 +105,124 @@ TRACE_EVENT(cxl_aer_correctable_error,
 	)
 );
 
+#include <linux/tracepoint.h>
+
+#define cxl_event_log_type_str(type)				\
+	__print_symbolic(type,					\
+		{ CXL_EVENT_TYPE_INFO, "Informational" },	\
+		{ CXL_EVENT_TYPE_WARN, "Warning" },		\
+		{ CXL_EVENT_TYPE_FAIL, "Failure" },		\
+		{ CXL_EVENT_TYPE_FATAL, "Fatal" })
+
+TRACE_EVENT(cxl_overflow,
+
+	TP_PROTO(const struct device *dev, enum cxl_event_log_type log,
+		 struct cxl_get_event_payload *payload),
+
+	TP_ARGS(dev, log, payload),
+
+	TP_STRUCT__entry(
+		__string(dev_name, dev_name(dev))
+		__field(int, log)
+		__field(u64, first_ts)
+		__field(u64, last_ts)
+		__field(u16, count)
+	),
+
+	TP_fast_assign(
+		__assign_str(dev_name, dev_name(dev));
+		__entry->log = log;
+		__entry->count = le16_to_cpu(payload->overflow_err_count);
+		__entry->first_ts = le64_to_cpu(payload->first_overflow_timestamp);
+		__entry->last_ts = le64_to_cpu(payload->last_overflow_timestamp);
+	),
+
+	TP_printk("%s: log=%s : %u records from %llu to %llu",
+		__get_str(dev_name), cxl_event_log_type_str(__entry->log),
+		__entry->count, __entry->first_ts, __entry->last_ts)
+
+);
+
+/*
+ * Common Event Record Format
+ * CXL 3.0 section 8.2.9.2.1; Table 8-42
+ */
+#define CXL_EVENT_RECORD_FLAG_PERMANENT		BIT(2)
+#define CXL_EVENT_RECORD_FLAG_MAINT_NEEDED	BIT(3)
+#define CXL_EVENT_RECORD_FLAG_PERF_DEGRADED	BIT(4)
+#define CXL_EVENT_RECORD_FLAG_HW_REPLACE	BIT(5)
+#define show_hdr_flags(flags)	__print_flags(flags, " | ",			   \
+	{ CXL_EVENT_RECORD_FLAG_PERMANENT,	"PERMANENT_CONDITION"		}, \
+	{ CXL_EVENT_RECORD_FLAG_MAINT_NEEDED,	"MAINTENANCE_NEEDED"		}, \
+	{ CXL_EVENT_RECORD_FLAG_PERF_DEGRADED,	"PERFORMANCE_DEGRADED"		}, \
+	{ CXL_EVENT_RECORD_FLAG_HW_REPLACE,	"HARDWARE_REPLACEMENT_NEEDED"	}  \
+)
+
+/*
+ * Define macros for the common header of each CXL event.
+ *
+ * Tracepoints using these macros must do 3 things:
+ *
+ *	1) Add CXL_EVT_TP_entry to TP_STRUCT__entry
+ *	2) Use CXL_EVT_TP_fast_assign within TP_fast_assign;
+ *	   pass the dev, log, and CXL event header
+ *	3) Use CXL_EVT_TP_printk() instead of TP_printk()
+ *
+ * See the generic_event tracepoint as an example.
+ */
+#define CXL_EVT_TP_entry					\
+	__string(dev_name, dev_name(dev))			\
+	__field(int, log)					\
+	__field_struct(uuid_t, hdr_uuid)			\
+	__field(u32, hdr_flags)					\
+	__field(u16, hdr_handle)				\
+	__field(u16, hdr_related_handle)			\
+	__field(u64, hdr_timestamp)				\
+	__field(u8, hdr_length)					\
+	__field(u8, hdr_maint_op_class)
+
+#define CXL_EVT_TP_fast_assign(dev, l, hdr)					\
+	__assign_str(dev_name, dev_name(dev));					\
+	__entry->log = (l);							\
+	memcpy(&__entry->hdr_uuid, &(hdr).id, sizeof(uuid_t));			\
+	__entry->hdr_length = (hdr).length;					\
+	__entry->hdr_flags = get_unaligned_le24((hdr).flags);			\
+	__entry->hdr_handle = le16_to_cpu((hdr).handle);			\
+	__entry->hdr_related_handle = le16_to_cpu((hdr).related_handle);	\
+	__entry->hdr_timestamp = le64_to_cpu((hdr).timestamp);			\
+	__entry->hdr_maint_op_class = (hdr).maint_op_class
+
+#define CXL_EVT_TP_printk(fmt, ...)						\
+	TP_printk("%s log=%s : time=%llu uuid=%pUb len=%d flags='%s' "		\
+		"handle=%x related_handle=%x maint_op_class=%u"			\
+		" : " fmt,							\
+		__get_str(dev_name), cxl_event_log_type_str(__entry->log),	\
+		__entry->hdr_timestamp, &__entry->hdr_uuid, __entry->hdr_length,\
+		show_hdr_flags(__entry->hdr_flags), __entry->hdr_handle,	\
+		__entry->hdr_related_handle, __entry->hdr_maint_op_class,	\
+		##__VA_ARGS__)
+
+TRACE_EVENT(cxl_generic_event,
+
+	TP_PROTO(const struct device *dev, enum cxl_event_log_type log,
+		 struct cxl_event_record_raw *rec),
+
+	TP_ARGS(dev, log, rec),
+
+	TP_STRUCT__entry(
+		CXL_EVT_TP_entry
+		__array(u8, data, CXL_EVENT_RECORD_DATA_LENGTH)
+	),
+
+	TP_fast_assign(
+		CXL_EVT_TP_fast_assign(dev, log, rec->hdr);
+		memcpy(__entry->data, &rec->data, CXL_EVENT_RECORD_DATA_LENGTH);
+	),
+
+	CXL_EVT_TP_printk("%s",
+		__print_hex(__entry->data, CXL_EVENT_RECORD_DATA_LENGTH))
+);
+
 #endif /* _CXL_EVENTS_H */
 
 #define TRACE_INCLUDE_FILE trace
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index aa3af3bb73b2..5974d1082210 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -156,6 +156,18 @@ static inline int ways_to_eiw(unsigned int ways, u8 *eiw)
 #define CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX 0x3
 #define CXLDEV_CAP_CAP_ID_MEMDEV 0x4000
 
+/* CXL 3.0 8.2.8.3.1 Event Status Register */
+#define CXLDEV_DEV_EVENT_STATUS_OFFSET		0x00
+#define CXLDEV_EVENT_STATUS_INFO		BIT(0)
+#define CXLDEV_EVENT_STATUS_WARN		BIT(1)
+#define CXLDEV_EVENT_STATUS_FAIL		BIT(2)
+#define CXLDEV_EVENT_STATUS_FATAL		BIT(3)
+
+#define CXLDEV_EVENT_STATUS_ALL (CXLDEV_EVENT_STATUS_INFO |	\
+				 CXLDEV_EVENT_STATUS_WARN |	\
+				 CXLDEV_EVENT_STATUS_FAIL |	\
+				 CXLDEV_EVENT_STATUS_FATAL)
+
 /* CXL 2.0 8.2.8.4 Mailbox Registers */
 #define CXLDEV_MBOX_CAPS_OFFSET 0x00
 #define   CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK GENMASK(4, 0)
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index ab138004f644..dd9aa3dd738e 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -4,6 +4,7 @@
 #define __CXL_MEM_H__
 #include <uapi/linux/cxl_mem.h>
 #include <linux/cdev.h>
+#include <linux/uuid.h>
 #include "cxl.h"
 
 /* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */
@@ -193,6 +194,17 @@ struct cxl_endpoint_dvsec_info {
 	struct range dvsec_range[2];
 };
 
+/**
+ * struct cxl_event_state - Event log driver state
+ *
+ * @buf: Buffer to receive event data
+ * @log_lock: Serialize buf and log use
+ */
+struct cxl_event_state {
+	struct cxl_get_event_payload *buf;
+	struct mutex log_lock;
+};
+
 /**
  * struct cxl_dev_state - The driver device state
 
@@ -266,12 +278,16 @@ struct cxl_dev_state {
 
 	struct xarray doe_mbs;
 
+	struct cxl_event_state event;
+
 	int (*mbox_send)(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd);
 };
 
 enum cxl_opcode {
 	CXL_MBOX_OP_INVALID		= 0x0000,
 	CXL_MBOX_OP_RAW			= CXL_MBOX_OP_INVALID,
+	CXL_MBOX_OP_GET_EVENT_RECORD	= 0x0100,
+	CXL_MBOX_OP_CLEAR_EVENT_RECORD	= 0x0101,
 	CXL_MBOX_OP_GET_FW_INFO		= 0x0200,
 	CXL_MBOX_OP_ACTIVATE_FW		= 0x0202,
 	CXL_MBOX_OP_GET_SUPPORTED_LOGS	= 0x0400,
@@ -347,6 +363,73 @@ struct cxl_mbox_identify {
 	u8 qos_telemetry_caps;
 } __packed;
 
+/*
+ * Common Event Record Format
+ * CXL rev 3.0 section 8.2.9.2.1; Table 8-42
+ */
+struct cxl_event_record_hdr {
+	uuid_t id;
+	u8 length;
+	u8 flags[3];
+	__le16 handle;
+	__le16 related_handle;
+	__le64 timestamp;
+	u8 maint_op_class;
+	u8 reserved[15];
+} __packed;
+
+#define CXL_EVENT_RECORD_DATA_LENGTH 0x50
+struct cxl_event_record_raw {
+	struct cxl_event_record_hdr hdr;
+	u8 data[CXL_EVENT_RECORD_DATA_LENGTH];
+} __packed;
+
+/*
+ * Get Event Records output payload
+ * CXL rev 3.0 section 8.2.9.2.2; Table 8-50
+ */
+#define CXL_GET_EVENT_FLAG_OVERFLOW		BIT(0)
+#define CXL_GET_EVENT_FLAG_MORE_RECORDS		BIT(1)
+struct cxl_get_event_payload {
+	u8 flags;
+	u8 reserved1;
+	__le16 overflow_err_count;
+	__le64 first_overflow_timestamp;
+	__le64 last_overflow_timestamp;
+	__le16 record_count;
+	u8 reserved2[10];
+	struct cxl_event_record_raw records[];
+} __packed;
+
+/*
+ * CXL rev 3.0 section 8.2.9.2.2; Table 8-49
+ */
+enum cxl_event_log_type {
+	CXL_EVENT_TYPE_INFO = 0x00,
+	CXL_EVENT_TYPE_WARN,
+	CXL_EVENT_TYPE_FAIL,
+	CXL_EVENT_TYPE_FATAL,
+	CXL_EVENT_TYPE_MAX
+};
+
+/*
+ * Clear Event Records input payload
+ * CXL rev 3.0 section 8.2.9.2.3; Table 8-51
+ */
+#define CXL_CLEAR_EVENT_MAX_HANDLES (0xff)
+struct cxl_mbox_clear_event_payload {
+	u8 event_log;		/* enum cxl_event_log_type */
+	u8 clear_flags;
+	u8 nr_recs;
+	u8 reserved[3];
+	__le16 handle[CXL_CLEAR_EVENT_MAX_HANDLES];
+} __packed;
+#define CXL_CLEAR_EVENT_LIMIT_HANDLES(payload_size)			\
+	(((payload_size) -						\
+	  (sizeof(struct cxl_mbox_clear_event_payload) -		\
+	   (sizeof(__le16) * CXL_CLEAR_EVENT_MAX_HANDLES))) /		\
+	 sizeof(__le16))
+
 struct cxl_mbox_get_partition_info {
 	__le64 active_volatile_cap;
 	__le64 active_persistent_cap;
@@ -441,6 +524,7 @@ int cxl_mem_create_range_info(struct cxl_dev_state *cxlds);
 struct cxl_dev_state *cxl_dev_state_create(struct device *dev);
 void set_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds);
 void clear_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds);
+void cxl_mem_get_event_records(struct cxl_dev_state *cxlds, u32 status);
 #ifdef CONFIG_CXL_SUSPEND
 void cxl_mem_active_inc(void);
 void cxl_mem_active_dec(void);
diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index 3a66aadb4df0..a2d8382bc593 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -417,8 +417,37 @@ static void disable_aer(void *pdev)
 	pci_disable_pcie_error_reporting(pdev);
 }
 
+static void cxl_mem_free_event_buffer(void *buf)
+{
+	kvfree(buf);
+}
+
+/*
+ * There is a single buffer for reading event logs from the mailbox.  All logs
+ * share this buffer protected by the cxlds->event.log_lock.
+ */
+static int cxl_mem_alloc_event_buf(struct cxl_dev_state *cxlds)
+{
+	struct cxl_get_event_payload *buf;
+
+	dev_dbg(cxlds->dev, "Allocating event buffer size %zu\n",
+		cxlds->payload_size);
+
+	buf = kvmalloc(cxlds->payload_size, GFP_KERNEL);
+	if (!buf)
+		return -ENOMEM;
+
+	if (devm_add_action_or_reset(cxlds->dev, cxl_mem_free_event_buffer,
+				     buf))
+		return -ENOMEM;
+
+	cxlds->event.buf = buf;
+	return 0;
+}
+
 static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
+	struct pci_host_bridge *host_bridge = pci_find_host_bridge(pdev->bus);
 	struct cxl_register_map map;
 	struct cxl_memdev *cxlmd;
 	struct cxl_dev_state *cxlds;
@@ -494,6 +523,17 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (IS_ERR(cxlmd))
 		return PTR_ERR(cxlmd);
 
+	rc = cxl_mem_alloc_event_buf(cxlds);
+	if (rc)
+		return rc;
+
+	/*
+	 * When BIOS maintains CXL error reporting control, it will process
+	 * event records.  Only one agent can do so.
+	 */
+	if (host_bridge->native_cxl_error)
+		cxl_mem_get_event_records(cxlds, CXLDEV_EVENT_STATUS_ALL);
+
 	if (cxlds->regs.ras) {
 		pci_enable_pcie_error_reporting(pdev);
 		rc = devm_add_action_or_reset(&pdev->dev, disable_aer, pdev);