From patchwork Thu Dec 8 14:21:30 2022
X-Patchwork-Submitter: Wenchao Hao
X-Patchwork-Id: 31379
From: Wenchao Hao
To: Steven Rostedt, Masami Hiramatsu, Andrew Morton, Wenchao Hao
Subject: [PATCH] cma: tracing: Print alloc result in trace_cma_alloc_finish
Date: Thu, 8 Dec 2022 22:21:30 +0800
Message-ID: <20221208142130.1501195-1-haowenchao@huawei.com>
X-Mailer: git-send-email 2.32.0
X-Mailing-List: linux-kernel@vger.kernel.org

The result of the allocation is currently not recorded in
trace_cma_alloc_finish, yet it is useful to have it there: with the
result in the trace event we can set filters to catch a specific
allocation error, or trigger an operation when a specific error occurs.
The result is already printed to the kernel log, but that message is
conditional and cannot be filtered through the tracing infrastructure.
Recording the result in the event adds only negligible overhead. The
allocation result is exposed as the errorno field of the trace event.
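For illustration, a minimal sketch of the filtering this enables,
assuming tracefs is mounted at /sys/kernel/tracing (it may be
/sys/kernel/debug/tracing on older setups); the cma_alloc_finish event
and its errorno field are the ones added by this patch:

  # trace only CMA allocations that fail (non-zero errorno)
  echo 'errorno != 0' > /sys/kernel/tracing/events/cma/cma_alloc_finish/filter
  echo 1 > /sys/kernel/tracing/events/cma/cma_alloc_finish/enable
  cat /sys/kernel/tracing/trace_pipe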
Signed-off-by: Wenchao Hao
Reviewed-by: Steven Rostedt (Google)
---
 include/trace/events/cma.h | 32 +++++++++++++++++++++++++++++---
 mm/cma.c                   |  2 +-
 2 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/include/trace/events/cma.h b/include/trace/events/cma.h
index 3d708dae1542..ef75ea606ab2 100644
--- a/include/trace/events/cma.h
+++ b/include/trace/events/cma.h
@@ -91,12 +91,38 @@ TRACE_EVENT(cma_alloc_start,
 		  __entry->align)
 );
 
-DEFINE_EVENT(cma_alloc_class, cma_alloc_finish,
+TRACE_EVENT(cma_alloc_finish,
 
 	TP_PROTO(const char *name, unsigned long pfn, const struct page *page,
-		 unsigned long count, unsigned int align),
+		 unsigned long count, unsigned int align, int errorno),
 
-	TP_ARGS(name, pfn, page, count, align)
+	TP_ARGS(name, pfn, page, count, align, errorno),
+
+	TP_STRUCT__entry(
+		__string(name, name)
+		__field(unsigned long, pfn)
+		__field(const struct page *, page)
+		__field(unsigned long, count)
+		__field(unsigned int, align)
+		__field(int, errorno)
+	),
+
+	TP_fast_assign(
+		__assign_str(name, name);
+		__entry->pfn = pfn;
+		__entry->page = page;
+		__entry->count = count;
+		__entry->align = align;
+		__entry->errorno = errorno;
+	),
+
+	TP_printk("name=%s pfn=0x%lx page=%p count=%lu align=%u errorno=%d",
+		  __get_str(name),
+		  __entry->pfn,
+		  __entry->page,
+		  __entry->count,
+		  __entry->align,
+		  __entry->errorno)
 );
 
 DEFINE_EVENT(cma_alloc_class, cma_alloc_busy_retry,

diff --git a/mm/cma.c b/mm/cma.c
index 4a978e09547a..a75b17b03b66 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -491,7 +491,7 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 			start = bitmap_no + mask + 1;
 	}
 
-	trace_cma_alloc_finish(cma->name, pfn, page, count, align);
+	trace_cma_alloc_finish(cma->name, pfn, page, count, align, ret);
 
 	/*
 	 * CMA can allocate multiple page blocks, which results in different