From patchwork Thu Sep 14 03:11:58 2023
X-Patchwork-Submitter: "Kewen.Lin"
X-Patchwork-Id: 139279
From: "Kewen.Lin"
To: gcc-patches@gcc.gnu.org
Cc: richard.guenther@gmail.com, richard.sandiford@arm.com
Subject: [PATCH 09/10] vect: Get rid of vect_model_store_cost
Date: Wed, 13 Sep 2023 22:11:58 -0500
X-Mailer: git-send-email 2.31.1

This patch finally gets rid of vect_model_store_cost: it adjusts the
costing for the remaining memory access types VMAT_CONTIGUOUS{, _DOWN,
_REVERSE} by moving the costing next to the transform code.  Note that
vect_model_store_cost contains one piece of special handling for
vectorizing a store into the function result: an extra penalty with no
counterpart in the transform code, so this patch keeps that handling as
is (an illustrative example of that case follows the diff).

gcc/ChangeLog:

	* tree-vect-stmts.cc (vect_model_store_cost): Remove.
	(vectorizable_store): Adjust the costing for the remaining memory
	access types VMAT_CONTIGUOUS{, _DOWN, _REVERSE}.
---
 gcc/tree-vect-stmts.cc | 137 +++++++++++++----------------------------
 1 file changed, 44 insertions(+), 93 deletions(-)

diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index e3ba8077091..3d451c80bca 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -951,81 +951,6 @@ cfun_returns (tree decl)
   return false;
 }
 
-/* Function vect_model_store_cost
-
-   Models cost for stores.  In the case of grouped accesses, one access
-   has the overhead of the grouped access attributed to it.  */
-
-static void
-vect_model_store_cost (vec_info *vinfo, stmt_vec_info stmt_info, int ncopies,
-                       vect_memory_access_type memory_access_type,
-                       dr_alignment_support alignment_support_scheme,
-                       int misalignment,
-                       vec_load_store_type vls_type, slp_tree slp_node,
-                       stmt_vector_for_cost *cost_vec)
-{
-  gcc_assert (memory_access_type != VMAT_GATHER_SCATTER
-              && memory_access_type != VMAT_ELEMENTWISE
-              && memory_access_type != VMAT_STRIDED_SLP
-              && memory_access_type != VMAT_LOAD_STORE_LANES
-              && memory_access_type != VMAT_CONTIGUOUS_PERMUTE);
-
-  unsigned int inside_cost = 0, prologue_cost = 0;
-
-  /* ??? Somehow we need to fix this at the callers.  */
-  if (slp_node)
-    ncopies = SLP_TREE_NUMBER_OF_VEC_STMTS (slp_node);
-
-  if (vls_type == VLS_STORE_INVARIANT)
-    {
-      if (!slp_node)
-        prologue_cost += record_stmt_cost (cost_vec, 1, scalar_to_vec,
-                                           stmt_info, 0, vect_prologue);
-    }
-
-
-  /* Costs of the stores.  */
-  vect_get_store_cost (vinfo, stmt_info, ncopies, alignment_support_scheme,
-                       misalignment, &inside_cost, cost_vec);
-
-  /* When vectorizing a store into the function result assign
-     a penalty if the function returns in a multi-register location.
-     In this case we assume we'll end up with having to spill the
-     vector result and do piecewise loads as a conservative estimate.  */
-  tree base = get_base_address (STMT_VINFO_DATA_REF (stmt_info)->ref);
-  if (base
-      && (TREE_CODE (base) == RESULT_DECL
-          || (DECL_P (base) && cfun_returns (base)))
-      && !aggregate_value_p (base, cfun->decl))
-    {
-      rtx reg = hard_function_value (TREE_TYPE (base), cfun->decl, 0, 1);
-      /* ??? Handle PARALLEL in some way.  */
-      if (REG_P (reg))
-        {
-          int nregs = hard_regno_nregs (REGNO (reg), GET_MODE (reg));
-          /* Assume that a single reg-reg move is possible and cheap,
-             do not account for vector to gp register move cost.  */
-          if (nregs > 1)
-            {
-              /* Spill.  */
-              prologue_cost += record_stmt_cost (cost_vec, ncopies,
-                                                 vector_store,
-                                                 stmt_info, 0, vect_epilogue);
-              /* Loads.  */
-              prologue_cost += record_stmt_cost (cost_vec, ncopies * nregs,
-                                                 scalar_load,
-                                                 stmt_info, 0, vect_epilogue);
-            }
-        }
-    }
-
-  if (dump_enabled_p ())
-    dump_printf_loc (MSG_NOTE, vect_location,
-                     "vect_model_store_cost: inside_cost = %d, "
-                     "prologue_cost = %d .\n", inside_cost, prologue_cost);
-}
-
-
 /* Calculate cost of DR's memory access.  */
 void
 vect_get_store_cost (vec_info *, stmt_vec_info stmt_info, int ncopies,
@@ -9223,6 +9148,11 @@ vectorizable_store (vec_info *vinfo,
       return true;
     }
 
+  gcc_assert (memory_access_type == VMAT_CONTIGUOUS
+              || memory_access_type == VMAT_CONTIGUOUS_DOWN
+              || memory_access_type == VMAT_CONTIGUOUS_PERMUTE
+              || memory_access_type == VMAT_CONTIGUOUS_REVERSE);
+
   unsigned inside_cost = 0, prologue_cost = 0;
   auto_vec<tree> result_chain (group_size);
   auto_vec<tree, 1> vec_oprnds;
@@ -9257,10 +9187,9 @@ vectorizable_store (vec_info *vinfo,
              that there is no interleaving, DR_GROUP_SIZE is 1, and only one
              iteration of the loop will be executed.  */
           op = vect_get_store_rhs (next_stmt_info);
-          if (costing_p
-              && memory_access_type == VMAT_CONTIGUOUS_PERMUTE)
+          if (costing_p)
             update_prologue_cost (&prologue_cost, op);
-          else if (!costing_p)
+          else
             {
               vect_get_vec_defs_for_operand (vinfo, next_stmt_info,
                                              ncopies, op,
@@ -9352,10 +9281,9 @@ vectorizable_store (vec_info *vinfo,
     {
       if (costing_p)
         {
-          if (memory_access_type == VMAT_CONTIGUOUS_PERMUTE)
-            vect_get_store_cost (vinfo, stmt_info, 1,
-                                 alignment_support_scheme, misalignment,
-                                 &inside_cost, cost_vec);
+          vect_get_store_cost (vinfo, stmt_info, 1,
+                               alignment_support_scheme, misalignment,
+                               &inside_cost, cost_vec);
 
           if (!slp)
             {
@@ -9550,18 +9478,41 @@ vectorizable_store (vec_info *vinfo,
 
   if (costing_p)
     {
-      if (memory_access_type == VMAT_CONTIGUOUS_PERMUTE)
-        {
-          if (dump_enabled_p ())
-            dump_printf_loc (MSG_NOTE, vect_location,
-                             "vect_model_store_cost: inside_cost = %d, "
-                             "prologue_cost = %d .\n",
-                             inside_cost, prologue_cost);
+      /* When vectorizing a store into the function result assign
+         a penalty if the function returns in a multi-register location.
+         In this case we assume we'll end up with having to spill the
+         vector result and do piecewise loads as a conservative estimate.  */
+      tree base = get_base_address (STMT_VINFO_DATA_REF (stmt_info)->ref);
+      if (base
+          && (TREE_CODE (base) == RESULT_DECL
+              || (DECL_P (base) && cfun_returns (base)))
+          && !aggregate_value_p (base, cfun->decl))
+        {
+          rtx reg = hard_function_value (TREE_TYPE (base), cfun->decl, 0, 1);
+          /* ??? Handle PARALLEL in some way.  */
+          if (REG_P (reg))
+            {
+              int nregs = hard_regno_nregs (REGNO (reg), GET_MODE (reg));
+              /* Assume that a single reg-reg move is possible and cheap,
+                 do not account for vector to gp register move cost.  */
+              if (nregs > 1)
+                {
+                  /* Spill.  */
+                  prologue_cost
+                    += record_stmt_cost (cost_vec, ncopies, vector_store,
+                                         stmt_info, 0, vect_epilogue);
+                  /* Loads.  */
+                  prologue_cost
+                    += record_stmt_cost (cost_vec, ncopies * nregs, scalar_load,
+                                         stmt_info, 0, vect_epilogue);
+                }
+            }
         }
-      else
-        vect_model_store_cost (vinfo, stmt_info, ncopies, memory_access_type,
-                               alignment_support_scheme, misalignment, vls_type,
-                               slp_node, cost_vec);
+      if (dump_enabled_p ())
+        dump_printf_loc (MSG_NOTE, vect_location,
+                         "vect_model_store_cost: inside_cost = %d, "
+                         "prologue_cost = %d .\n",
+                         inside_cost, prologue_cost);
     }
 
   return true;
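
For reference, the case the preserved penalty targets is a vectorized store
whose destination is the function's return value while that value is
returned in more than one hard register; the vectorizer then conservatively
assumes the vector result has to be spilled and reloaded piecewise.  The
snippet below is only an illustration (it is not part of the patch or of
the testsuite), and the multi-register assumption depends on the target
ABI (e.g. a 16-byte integer struct returned in rax:rdx on x86_64).

  /* Illustrative only: RES is the function result and, on such an ABI,
     its return location spans more than one hard register (nregs > 1 in
     the costing code kept above).  If basic-block SLP turns the two
     scalar stores into one vector store, the extra spill plus piecewise
     load cost is what gets recorded in the epilogue.  */
  struct pair { long a; long b; };

  struct pair
  make_pair (const long *src)
  {
    struct pair res;
    res.a = src[0];
    res.b = src[1];
    return res;
  }

Whether that extra epilogue cost actually changes the vectorization
decision still depends on the target's cost hooks, so this is a sketch of
the scenario rather than a guaranteed reproducer.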