From patchwork Wed Oct 18 08:32:59 2023
X-Patchwork-Submitter: liuhongt
X-Patchwork-Id: 154755
From: liuhongt
To: gcc-patches@gcc.gnu.org
Cc: rguenther@suse.de
Subject: [PATCH] Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big.
Date: Wed, 18 Oct 2023 16:32:59 +0800
Message-Id: <20231018083259.2386650-1-hongtao.liu@intel.com>

There's a loop in vect_peel_nonlinear_iv_init that computes
init_expr * pow (step_expr, skip_niters).  When skip_niters is too big,
that loop dominates compile time.  To avoid this, optimize
init_expr * pow (step_expr, skip_niters) to
init_expr << (exact_log2 (step_expr) * skip_niters) when step_expr is a
power of 2; otherwise give up on vectorization when
skip_niters >= TYPE_PRECISION (TREE_TYPE (init_expr)).

Also give up on vectorization when niters_skip is negative, which will
be used for fully masked loops.

Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
Ok for trunk?

gcc/ChangeLog:

	PR tree-optimization/111820
	PR tree-optimization/111833
	* tree-vect-loop-manip.cc (vect_can_peel_nonlinear_iv_p): Give
	up vectorization for nonlinear iv vect_step_op_mul when
	step_expr is not exact_log2 and niters is greater than
	TYPE_PRECISION (TREE_TYPE (step_expr)).  Also don't vectorize
	for negative niters_skip which will be used by fully masked
	loop.
	(vect_can_advance_ivs_p): Pass whole phi_info to
	vect_can_peel_nonlinear_iv_p.
	* tree-vect-loop.cc (vect_peel_nonlinear_iv_init): Optimize
	init_expr * pow (step_expr, skipn) to
	init_expr << (log2 (step_expr) * skipn) when step_expr is
	exact_log2.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/pr111820-1.c: New test.
	* gcc.target/i386/pr111820-2.c: New test.
	* gcc.target/i386/pr103144-mul-1.c: Adjust testcase.
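For reference, here is a minimal standalone sketch of the arithmetic
identity the optimization relies on (illustrative only, not part of the
patch; the helper names mul_pow_naive and mul_pow_shift are made up).
When step is a power of two, init * step**k in an unsigned type is a
single left shift, and the value is known to be zero once the shift
count reaches the type precision, whereas the current code performs k
multiplications, which is what blows up when skip_niters is huge.

#include <stdint.h>
#include <assert.h>

/* Naive form: k multiplications, mirroring the old loop in
   vect_peel_nonlinear_iv_init (O(k) work at compile time).  */
static uint32_t
mul_pow_naive (uint32_t init, uint32_t step, unsigned k)
{
  for (unsigned i = 0; i != k; i++)
    init *= step;
  return init;
}

/* Shift form: valid when step == 1u << log2_step with log2_step >= 1.
   Once k * log2_step reaches the bit width, every product bit has been
   shifted out, so the result in 32-bit modular arithmetic is 0.  */
static uint32_t
mul_pow_shift (uint32_t init, unsigned log2_step, unsigned k)
{
  if (k >= 32 || k * log2_step >= 32)
    return 0;
  return init << (k * log2_step);
}

int
main (void)
{
  /* step = 4 == 1 << 2, so both forms agree modulo 2^32,
     including the cases where the result has wrapped to 0.  */
  for (unsigned k = 0; k < 40; k++)
    assert (mul_pow_naive (7, 4, k) == mul_pow_shift (7, 2, k));
  return 0;
}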
---
 .../gcc.target/i386/pr103144-mul-1.c       |  6 ++--
 gcc/testsuite/gcc.target/i386/pr111820-1.c | 16 ++++++++++
 gcc/testsuite/gcc.target/i386/pr111820-2.c | 17 ++++++++++
 gcc/tree-vect-loop-manip.cc                | 28 ++++++++++++++--
 gcc/tree-vect-loop.cc                      | 32 ++++++++++++++++---
 5 files changed, 88 insertions(+), 11 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-1.c
 create mode 100644 gcc/testsuite/gcc.target/i386/pr111820-2.c

diff --git a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
index 640c34fd959..f80d1094097 100644
--- a/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
+++ b/gcc/testsuite/gcc.target/i386/pr103144-mul-1.c
@@ -23,7 +23,7 @@ foo_mul_const (int* a)
   for (int i = 0; i != N; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
 
@@ -34,7 +34,7 @@ foo_mul_peel (int* a, int b)
   for (int i = 0; i != 39; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
 
@@ -46,6 +46,6 @@ foo_mul_peel_const (int* a)
   for (int i = 0; i != 39; i++)
     {
       a[i] = b;
-      b *= 3;
+      b *= 4;
     }
 }
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-1.c b/gcc/testsuite/gcc.target/i386/pr111820-1.c
new file mode 100644
index 00000000000..50e960c39d4
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-1.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -Wno-aggressive-loop-optimizations -fdump-tree-vect-details" } */
+/* { dg-final { scan-tree-dump "Avoid compile time hog on vect_peel_nonlinear_iv_init for nonlinear induction vec_step_op_mul when iteration count is too big" "vect" } } */
+
+int r;
+int r_0;
+
+void f1 (void)
+{
+  int n = 0;
+  while (-- n)
+    {
+      r_0 += r;
+      r *= 3;
+    }
+}
diff --git a/gcc/testsuite/gcc.target/i386/pr111820-2.c b/gcc/testsuite/gcc.target/i386/pr111820-2.c
new file mode 100644
index 00000000000..bbdb40798c6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/pr111820-2.c
@@ -0,0 +1,17 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -mavx2 -fno-tree-vrp -fdump-tree-vect-details -Wno-aggressive-loop-optimizations" } */
+/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" } } */
+
+int r;
+int r_0;
+
+void f (void)
+{
+  int n = 0;
+  while (-- n)
+    {
+      r_0 += r ;
+      r *= 2;
+    }
+}
+
diff --git a/gcc/tree-vect-loop-manip.cc b/gcc/tree-vect-loop-manip.cc
index 2608c286e5d..a530088b61d 100644
--- a/gcc/tree-vect-loop-manip.cc
+++ b/gcc/tree-vect-loop-manip.cc
@@ -1783,8 +1783,10 @@ iv_phi_p (stmt_vec_info stmt_info)
 /* Return true if vectorizer can peel for nonlinear iv.  */
 static bool
 vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
-			      enum vect_induction_op_type induction_type)
+			      stmt_vec_info stmt_info)
 {
+  enum vect_induction_op_type induction_type
+    = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (stmt_info);
   tree niters_skip;
   /* Init_expr will be update by vect_update_ivs_after_vectorizer,
      if niters or vf is unkown:
@@ -1805,11 +1807,31 @@ vect_can_peel_nonlinear_iv_p (loop_vec_info loop_vinfo,
       return false;
     }
 
+  /* Avoid compile time hog on vect_peel_nonlinear_iv_init.  */
+  if (induction_type == vect_step_op_mul)
+    {
+      tree step_expr = STMT_VINFO_LOOP_PHI_EVOLUTION_PART (stmt_info);
+      tree type = TREE_TYPE (step_expr);
+
+      if (wi::exact_log2 (wi::to_wide (step_expr)) == -1
+	  && LOOP_VINFO_INT_NITERS(loop_vinfo) >= TYPE_PRECISION (type))
+	{
+	  if (dump_enabled_p ())
+	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+			     "Avoid compile time hog on"
+			     " vect_peel_nonlinear_iv_init"
+			     " for nonlinear induction vec_step_op_mul"
+			     " when iteration count is too big.\n");
+	  return false;
+	}
+    }
+
   /* Also doens't support peel for neg when niter is variable.
      ??? generate something like niter_expr & 1 ? init_expr : -init_expr? */
   niters_skip = LOOP_VINFO_MASK_SKIP_NITERS (loop_vinfo);
   if ((niters_skip != NULL_TREE
-       && TREE_CODE (niters_skip) != INTEGER_CST)
+       && (TREE_CODE (niters_skip) != INTEGER_CST
+	   || (HOST_WIDE_INT) TREE_INT_CST_LOW (niters_skip) < 0))
       || (!vect_use_loop_mask_for_alignment_p (loop_vinfo)
 	  && LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0))
     {
@@ -1870,7 +1892,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
       induction_type = STMT_VINFO_LOOP_PHI_EVOLUTION_TYPE (phi_info);
       if (induction_type != vect_step_op_add)
 	{
-	  if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, induction_type))
+	  if (!vect_can_peel_nonlinear_iv_p (loop_vinfo, phi_info))
 	    return false;
 
 	  continue;
diff --git a/gcc/tree-vect-loop.cc b/gcc/tree-vect-loop.cc
index 89bdcaa0910..6bb1f3dc462 100644
--- a/gcc/tree-vect-loop.cc
+++ b/gcc/tree-vect-loop.cc
@@ -9134,11 +9134,33 @@ vect_peel_nonlinear_iv_init (gimple_seq* stmts, tree init_expr,
 	init_expr = gimple_convert (stmts, utype, init_expr);
 	unsigned skipn = TREE_INT_CST_LOW (skip_niters);
 	wide_int begin = wi::to_wide (step_expr);
-	for (unsigned i = 0; i != skipn - 1; i++)
-	  begin = wi::mul (begin, wi::to_wide (step_expr));
-	tree mult_expr = wide_int_to_tree (utype, begin);
-	init_expr = gimple_build (stmts, MULT_EXPR, utype, init_expr, mult_expr);
-	init_expr = gimple_convert (stmts, type, init_expr);
+	int pow2_step = wi::exact_log2 (begin);
+	/* Optimize init_expr * pow (step_expr, skipn) to
+	   init_expr << (log2 (step_expr) * skipn).  */
+	if (pow2_step != -1)
+	  {
+	    if (skipn >= TYPE_PRECISION (type)
+		|| skipn > (UINT_MAX / (unsigned) pow2_step)
+		|| skipn * (unsigned) pow2_step >= TYPE_PRECISION (type))
+	      init_expr = build_zero_cst (type);
+	    else
+	      {
+		tree lshc = build_int_cst (utype, skipn * (unsigned) pow2_step);
+		init_expr = gimple_build (stmts, LSHIFT_EXPR, utype,
+					  init_expr, lshc);
+	      }
+	  }
+	/* Any better way for init_expr * pow (step_expr, skipn)???.  */
+	else
+	  {
+	    gcc_assert (skipn < TYPE_PRECISION (type));
+	    for (unsigned i = 0; i != skipn - 1; i++)
+	      begin = wi::mul (begin, wi::to_wide (step_expr));
+	    tree mult_expr = wide_int_to_tree (utype, begin);
+	    init_expr = gimple_build (stmts, MULT_EXPR, utype,
+				      init_expr, mult_expr);
+	  }
+	init_expr = gimple_convert (stmts, type, init_expr);
       }
       break;
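
As a reading aid, below is a small standalone model of how the patched
vect_peel_nonlinear_iv_init now chooses between its three cases.  It is
illustrative only, not GCC code: the names classify, PEEL_ZERO,
PEEL_SHIFT and PEEL_MUL_LOOP are invented for this sketch, and it
assumes pow2_step >= 1 on the power-of-two path (i.e. step_expr != 1),
mirroring the guards in the hunk above.

#include <limits.h>
#include <assert.h>

enum peel_mul_kind { PEEL_ZERO, PEEL_SHIFT, PEEL_MUL_LOOP };

/* Decide how init_expr * step_expr**skipn would be materialized:
   a zero constant, a single left shift, or the original O(skipn)
   multiply loop (only reached when skipn < precision, as now
   guaranteed by vect_can_peel_nonlinear_iv_p).  */
static enum peel_mul_kind
classify (int pow2_step, unsigned skipn, unsigned precision)
{
  if (pow2_step != -1)
    {
      /* step is a power of two: the product is init << (pow2_step * skipn)
	 in the unsigned type, and the whole value is shifted out once the
	 count reaches the precision.  The UINT_MAX check guards the
	 shift-count multiplication against overflow, as in the patch.  */
      if (skipn >= precision
	  || skipn > UINT_MAX / (unsigned) pow2_step
	  || skipn * (unsigned) pow2_step >= precision)
	return PEEL_ZERO;
      return PEEL_SHIFT;
    }
  /* Non-power-of-two step: keep the multiply loop, which is cheap
     because skipn is known to be small here.  */
  return PEEL_MUL_LOOP;
}

int
main (void)
{
  /* step = 8 (pow2_step = 3), skipping 11 iterations of a 32-bit IV:
     shift count 33 >= 32, so the peeled initial value is simply 0.  */
  assert (classify (3, 11, 32) == PEEL_ZERO);
  /* step = 4 (pow2_step = 2), skipping 7 iterations: one shift by 14.  */
  assert (classify (2, 7, 32) == PEEL_SHIFT);
  /* step = 3 is not a power of two: fall back to the multiply loop.  */
  assert (classify (-1, 7, 32) == PEEL_MUL_LOOP);
  return 0;
}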