From patchwork Tue Aug 22 08:52:41 2023
X-Patchwork-Submitter: "Kewen.Lin"
X-Patchwork-Id: 136475
Message-ID: <1c07d6a4-f322-6a1d-aaea-4d17733493fe@linux.ibm.com>
Date: Tue, 22 Aug 2023 16:52:41 +0800
Subject: [PATCH 3/3] vect: Move VMAT_GATHER_SCATTER handlings from final loop nest
To: GCC Patches
Cc: Richard Biener, Richard Sandiford, Segher Boessenkool, Peter Bergner
References: <8c6c6b96-0b97-4eed-5b88-bda2b3dcc902@linux.ibm.com>
In-Reply-To: <8c6c6b96-0b97-4eed-5b88-bda2b3dcc902@linux.ibm.com>
From: "Kewen.Lin"
Hi,

Like r14-3317 which moves the handlings on memory access type
VMAT_GATHER_SCATTER in vectorizable_load final loop nest, this one
is to deal with the vectorizable_store side.

Bootstrapped and regtested on x86_64-redhat-linux, aarch64-linux-gnu
and powerpc64{,le}-linux-gnu.

Is it ok for trunk?

BR,
Kewen
-----

gcc/ChangeLog:

	* tree-vect-stmts.cc (vectorizable_store): Move the handlings on
	VMAT_GATHER_SCATTER in the final loop nest to its own loop, and
	update the final nest accordingly.
---
 gcc/tree-vect-stmts.cc | 258 +++++++++++++++++++++++++----------------
 1 file changed, 159 insertions(+), 99 deletions(-)
--
2.31.1

diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index 18f5ebcc09c..b959c1861ad 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -8930,44 +8930,23 @@ vectorizable_store (vec_info *vinfo,
       return true;
     }
 
-  auto_vec<tree> result_chain (group_size);
-  auto_vec<tree> vec_offsets;
-  auto_vec<tree> vec_oprnds;
-  for (j = 0; j < ncopies; j++)
+  if (memory_access_type == VMAT_GATHER_SCATTER)
     {
-      gimple *new_stmt;
-      if (j == 0)
+      gcc_assert (!slp && !grouped_store);
+      auto_vec<tree> vec_offsets;
+      for (j = 0; j < ncopies; j++)
 	{
-	  if (slp)
-	    {
-	      /* Get vectorized arguments for SLP_NODE.  */
-	      vect_get_vec_defs (vinfo, stmt_info, slp_node, 1, op,
-				 &vec_oprnds);
-	      vec_oprnd = vec_oprnds[0];
-	    }
-	  else
+	  gimple *new_stmt;
+	  if (j == 0)
 	    {
-	      /* For interleaved stores we collect vectorized defs for all the
-		 stores in the group in DR_CHAIN.  DR_CHAIN is then used as an
-		 input to vect_permute_store_chain().
-
-		 If the store is not grouped, DR_GROUP_SIZE is 1, and DR_CHAIN
-		 is of size 1.  */
-	      stmt_vec_info next_stmt_info = first_stmt_info;
-	      for (i = 0; i < group_size; i++)
-		{
-		  /* Since gaps are not supported for interleaved stores,
-		     DR_GROUP_SIZE is the exact number of stmts in the chain.
-		     Therefore, NEXT_STMT_INFO can't be NULL_TREE.  In case
-		     that there is no interleaving, DR_GROUP_SIZE is 1,
-		     and only one iteration of the loop will be executed.  */
-		  op = vect_get_store_rhs (next_stmt_info);
-		  vect_get_vec_defs_for_operand (vinfo, next_stmt_info, ncopies,
-						 op, gvec_oprnds[i]);
-		  vec_oprnd = (*gvec_oprnds[i])[0];
-		  dr_chain.quick_push (vec_oprnd);
-		  next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
-		}
+	      /* Since the store is not grouped, DR_GROUP_SIZE is 1, and
+		 DR_CHAIN is of size 1.  */
+	      gcc_assert (group_size == 1);
+	      op = vect_get_store_rhs (first_stmt_info);
+	      vect_get_vec_defs_for_operand (vinfo, first_stmt_info, ncopies,
+					     op, gvec_oprnds[0]);
+	      vec_oprnd = (*gvec_oprnds[0])[0];
+	      dr_chain.quick_push (vec_oprnd);
 	      if (mask)
 		{
 		  vect_get_vec_defs_for_operand (vinfo, stmt_info, ncopies,
@@ -8975,91 +8954,55 @@ vectorizable_store (vec_info *vinfo,
 						 mask, &vec_masks,
 						 mask_vectype);
 		  vec_mask = vec_masks[0];
 		}
-	    }
-	  /* We should have catched mismatched types earlier.  */
-	  gcc_assert (useless_type_conversion_p (vectype,
-						 TREE_TYPE (vec_oprnd)));
-	  bool simd_lane_access_p
-	    = STMT_VINFO_SIMD_LANE_ACCESS_P (stmt_info) != 0;
-	  if (simd_lane_access_p
-	      && !loop_masks
-	      && TREE_CODE (DR_BASE_ADDRESS (first_dr_info->dr)) == ADDR_EXPR
-	      && VAR_P (TREE_OPERAND (DR_BASE_ADDRESS (first_dr_info->dr), 0))
-	      && integer_zerop (get_dr_vinfo_offset (vinfo, first_dr_info))
-	      && integer_zerop (DR_INIT (first_dr_info->dr))
-	      && alias_sets_conflict_p (get_alias_set (aggr_type),
-					get_alias_set (TREE_TYPE (ref_type))))
-	    {
-	      dataref_ptr = unshare_expr (DR_BASE_ADDRESS (first_dr_info->dr));
-	      dataref_offset = build_int_cst (ref_type, 0);
+	      /* We should have catched mismatched types earlier.  */
+	      gcc_assert (useless_type_conversion_p (vectype,
+						     TREE_TYPE (vec_oprnd)));
+	      if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
+		vect_get_gather_scatter_ops (loop_vinfo, loop, stmt_info,
+					     slp_node, &gs_info, &dataref_ptr,
+					     &vec_offsets);
+	      else
+		dataref_ptr
+		  = vect_create_data_ref_ptr (vinfo, first_stmt_info, aggr_type,
+					      NULL, offset, &dummy, gsi,
+					      &ptr_incr, false, bump);
 	    }
-	  else if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
-	    vect_get_gather_scatter_ops (loop_vinfo, loop, stmt_info, slp_node,
-					 &gs_info, &dataref_ptr, &vec_offsets);
 	  else
-	    dataref_ptr
-	      = vect_create_data_ref_ptr (vinfo, first_stmt_info, aggr_type,
-					  simd_lane_access_p ? loop : NULL,
-					  offset, &dummy, gsi, &ptr_incr,
-					  simd_lane_access_p, bump);
-	}
-      else
-	{
-	  gcc_assert (!LOOP_VINFO_USING_SELECT_VL_P (loop_vinfo));
-	  /* DR_CHAIN is then used as an input to vect_permute_store_chain().
-	     If the store is not grouped, DR_GROUP_SIZE is 1, and DR_CHAIN is
-	     of size 1.  */
-	  for (i = 0; i < group_size; i++)
 	    {
-	      vec_oprnd = (*gvec_oprnds[i])[j];
-	      dr_chain[i] = vec_oprnd;
+	      gcc_assert (!LOOP_VINFO_USING_SELECT_VL_P (loop_vinfo));
+	      vec_oprnd = (*gvec_oprnds[0])[j];
+	      dr_chain[0] = vec_oprnd;
+	      if (mask)
+		vec_mask = vec_masks[j];
+	      if (!STMT_VINFO_GATHER_SCATTER_P (stmt_info))
+		dataref_ptr = bump_vector_ptr (vinfo, dataref_ptr, ptr_incr,
+					       gsi, stmt_info, bump);
 	    }
-	  if (mask)
-	    vec_mask = vec_masks[j];
-	  if (dataref_offset)
-	    dataref_offset = int_const_binop (PLUS_EXPR, dataref_offset, bump);
-	  else if (!STMT_VINFO_GATHER_SCATTER_P (stmt_info))
-	    dataref_ptr = bump_vector_ptr (vinfo, dataref_ptr, ptr_incr, gsi,
-					   stmt_info, bump);
-	}
-
-      new_stmt = NULL;
-      if (grouped_store)
-	/* Permute.  */
-	vect_permute_store_chain (vinfo, dr_chain, group_size, stmt_info, gsi,
-				  &result_chain);
-      stmt_vec_info next_stmt_info = first_stmt_info;
-      for (i = 0; i < vec_num; i++)
-	{
-	  unsigned misalign;
+	  new_stmt = NULL;
 	  unsigned HOST_WIDE_INT align;
-	  tree final_mask = NULL_TREE;
 	  tree final_len = NULL_TREE;
 	  tree bias = NULL_TREE;
 	  if (loop_masks)
 	    final_mask = vect_get_loop_mask (loop_vinfo, gsi, loop_masks,
-					     vec_num * ncopies, vectype,
-					     vec_num * j + i);
+					     ncopies, vectype, j);
 	  if (vec_mask)
 	    final_mask = prepare_vec_mask (loop_vinfo, mask_vectype, final_mask,
 					   vec_mask, gsi);
-	  if (memory_access_type == VMAT_GATHER_SCATTER
-	      && gs_info.ifn != IFN_LAST)
+	  if (gs_info.ifn != IFN_LAST)
 	    {
 	      if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
-		vec_offset = vec_offsets[vec_num * j + i];
+		vec_offset = vec_offsets[j];
 	      tree scale = size_int (gs_info.scale);
 	      if (gs_info.ifn == IFN_MASK_LEN_SCATTER_STORE)
 		{
 		  if (loop_lens)
 		    final_len = vect_get_loop_len (loop_vinfo, gsi, loop_lens,
-						   vec_num * ncopies, vectype,
-						   vec_num * j + i, 1);
+						   ncopies, vectype, j, 1);
 		  else
 		    final_len = build_int_cst (sizetype,
 					       TYPE_VECTOR_SUBPARTS (vectype));
@@ -9091,9 +9034,8 @@ vectorizable_store (vec_info *vinfo,
 	      gimple_call_set_nothrow (call, true);
 	      vect_finish_stmt_generation (vinfo, stmt_info, call, gsi);
 	      new_stmt = call;
-	      break;
 	    }
-	  else if (memory_access_type == VMAT_GATHER_SCATTER)
+	  else
 	    {
 	      /* Emulated scatter.  */
 	      gcc_assert (!final_mask);
@@ -9142,8 +9084,126 @@ vectorizable_store (vec_info *vinfo,
 	      new_stmt = gimple_build_assign (ref, elt);
 	      vect_finish_stmt_generation (vinfo, stmt_info, new_stmt, gsi);
 	    }
-	  break;
 	}
+	  if (j == 0)
+	    *vec_stmt = new_stmt;
+	  STMT_VINFO_VEC_STMTS (stmt_info).safe_push (new_stmt);
+	}
+      return true;
+    }
+
+  auto_vec<tree> result_chain (group_size);
+  auto_vec<tree> vec_oprnds;
+  for (j = 0; j < ncopies; j++)
+    {
+      gimple *new_stmt;
+      if (j == 0)
+	{
+	  if (slp)
+	    {
+	      /* Get vectorized arguments for SLP_NODE.  */
+	      vect_get_vec_defs (vinfo, stmt_info, slp_node, 1, op,
+				 &vec_oprnds);
+	      vec_oprnd = vec_oprnds[0];
+	    }
+	  else
+	    {
+	      /* For interleaved stores we collect vectorized defs for all the
+		 stores in the group in DR_CHAIN.  DR_CHAIN is then used as an
+		 input to vect_permute_store_chain().
+
+		 If the store is not grouped, DR_GROUP_SIZE is 1, and DR_CHAIN
+		 is of size 1.  */
+	      stmt_vec_info next_stmt_info = first_stmt_info;
+	      for (i = 0; i < group_size; i++)
+		{
+		  /* Since gaps are not supported for interleaved stores,
+		     DR_GROUP_SIZE is the exact number of stmts in the chain.
+		     Therefore, NEXT_STMT_INFO can't be NULL_TREE.  In case
+		     that there is no interleaving, DR_GROUP_SIZE is 1,
+		     and only one iteration of the loop will be executed.  */
+		  op = vect_get_store_rhs (next_stmt_info);
+		  vect_get_vec_defs_for_operand (vinfo, next_stmt_info, ncopies,
+						 op, gvec_oprnds[i]);
+		  vec_oprnd = (*gvec_oprnds[i])[0];
+		  dr_chain.quick_push (vec_oprnd);
+		  next_stmt_info = DR_GROUP_NEXT_ELEMENT (next_stmt_info);
+		}
+	      if (mask)
+		{
+		  vect_get_vec_defs_for_operand (vinfo, stmt_info, ncopies,
+						 mask, &vec_masks,
+						 mask_vectype);
+		  vec_mask = vec_masks[0];
+		}
+	    }
+
+	  /* We should have catched mismatched types earlier.  */
+	  gcc_assert (useless_type_conversion_p (vectype,
+						 TREE_TYPE (vec_oprnd)));
+	  bool simd_lane_access_p
+	    = STMT_VINFO_SIMD_LANE_ACCESS_P (stmt_info) != 0;
+	  if (simd_lane_access_p
+	      && !loop_masks
+	      && TREE_CODE (DR_BASE_ADDRESS (first_dr_info->dr)) == ADDR_EXPR
+	      && VAR_P (TREE_OPERAND (DR_BASE_ADDRESS (first_dr_info->dr), 0))
+	      && integer_zerop (get_dr_vinfo_offset (vinfo, first_dr_info))
+	      && integer_zerop (DR_INIT (first_dr_info->dr))
+	      && alias_sets_conflict_p (get_alias_set (aggr_type),
+					get_alias_set (TREE_TYPE (ref_type))))
+	    {
+	      dataref_ptr = unshare_expr (DR_BASE_ADDRESS (first_dr_info->dr));
+	      dataref_offset = build_int_cst (ref_type, 0);
+	    }
+	  else
+	    dataref_ptr
+	      = vect_create_data_ref_ptr (vinfo, first_stmt_info, aggr_type,
+					  simd_lane_access_p ? loop : NULL,
+					  offset, &dummy, gsi, &ptr_incr,
+					  simd_lane_access_p, bump);
+	}
+      else
+	{
+	  gcc_assert (!LOOP_VINFO_USING_SELECT_VL_P (loop_vinfo));
+	  /* DR_CHAIN is then used as an input to vect_permute_store_chain().
+	     If the store is not grouped, DR_GROUP_SIZE is 1, and DR_CHAIN is
+	     of size 1.  */
+	  for (i = 0; i < group_size; i++)
+	    {
+	      vec_oprnd = (*gvec_oprnds[i])[j];
+	      dr_chain[i] = vec_oprnd;
+	    }
+	  if (mask)
+	    vec_mask = vec_masks[j];
+	  if (dataref_offset)
+	    dataref_offset = int_const_binop (PLUS_EXPR, dataref_offset, bump);
+	  else
+	    dataref_ptr = bump_vector_ptr (vinfo, dataref_ptr, ptr_incr, gsi,
+					   stmt_info, bump);
+	}
+
+      new_stmt = NULL;
+      if (grouped_store)
+	/* Permute.  */
+	vect_permute_store_chain (vinfo, dr_chain, group_size, stmt_info, gsi,
+				  &result_chain);
+
+      stmt_vec_info next_stmt_info = first_stmt_info;
+      for (i = 0; i < vec_num; i++)
+	{
+	  unsigned misalign;
+	  unsigned HOST_WIDE_INT align;
+
+	  tree final_mask = NULL_TREE;
+	  tree final_len = NULL_TREE;
+	  tree bias = NULL_TREE;
+	  if (loop_masks)
+	    final_mask = vect_get_loop_mask (loop_vinfo, gsi, loop_masks,
+					     vec_num * ncopies, vectype,
+					     vec_num * j + i);
+	  if (vec_mask)
+	    final_mask = prepare_vec_mask (loop_vinfo, mask_vectype, final_mask,
					   vec_mask, gsi);
 	  if (i > 0)
 	    /* Bump the vector pointer.  */
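For context (not part of the patch): the memory access type being split out, VMAT_GATHER_SCATTER, covers stores whose target addresses come from a vector of offsets rather than from a single contiguously bumped data-ref pointer. A minimal C sketch of the kind of source loop that takes this path (names here are invented for illustration, not taken from the patch or testsuite):

```c
#include <assert.h>

/* Illustrative scatter-store loop: each iteration stores through
   idx[i], so a vectorized version needs one address per lane
   (cf. the vec_offsets vector in the patch) instead of a single
   pointer advanced by bump_vector_ptr.  */
void
scatter_store (double *dst, const double *src, const int *idx, int n)
{
  for (int i = 0; i < n; i++)
    dst[idx[i]] = src[i];
}
```

On targets with hardware scatter support this may be vectorized through the internal-function path (`gs_info.ifn != IFN_LAST`); otherwise it falls back to the "Emulated scatter" branch, both of which the patch relocates into the new dedicated VMAT_GATHER_SCATTER loop.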