From patchwork Fri May 12 17:52:17 2023
X-Patchwork-Submitter: Uros Bizjak
X-Patchwork-Id: 93296
From: Uros Bizjak
Date: Fri, 12 May 2023 19:52:17 +0200
Subject: [PATCH] i386: Cleanup ix86_expand_vecop_qihi{,2}
To: "gcc-patches@gcc.gnu.org"
Some cleanups while looking at these two functions.

gcc/ChangeLog:

        * config/i386/i386-expand.cc (ix86_expand_vecop_qihi2): Also reject
        ymm instructions for TARGET_PREFER_AVX128.  Use generic
        gen_extend_insn to generate zero/sign extension instructions.
        Fix comments.
        (ix86_expand_vecop_qihi): Initialize interleave functions
        for MULT code only.  Fix comments.

Bootstrapped and regression tested on x86_64-linux-gnu {,-m32}.

Pushed to master.

Uros.
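For context (not part of the patch): with the per-mode gen_{zero_,}extendvNqivNhi2 generators gone, the extension step in ix86_expand_vecop_qihi2 relies on the generic gen_extend_insn helper, which picks the zero- or sign-extension pattern from the mode pair and the unsignedness flag.  A condensed sketch of the resulting sequence, using the variable names from the hunks below (GCC-internal RTL expansion code, so not standalone):

  hop1 = gen_reg_rtx (himode);
  hop2 = gen_reg_rtx (himode);
  hdest = gen_reg_rtx (himode);

  /* Widen both QImode vector operands to HImode.  uns_p selects zero
     vs. sign extension; only arithmetic right shifts need the signed
     form.  */
  emit_insn (gen_extend_insn (hop1, op1, himode, qimode, uns_p));
  emit_insn (gen_extend_insn (hop2, op2, himode, qimode, uns_p));

  /* Do the operation in the wider mode, then truncate back to QImode.  */
  emit_insn (gen_rtx_SET (hdest,
                          simplify_gen_binary (code, himode, hop1, hop2)));
  emit_insn (gen_truncate (dest, hdest));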
diff --git a/gcc/config/i386/i386-expand.cc b/gcc/config/i386/i386-expand.cc
index 634fe61ba79..8a869eb3b30 100644
--- a/gcc/config/i386/i386-expand.cc
+++ b/gcc/config/i386/i386-expand.cc
@@ -23122,12 +23122,11 @@ ix86_expand_vecop_qihi2 (enum rtx_code code, rtx dest, rtx op1, rtx op2)
 {
   machine_mode himode, qimode = GET_MODE (dest);
   rtx hop1, hop2, hdest;
-  rtx (*gen_extend)(rtx, rtx);
   rtx (*gen_truncate)(rtx, rtx);
   bool uns_p = (code == ASHIFTRT) ? false : true;
 
-  /* There's no V64HImode multiplication instruction.  */
-  if (qimode == E_V64QImode)
+  /* There are no V64HImode instructions.  */
+  if (qimode == V64QImode)
     return false;
 
   /* vpmovwb only available under AVX512BW.  */
@@ -23136,26 +23135,24 @@ ix86_expand_vecop_qihi2 (enum rtx_code code, rtx dest, rtx op1, rtx op2)
   if ((qimode == V8QImode || qimode == V16QImode) && !TARGET_AVX512VL)
     return false;
 
-  /* Not generate zmm instruction when prefer 128/256 bit vector width.  */
-  if (qimode == V32QImode
-      && (TARGET_PREFER_AVX128 || TARGET_PREFER_AVX256))
+  /* Do not generate ymm/zmm instructions when
+     target prefers 128/256 bit vector width.  */
+  if ((qimode == V16QImode && TARGET_PREFER_AVX128)
+      || (qimode == V32QImode && TARGET_PREFER_AVX256))
     return false;
 
   switch (qimode)
     {
     case E_V8QImode:
      himode = V8HImode;
-      gen_extend = uns_p ? gen_zero_extendv8qiv8hi2 : gen_extendv8qiv8hi2;
      gen_truncate = gen_truncv8hiv8qi2;
      break;
    case E_V16QImode:
      himode = V16HImode;
-      gen_extend = uns_p ? gen_zero_extendv16qiv16hi2 : gen_extendv16qiv16hi2;
      gen_truncate = gen_truncv16hiv16qi2;
      break;
    case E_V32QImode:
      himode = V32HImode;
-      gen_extend = uns_p ? gen_zero_extendv32qiv32hi2 : gen_extendv32qiv32hi2;
      gen_truncate = gen_truncv32hiv32qi2;
      break;
    default:
@@ -23165,8 +23162,8 @@ ix86_expand_vecop_qihi2 (enum rtx_code code, rtx dest, rtx op1, rtx op2)
   hop1 = gen_reg_rtx (himode);
   hop2 = gen_reg_rtx (himode);
   hdest = gen_reg_rtx (himode);
-  emit_insn (gen_extend (hop1, op1));
-  emit_insn (gen_extend (hop2, op2));
+  emit_insn (gen_extend_insn (hop1, op1, himode, qimode, uns_p));
+  emit_insn (gen_extend_insn (hop2, op2, himode, qimode, uns_p));
   emit_insn (gen_rtx_SET (hdest,
                           simplify_gen_binary (code, himode, hop1, hop2)));
   emit_insn (gen_truncate (dest, hdest));
@@ -23285,8 +23282,9 @@ ix86_expand_vecop_qihi (enum rtx_code code, rtx dest, rtx op1, rtx op2)
   rtx (*gen_ih) (rtx, rtx, rtx);
   rtx op1_l, op1_h, op2_l, op2_h, res_l, res_h;
   struct expand_vec_perm_d d;
-  bool ok, full_interleave;
-  bool uns_p = false;
+  bool full_interleave = true;
+  bool uns_p = true;
+  bool ok;
   int i;
 
   if (CONST_INT_P (op2)
@@ -23303,18 +23301,12 @@ ix86_expand_vecop_qihi (enum rtx_code code, rtx dest, rtx op1, rtx op2)
     {
     case E_V16QImode:
      himode = V8HImode;
-      gen_il = gen_vec_interleave_lowv16qi;
-      gen_ih = gen_vec_interleave_highv16qi;
      break;
    case E_V32QImode:
      himode = V16HImode;
-      gen_il = gen_avx2_interleave_lowv32qi;
-      gen_ih = gen_avx2_interleave_highv32qi;
      break;
    case E_V64QImode:
      himode = V32HImode;
-      gen_il = gen_avx512bw_interleave_lowv64qi;
-      gen_ih = gen_avx512bw_interleave_highv64qi;
      break;
    default:
      gcc_unreachable ();
@@ -23327,6 +23319,26 @@ ix86_expand_vecop_qihi (enum rtx_code code, rtx dest, rtx op1, rtx op2)
          each word.  We don't care what goes into the high byte of each word.
          Rather than trying to get zero in there, most convenient is to let
          it be a copy of the low byte.  */
+      switch (qimode)
+        {
+        case E_V16QImode:
+          gen_il = gen_vec_interleave_lowv16qi;
+          gen_ih = gen_vec_interleave_highv16qi;
+          break;
+        case E_V32QImode:
+          gen_il = gen_avx2_interleave_lowv32qi;
+          gen_ih = gen_avx2_interleave_highv32qi;
+          full_interleave = false;
+          break;
+        case E_V64QImode:
+          gen_il = gen_avx512bw_interleave_lowv64qi;
+          gen_ih = gen_avx512bw_interleave_highv64qi;
+          full_interleave = false;
+          break;
+        default:
+          gcc_unreachable ();
+        }
+
       op2_l = gen_reg_rtx (qimode);
       op2_h = gen_reg_rtx (qimode);
       emit_insn (gen_il (op2_l, op2, op2));
@@ -23336,14 +23348,13 @@ ix86_expand_vecop_qihi (enum rtx_code code, rtx dest, rtx op1, rtx op2)
       op1_h = gen_reg_rtx (qimode);
       emit_insn (gen_il (op1_l, op1, op1));
       emit_insn (gen_ih (op1_h, op1, op1));
-      full_interleave = qimode == V16QImode;
      break;
 
+    case ASHIFTRT:
+      uns_p = false;
+      /* FALLTHRU */
    case ASHIFT:
    case LSHIFTRT:
-      uns_p = true;
-      /* FALLTHRU */
-    case ASHIFTRT:
      op1_l = gen_reg_rtx (himode);
      op1_h = gen_reg_rtx (himode);
      ix86_expand_sse_unpack (op1_l, op1, uns_p, false);
@@ -23360,16 +23371,15 @@ ix86_expand_vecop_qihi (enum rtx_code code, rtx dest, rtx op1, rtx op2)
       else
        op2_l = op2_h = op2;
 
-      full_interleave = true;
      break;
 
    default:
      gcc_unreachable ();
    }
 
-  /* Perform vashr/vlshr/vashl.  */
   if (code != MULT && GET_MODE_CLASS (GET_MODE (op2)) == MODE_VECTOR_INT)
     {
+      /* Expand vashr/vlshr/vashl.  */
      res_l = gen_reg_rtx (himode);
      res_h = gen_reg_rtx (himode);
      emit_insn (gen_rtx_SET (res_l,
@@ -23379,9 +23389,9 @@ ix86_expand_vecop_qihi (enum rtx_code code, rtx dest, rtx op1, rtx op2)
                               simplify_gen_binary (code, himode,
                                                    op1_h, op2_h)));
     }
-  /* Performance mult/ashr/lshr/ashl.  */
   else
     {
+      /* Expand mult/ashr/lshr/ashl.  */
      res_l = expand_simple_binop (himode, code, op1_l, op2_l, NULL_RTX,
                                   1, OPTAB_DIRECT);
      res_h = expand_simple_binop (himode, code, op1_h, op2_h, NULL_RTX,
@@ -23401,7 +23411,7 @@ ix86_expand_vecop_qihi (enum rtx_code code, rtx dest, rtx op1, rtx op2)
 
   if (full_interleave)
     {
-      /* For SSE2, we used an full interleave, so the desired
+      /* We used the full interleave, the desired
         results are in the even elements.  */
      for (i = 0; i < d.nelt; ++i)
        d.perm[i] = i * 2;