From patchwork Thu Jul 20 07:35:16 2023
X-Patchwork-Submitter: liuhongt
X-Patchwork-Id: 123052
To: gcc-patches@gcc.gnu.org
Cc: ubizjak@gmail.com, hubicka@ucw.cz
Subject: [PATCH] Optimize vlddqu to vmovdqu for TARGET_AVX
Date: Thu, 20 Jul 2023 15:35:16 +0800
Message-Id: <20230720073516.2171485-1-hongtao.liu@intel.com>
From: liuhongt

For Intel processors, since TARGET_AVX, vmovdqu is optimized to be as fast as vlddqu, so UNSPEC_LDDQU can be removed to enable more
optimizations. Can someone confirm this with AMD folks? If AMD doesn't like such an optimization, I'll put it under a micro-architecture tuning option.

Bootstrapped and regtested on x86_64-pc-linux-gnu{-m32,}.
If AMD is also fine with this optimization, OK for trunk?

gcc/ChangeLog:

	* config/i386/sse.md (<sse3>_lddqu<avxsizesuffix>): Change to
	define_expand, expand as a simple move when TARGET_AVX
	&& (<MODE_SIZE> == 16 || !TARGET_AVX256_SPLIT_UNALIGNED_LOAD).
	The original define_insn is renamed to ..
	(*<sse3>_lddqu<avxsizesuffix>): .. this.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/vlddqu_vinserti128.c: New test.
---
 gcc/config/i386/sse.md                        | 15 ++++++++++++++-
 .../gcc.target/i386/vlddqu_vinserti128.c      | 11 +++++++++++
 2 files changed, 25 insertions(+), 1 deletion(-)
 create mode 100644 gcc/testsuite/gcc.target/i386/vlddqu_vinserti128.c

diff --git a/gcc/config/i386/sse.md b/gcc/config/i386/sse.md
index 2d81347c7b6..d571a78f4c4 100644
--- a/gcc/config/i386/sse.md
+++ b/gcc/config/i386/sse.md
@@ -1835,7 +1835,20 @@ (define_peephole2
    [(set (match_dup 4) (match_dup 1))]
   "operands[4] = adjust_address (operands[0], V2DFmode, 0);")
 
-(define_insn "<sse3>_lddqu<avxsizesuffix>"
+(define_expand "<sse3>_lddqu<avxsizesuffix>"
+  [(set (match_operand:VI1 0 "register_operand")
+	(unspec:VI1 [(match_operand:VI1 1 "memory_operand")]
+		    UNSPEC_LDDQU))]
+  "TARGET_SSE3"
+{
+  if (TARGET_AVX
+      && (<MODE_SIZE> == 16 || !TARGET_AVX256_SPLIT_UNALIGNED_LOAD))
+    {
+      emit_move_insn (operands[0], operands[1]);
+      DONE;
+    }
+})
+
+(define_insn "*<sse3>_lddqu<avxsizesuffix>"
   [(set (match_operand:VI1 0 "register_operand" "=x")
 	(unspec:VI1 [(match_operand:VI1 1 "memory_operand" "m")]
 		    UNSPEC_LDDQU))]
diff --git a/gcc/testsuite/gcc.target/i386/vlddqu_vinserti128.c b/gcc/testsuite/gcc.target/i386/vlddqu_vinserti128.c
new file mode 100644
index 00000000000..29699a5fa7f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/i386/vlddqu_vinserti128.c
@@ -0,0 +1,11 @@
+/* { dg-do compile } */
+/* { dg-options "-mavx2 -O2" } */
+/* { dg-final { scan-assembler-times "vbroadcasti128" 1 } } */
+/* { dg-final { scan-assembler-not {(?n)vlddqu.*xmm} } } */
+
+#include <immintrin.h>
+__m256i foo(void *data) {
+  __m128i X1 = _mm_lddqu_si128((__m128i*)data);
+  __m256i V1 = _mm256_broadcastsi128_si256 (X1);
+  return V1;
+}