From patchwork Mon Oct 16 06:23:39 2023
X-Patchwork-Submitter: "Jiang, Haochen"
X-Patchwork-Id: 153207
From: Haochen Jiang
To: gcc-patches@gcc.gnu.org
Cc: hongtao.liu@intel.com, ubizjak@gmail.com
Subject: [PATCH 2/3] x86: Add m_CORE_HYBRID for hybrid clients tuning
Date: Mon, 16 Oct 2023 14:23:39 +0800
Message-Id: <20231016062340.2639697-3-haochen.jiang@intel.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20231016062340.2639697-1-haochen.jiang@intel.com>
References: <20231016062340.2639697-1-haochen.jiang@intel.com>

gcc/ChangeLog:

	* config/i386/i386-options.cc (m_CORE_HYBRID): New.
	* config/i386/x86-tune.def: Replace the hybrid client tunes with
	m_CORE_HYBRID.
---
(A short illustrative sketch of how the new m_CORE_HYBRID mask is meant to be used follows the diff.)

 gcc/config/i386/i386-options.cc |   1 +
 gcc/config/i386/x86-tune.def    | 113 ++++++++++++++------------------
 2 files changed, 52 insertions(+), 62 deletions(-)

diff --git a/gcc/config/i386/i386-options.cc b/gcc/config/i386/i386-options.cc
index 1d28258b4fa..952cfe54da0 100644
--- a/gcc/config/i386/i386-options.cc
+++ b/gcc/config/i386/i386-options.cc
@@ -143,6 +143,7 @@ along with GCC; see the file COPYING3.  If not see
 #define m_ARROWLAKE_S (HOST_WIDE_INT_1U<<PROCESSOR_ARROWLAKE_S)
[... remainder of this hunk and the beginning of the x86-tune.def diff are missing from the archived message ...]
@@ -386,8 +381,7 @@ DEF_TUNE (X86_TUNE_USE_SIMODE_FIOP, "use_simode_fiop",
           ~(m_PENT | m_LAKEMONT | m_PPRO | m_CORE_ALL | m_BONNELL
             | m_SILVERMONT | m_KNL | m_KNM | m_INTEL | m_AMD_MULTIPLE | m_LUJIAZUI
             | m_GOLDMONT | m_GOLDMONT_PLUS | m_TREMONT
-            | m_ALDERLAKE | m_ARROWLAKE | m_ARROWLAKE_S | m_CORE_ATOM
-            | m_GENERIC))
+            | m_CORE_HYBRID | m_CORE_ATOM | m_GENERIC))
 
 /* X86_TUNE_USE_FFREEP: Use freep instruction instead of fstp.  */
 DEF_TUNE (X86_TUNE_USE_FFREEP, "use_ffreep", m_AMD_MULTIPLE | m_LUJIAZUI)
@@ -396,8 +390,8 @@ DEF_TUNE (X86_TUNE_USE_FFREEP, "use_ffreep", m_AMD_MULTIPLE | m_LUJIAZUI)
 DEF_TUNE (X86_TUNE_EXT_80387_CONSTANTS, "ext_80387_constants",
           m_PPRO | m_P4_NOCONA | m_CORE_ALL | m_BONNELL | m_SILVERMONT
           | m_KNL | m_KNM | m_INTEL | m_K6_GEODE | m_ATHLON_K8 | m_LUJIAZUI
-          | m_GOLDMONT | m_GOLDMONT_PLUS | m_TREMONT | m_ALDERLAKE | m_ARROWLAKE
-          | m_ARROWLAKE_S | m_CORE_ATOM | m_GENERIC)
+          | m_GOLDMONT | m_GOLDMONT_PLUS | m_TREMONT | m_CORE_HYBRID
+          | m_CORE_ATOM | m_GENERIC)
 
 /*****************************************************************************/
 /* SSE instruction selection tuning                                          */
@@ -412,17 +406,16 @@ DEF_TUNE (X86_TUNE_GENERAL_REGS_SSE_SPILL, "general_regs_sse_spill",
    of a sequence loading registers by parts.  */
 DEF_TUNE (X86_TUNE_SSE_UNALIGNED_LOAD_OPTIMAL, "sse_unaligned_load_optimal",
           m_NEHALEM | m_SANDYBRIDGE | m_CORE_AVX2 | m_SILVERMONT | m_KNL | m_KNM
-          | m_INTEL | m_GOLDMONT | m_GOLDMONT_PLUS | m_TREMONT | m_ALDERLAKE
-          | m_ARROWLAKE | m_ARROWLAKE_S | m_CORE_ATOM | m_AMDFAM10 | m_BDVER
-          | m_BTVER | m_ZNVER | m_LUJIAZUI | m_GENERIC)
+          | m_INTEL | m_GOLDMONT | m_GOLDMONT_PLUS | m_TREMONT | m_CORE_HYBRID
+          | m_CORE_ATOM | m_AMDFAM10 | m_BDVER | m_BTVER | m_ZNVER | m_LUJIAZUI
+          | m_GENERIC)
 
 /* X86_TUNE_SSE_UNALIGNED_STORE_OPTIMAL: Use movups for misaligned stores
    instead of a sequence loading registers by parts.  */
 DEF_TUNE (X86_TUNE_SSE_UNALIGNED_STORE_OPTIMAL, "sse_unaligned_store_optimal",
           m_NEHALEM | m_SANDYBRIDGE | m_CORE_AVX2 | m_SILVERMONT | m_KNL | m_KNM
-          | m_INTEL | m_GOLDMONT | m_GOLDMONT_PLUS | m_TREMONT | m_ALDERLAKE
-          | m_ARROWLAKE | m_ARROWLAKE_S| m_CORE_ATOM | m_BDVER | m_ZNVER
-          | m_LUJIAZUI | m_GENERIC)
+          | m_INTEL | m_GOLDMONT | m_GOLDMONT_PLUS | m_TREMONT | m_CORE_HYBRID
+          | m_CORE_ATOM | m_BDVER | m_ZNVER | m_LUJIAZUI | m_GENERIC)
 
 /* X86_TUNE_SSE_PACKED_SINGLE_INSN_OPTIMAL: Use packed single precision
    128bit instructions instead of double where possible.  */
@@ -431,15 +424,14 @@ DEF_TUNE (X86_TUNE_SSE_PACKED_SINGLE_INSN_OPTIMAL, "sse_packed_single_insn_optim
 /* X86_TUNE_SSE_TYPELESS_STORES: Always movaps/movups for 128bit stores.  */
 DEF_TUNE (X86_TUNE_SSE_TYPELESS_STORES, "sse_typeless_stores",
-          m_AMD_MULTIPLE | m_LUJIAZUI | m_CORE_ALL | m_TREMONT | m_ALDERLAKE
-          | m_ARROWLAKE | m_ARROWLAKE_S | m_CORE_ATOM | m_GENERIC)
+          m_AMD_MULTIPLE | m_LUJIAZUI | m_CORE_ALL | m_TREMONT | m_CORE_HYBRID
+          | m_CORE_ATOM | m_GENERIC)
 
 /* X86_TUNE_SSE_LOAD0_BY_PXOR: Always use pxor to load0 as opposed to
    xorps/xorpd and other variants.  */
 DEF_TUNE (X86_TUNE_SSE_LOAD0_BY_PXOR, "sse_load0_by_pxor",
           m_PPRO | m_P4_NOCONA | m_CORE_ALL | m_BDVER | m_BTVER | m_ZNVER
-          | m_LUJIAZUI | m_TREMONT | m_ALDERLAKE | m_ARROWLAKE | m_ARROWLAKE_S
-          | m_CORE_ATOM | m_GENERIC)
+          | m_LUJIAZUI | m_TREMONT | m_CORE_HYBRID | m_CORE_ATOM | m_GENERIC)
 
 /* X86_TUNE_INTER_UNIT_MOVES_TO_VEC: Enable moves in from integer
    to SSE registers.  If disabled, the moves will be done by storing
@@ -485,14 +477,14 @@ DEF_TUNE (X86_TUNE_SLOW_PSHUFB, "slow_pshufb",
 /* X86_TUNE_AVOID_4BYTE_PREFIXES: Avoid instructions requiring 4+ bytes
    of prefixes.  */
 DEF_TUNE (X86_TUNE_AVOID_4BYTE_PREFIXES, "avoid_4byte_prefixes",
-          m_SILVERMONT | m_GOLDMONT | m_GOLDMONT_PLUS | m_TREMONT | m_ALDERLAKE
-          | m_ARROWLAKE | m_ARROWLAKE_S | m_CORE_ATOM | m_INTEL)
+          m_SILVERMONT | m_GOLDMONT | m_GOLDMONT_PLUS | m_TREMONT | m_CORE_HYBRID
+          | m_CORE_ATOM | m_INTEL)
 
 /* X86_TUNE_USE_GATHER_2PARTS: Use gather instructions for vectors with
    2 elements.  */
 DEF_TUNE (X86_TUNE_USE_GATHER_2PARTS, "use_gather_2parts",
-          ~(m_ZNVER1 | m_ZNVER2 | m_ZNVER3 | m_ZNVER4 | m_ALDERLAKE
-            | m_ARROWLAKE | m_ARROWLAKE_S | m_CORE_ATOM | m_GENERIC | m_GDS))
+          ~(m_ZNVER1 | m_ZNVER2 | m_ZNVER3 | m_ZNVER4 | m_CORE_HYBRID
+            | m_CORE_ATOM | m_GENERIC | m_GDS))
 
 /* X86_TUNE_USE_SCATTER_2PARTS: Use scater instructions for vectors with
    2 elements.  */
@@ -502,8 +494,8 @@ DEF_TUNE (X86_TUNE_USE_SCATTER_2PARTS, "use_scatter_2parts",
 /* X86_TUNE_USE_GATHER_4PARTS: Use gather instructions for vectors with
    4 elements.  */
 DEF_TUNE (X86_TUNE_USE_GATHER_4PARTS, "use_gather_4parts",
-          ~(m_ZNVER1 | m_ZNVER2 | m_ZNVER3 | m_ZNVER4 | m_ALDERLAKE
-            | m_ARROWLAKE | m_ARROWLAKE_S | m_CORE_ATOM | m_GENERIC | m_GDS))
+          ~(m_ZNVER1 | m_ZNVER2 | m_ZNVER3 | m_ZNVER4 | m_CORE_HYBRID
+            | m_CORE_ATOM | m_GENERIC | m_GDS))
 
 /* X86_TUNE_USE_SCATTER_4PARTS: Use scater instructions for vectors with
    4 elements.  */
@@ -513,8 +505,8 @@ DEF_TUNE (X86_TUNE_USE_SCATTER_4PARTS, "use_scatter_4parts",
 /* X86_TUNE_USE_GATHER: Use gather instructions for vectors with 8 or more
    elements.  */
 DEF_TUNE (X86_TUNE_USE_GATHER_8PARTS, "use_gather_8parts",
-          ~(m_ZNVER1 | m_ZNVER2 | m_ZNVER4 | m_ALDERLAKE | m_ARROWLAKE
-            | m_ARROWLAKE_S | m_CORE_ATOM | m_GENERIC | m_GDS))
+          ~(m_ZNVER1 | m_ZNVER2 | m_ZNVER4 | m_CORE_HYBRID | m_CORE_ATOM
+            | m_GENERIC | m_GDS))
 
 /* X86_TUNE_USE_SCATTER: Use scater instructions for vectors with 8 or more
    elements.  */
@@ -528,8 +520,7 @@ DEF_TUNE (X86_TUNE_AVOID_128FMA_CHAINS, "avoid_fma_chains", m_ZNVER1 | m_ZNVER2
 /* X86_TUNE_AVOID_256FMA_CHAINS: Avoid creating loops with tight 256bit or
    smaller FMA chain.  */
 DEF_TUNE (X86_TUNE_AVOID_256FMA_CHAINS, "avoid_fma256_chains", m_ZNVER2 | m_ZNVER3
-          | m_ALDERLAKE | m_ARROWLAKE | m_ARROWLAKE_S | m_SAPPHIRERAPIDS
-          | m_CORE_ATOM)
+          | m_CORE_HYBRID | m_SAPPHIRERAPIDS | m_CORE_ATOM)
 
 /* X86_TUNE_AVOID_512FMA_CHAINS: Avoid creating loops with tight 512bit or
    smaller FMA chain.  */
@@ -573,14 +564,12 @@ DEF_TUNE (X86_TUNE_AVX512_SPLIT_REGS, "avx512_split_regs", m_ZNVER4)
 /* X86_TUNE_AVX256_MOVE_BY_PIECES: Optimize move_by_pieces with 256-bit
    AVX instructions.  */
 DEF_TUNE (X86_TUNE_AVX256_MOVE_BY_PIECES, "avx256_move_by_pieces",
-          m_ALDERLAKE | m_ARROWLAKE | m_ARROWLAKE_S | m_CORE_AVX2 | m_ZNVER1
-          | m_ZNVER2 | m_ZNVER3)
+          m_CORE_HYBRID | m_CORE_AVX2 | m_ZNVER1 | m_ZNVER2 | m_ZNVER3)
 
 /* X86_TUNE_AVX256_STORE_BY_PIECES: Optimize store_by_pieces with 256-bit
    AVX instructions.  */
 DEF_TUNE (X86_TUNE_AVX256_STORE_BY_PIECES, "avx256_store_by_pieces",
-          m_ALDERLAKE | m_ARROWLAKE | m_ARROWLAKE_S | m_CORE_AVX2 | m_ZNVER1
-          | m_ZNVER2 | m_ZNVER3)
+          m_CORE_HYBRID | m_CORE_AVX2 | m_ZNVER1 | m_ZNVER2 | m_ZNVER3)
 
 /* X86_TUNE_AVX512_MOVE_BY_PIECES: Optimize move_by_pieces with 512-bit
    AVX instructions.  */
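
For reference, x86-tune.def works by giving each processor model one bit in a
HOST_WIDE_INT mask, and each DEF_TUNE entry enables its tuning flag for the
processors whose bits are set.  The standalone C sketch below is an
illustration only, not the GCC sources: the exact m_CORE_HYBRID definition is
assumed (from the ChangeLog entry and the replacements in the hunks above) to
be the union of m_ALDERLAKE, m_ARROWLAKE and m_ARROWLAKE_S, and
example_tune_mask is a made-up mask in the style of the DEF_TUNE entries, not
any specific entry from the patch.

    /* Standalone illustration of the bitmask scheme used by x86-tune.def.
       Names mirror the GCC ones, but the definitions are assumptions made
       for this example, not copied from the (partially truncated) diff.  */
    #include <stdio.h>
    #include <stdint.h>

    enum processor_type
    {
      PROCESSOR_ALDERLAKE,
      PROCESSOR_ARROWLAKE,
      PROCESSOR_ARROWLAKE_S,
      PROCESSOR_CORE_ATOM
    };

    /* One bit per processor, like HOST_WIDE_INT_1U << PROCESSOR_x in GCC.  */
    #define M(p) (UINT64_C (1) << (p))
    #define m_ALDERLAKE   M (PROCESSOR_ALDERLAKE)
    #define m_ARROWLAKE   M (PROCESSOR_ARROWLAKE)
    #define m_ARROWLAKE_S M (PROCESSOR_ARROWLAKE_S)
    #define m_CORE_ATOM   M (PROCESSOR_CORE_ATOM)

    /* Presumed definition introduced by the patch: one mask naming all
       current hybrid client processors.  */
    #define m_CORE_HYBRID (m_ALDERLAKE | m_ARROWLAKE | m_ARROWLAKE_S)

    /* Hypothetical tuning mask in the style of the DEF_TUNE entries above:
       the flag is on for hybrid clients and Atom-class cores.  */
    static const uint64_t example_tune_mask = m_CORE_HYBRID | m_CORE_ATOM;

    int
    main (void)
    {
      /* ix86_tune would normally be set from -mtune=...; pick Arrow Lake.  */
      enum processor_type ix86_tune = PROCESSOR_ARROWLAKE;

      if (example_tune_mask & M (ix86_tune))
        printf ("example tuning flag enabled for this -mtune target\n");
      else
        printf ("example tuning flag disabled for this -mtune target\n");
      return 0;
    }

Grouping the hybrid clients under one name means later DEF_TUNE adjustments
touch a single mask instead of repeating the same three processors in every
entry, which is what shrinks the x86-tune.def hunks above.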