From patchwork Wed Dec 20 08:51:19 2023
X-Patchwork-Submitter: HAO CHEN GUI
X-Patchwork-Id: 181545
Date: Wed, 20 Dec 2023 16:51:19 +0800
To: gcc-patches
Cc: Segher Boessenkool, David, "Kewen.Lin", Peter Bergner
From: HAO CHEN GUI
Subject: [Patchv3, rs6000] Correct definition of macro of fixed point efficient unaligned
Hi,
  This patch corrects the definition of TARGET_EFFICIENT_OVERLAPPING_UNALIGNED
and replaces its uses with calls to slow_unaligned_access.  Compared with the
last version,
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/640832.html
the main change is to pass the alignment measured in bits to
slow_unaligned_access.

  Bootstrapped and tested on x86 and powerpc64-linux BE and LE with no
regressions.  Is this OK for trunk?

Thanks
Gui Haochen

ChangeLog
rs6000: Correct definition of macro of fixed point efficient unaligned

Macro TARGET_EFFICIENT_OVERLAPPING_UNALIGNED is used in rs6000-string.cc
to guard platforms on which fixed point unaligned load/store is
efficient.  It was originally defined by TARGET_EFFICIENT_UNALIGNED_VSX,
which is enabled from P8 on and can be disabled by the -mno-vsx option,
so the definition is wrong.  This patch corrects the problem and calls
slow_unaligned_access to judge whether fixed point unaligned load/store
is efficient or not.

gcc/
	* config/rs6000/rs6000.h (TARGET_EFFICIENT_OVERLAPPING_UNALIGNED):
	Remove.
	* config/rs6000/rs6000-string.cc (select_block_compare_mode):
	Replace TARGET_EFFICIENT_OVERLAPPING_UNALIGNED with
	targetm.slow_unaligned_access.
	(expand_block_compare_gpr): Likewise.
	(expand_block_compare): Likewise.
	(expand_strncmp_gpr_sequence): Likewise.

gcc/testsuite/
	* gcc.target/powerpc/block-cmp-1.c: New.
	* gcc.target/powerpc/block-cmp-2.c: New.

patch.diff
diff --git a/gcc/config/rs6000/rs6000-string.cc b/gcc/config/rs6000/rs6000-string.cc
index 44a946cd453..05dc41622f4 100644
--- a/gcc/config/rs6000/rs6000-string.cc
+++ b/gcc/config/rs6000/rs6000-string.cc
@@ -305,7 +305,7 @@ select_block_compare_mode (unsigned HOST_WIDE_INT offset,
   else if (bytes == GET_MODE_SIZE (QImode))
     return QImode;
   else if (bytes < GET_MODE_SIZE (SImode)
-	   && TARGET_EFFICIENT_OVERLAPPING_UNALIGNED
+	   && !targetm.slow_unaligned_access (SImode, align * BITS_PER_UNIT)
 	   && offset >= GET_MODE_SIZE (SImode) - bytes)
     /* This matches the case were we have SImode and 3 bytes
        and offset >= 1 and permits us to move back one and overlap
@@ -313,7 +313,7 @@ select_block_compare_mode (unsigned HOST_WIDE_INT offset,
        unwanted bytes off of the input.  */
     return SImode;
   else if (word_mode_ok && bytes < UNITS_PER_WORD
-	   && TARGET_EFFICIENT_OVERLAPPING_UNALIGNED
+	   && !targetm.slow_unaligned_access (word_mode, align * BITS_PER_UNIT)
 	   && offset >= UNITS_PER_WORD-bytes)
     /* Similarly, if we can use DImode it will get matched here and
        can do an overlapping read that ends at the end of the block.  */
@@ -1749,7 +1749,8 @@ expand_block_compare_gpr(unsigned HOST_WIDE_INT bytes, unsigned int base_align,
       load_mode_size = GET_MODE_SIZE (load_mode);
       if (bytes >= load_mode_size)
	cmp_bytes = load_mode_size;
-      else if (TARGET_EFFICIENT_OVERLAPPING_UNALIGNED)
+      else if (!targetm.slow_unaligned_access (load_mode,
+					       align * BITS_PER_UNIT))
	{
	  /* Move this load back so it doesn't go past the end.  P8/P9
	     can do this efficiently.  */
@@ -2026,7 +2027,7 @@ expand_block_compare (rtx operands[])
   /* The code generated for p7 and older is not faster than glibc
      memcmp if alignment is small and length is not short, so bail
      out to avoid those conditions.  */
-  if (!TARGET_EFFICIENT_OVERLAPPING_UNALIGNED
+  if (targetm.slow_unaligned_access (word_mode, base_align * BITS_PER_UNIT)
       && ((base_align == 1 && bytes > 16)
	  || (base_align == 2 && bytes > 32)))
     return false;
@@ -2168,7 +2169,8 @@ expand_strncmp_gpr_sequence (unsigned HOST_WIDE_INT bytes_to_compare,
       load_mode_size = GET_MODE_SIZE (load_mode);
       if (bytes_to_compare >= load_mode_size)
	cmp_bytes = load_mode_size;
-      else if (TARGET_EFFICIENT_OVERLAPPING_UNALIGNED)
+      else if (!targetm.slow_unaligned_access (load_mode,
+					       align * BITS_PER_UNIT))
	{
	  /* Move this load back so it doesn't go past the end.
	     P8/P9 can do this efficiently.  */
diff --git a/gcc/config/rs6000/rs6000.h b/gcc/config/rs6000/rs6000.h
index 326c45221e9..3971a56c588 100644
--- a/gcc/config/rs6000/rs6000.h
+++ b/gcc/config/rs6000/rs6000.h
@@ -483,10 +483,6 @@ extern int rs6000_vector_align[];
 #define TARGET_NO_SF_SUBREG	TARGET_DIRECT_MOVE_64BIT
 #define TARGET_ALLOW_SF_SUBREG	(!TARGET_DIRECT_MOVE_64BIT)
 
-/* This wants to be set for p8 and newer.  On p7, overlapping unaligned
-   loads are slow.  */
-#define TARGET_EFFICIENT_OVERLAPPING_UNALIGNED TARGET_EFFICIENT_UNALIGNED_VSX
-
 /* Byte/char syncs were added as phased in for ISA 2.06B, but are not present
    in power7, so conditionalize them on p8 features.  TImode syncs need quad
    memory support.  */
diff --git a/gcc/testsuite/gcc.target/powerpc/block-cmp-1.c b/gcc/testsuite/gcc.target/powerpc/block-cmp-1.c
new file mode 100644
index 00000000000..bcf0cb2ab4f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/powerpc/block-cmp-1.c
@@ -0,0 +1,11 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -mdejagnu-cpu=power8 -mno-vsx" } */
+/* { dg-final { scan-assembler-not {\mb[l]? memcmp\M} } } */
+
+/* Test that it still can do expand for memcmpsi instead of calling library
+   on P8 with vsx disabled.  */
+
+int foo (const char* s1, const char* s2)
+{
+  return __builtin_memcmp (s1, s2, 20);
+}
diff --git a/gcc/testsuite/gcc.target/powerpc/block-cmp-2.c b/gcc/testsuite/gcc.target/powerpc/block-cmp-2.c
new file mode 100644
index 00000000000..dfee15b2147
--- /dev/null
+++ b/gcc/testsuite/gcc.target/powerpc/block-cmp-2.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-require-effective-target opt_mstrict_align } */
+/* { dg-options "-O2 -mstrict-align" } */
+/* { dg-final { scan-assembler-times {\mb[l]? memcmp\M} 1 } } */
+
+/* Test that it calls library for block memory compare when strict-align
+   is set.  The flag causes rs6000_slow_unaligned_access to return true.  */
+
+int foo (const char* s1, const char* s2)
+{
+  return __builtin_memcmp (s1, s2, 20);
+}