[libiberty] remove TBAA violation in iterative_hash, improve code-gen

Message ID 20240214151442.26E24385E830@sourceware.org
State Accepted

Commit Message

Richard Biener Feb. 14, 2024, 3:13 p.m. UTC
The following removes the TBAA violation present in iterative_hash.
As we eventually build that code with LTO, it is important to fix.
This also improves code generation for the >= 12 bytes loop by using
| instead of + to compose the 4-byte words: GCC 7 and up recognize
the | pattern and perform a single 4-byte load, while the variant
with + is not recognized (not on trunk either); I think we have an
enhancement bug for this somewhere.
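
For illustration, a minimal standalone sketch of the two patterns
(the function names are mine, not from the patch).  On a
little-endian target GCC 7 and later turn compose_or into a single
4-byte load, while compose_add is left as four byte loads with
shifts and adds:

#include <stdint.h>

/* Recognized by GCC's load-merging: becomes one 32-bit load on a
   little-endian target.  */
uint32_t
compose_or (const unsigned char *k)
{
  return (uint32_t) k[0]
	 | ((uint32_t) k[1] << 8)
	 | ((uint32_t) k[2] << 16)
	 | ((uint32_t) k[3] << 24);
}

/* Same value, but composed with +; not recognized, so the four byte
   loads remain.  */
uint32_t
compose_add (const unsigned char *k)
{
  return (uint32_t) k[0]
	 + ((uint32_t) k[1] << 8)
	 + ((uint32_t) k[2] << 16)
	 + ((uint32_t) k[3] << 24);
}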

Given we now reliably merge the byte accesses, and the bogus
"optimized" path might only have been relevant for archs that cannot
do misaligned loads efficiently, I've chosen to keep a specialization
for aligned accesses.
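
For reference, a small sketch (not from the patch) of the aliasing
rule involved: the key bytes may belong to an object of any type, so
dereferencing them through a wider integer lvalue, as the removed
path did, violates the effective-type (TBAA) rules, while
character-typed accesses may alias anything:

#include <stdint.h>

struct point { int16_t x, y; };  /* the key may really be any object */

/* Invalid: reads memory whose effective type is struct point through
   an incompatible uint32_t lvalue -- the kind of access removed.  */
uint32_t
hash_word_bad (const struct point *p)
{
  const unsigned char *k = (const unsigned char *) p;
  return *(const uint32_t *) k;
}

/* Valid: character-typed loads may alias any object; the shifts and
   | compose the same little-endian word.  */
uint32_t
hash_word_ok (const struct point *p)
{
  const unsigned char *k = (const unsigned char *) p;
  return (uint32_t) k[0]
	 | ((uint32_t) k[1] << 8)
	 | ((uint32_t) k[2] << 16)
	 | ((uint32_t) k[3] << 24);
}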

Bootstrapped and tested on x86_64-unknown-linux-gnu, OK for trunk?

Thanks,
Richard.

libiberty/
	* hashtab.c (iterative_hash): Remove TBAA violating handling
	of aligned little-endian case in favor of just keeping the
	aligned case special-cased.  Use | for composing a larger word.
---
 libiberty/hashtab.c | 23 ++++++++++-------------
 1 file changed, 10 insertions(+), 13 deletions(-)
  

Patch

diff --git a/libiberty/hashtab.c b/libiberty/hashtab.c
index 48f28078114..e3a07256a30 100644
--- a/libiberty/hashtab.c
+++ b/libiberty/hashtab.c
@@ -940,26 +940,23 @@  iterative_hash (const void *k_in /* the key */,
   c = initval;           /* the previous hash value */
 
   /*---------------------------------------- handle most of the key */
-#ifndef WORDS_BIGENDIAN
-  /* On a little-endian machine, if the data is 4-byte aligned we can hash
-     by word for better speed.  This gives nondeterministic results on
-     big-endian machines.  */
-  if (sizeof (hashval_t) == 4 && (((size_t)k)&3) == 0)
-    while (len >= 12)    /* aligned */
+  /* Provide specialization for the aligned case for targets that cannot
+     efficiently perform misaligned loads of a merged access.  */
+  if ((((size_t)k)&3) == 0)
+    while (len >= 12)
       {
-	a += *(hashval_t *)(k+0);
-	b += *(hashval_t *)(k+4);
-	c += *(hashval_t *)(k+8);
+	a += (k[0] | ((hashval_t)k[1]<<8) | ((hashval_t)k[2]<<16) | ((hashval_t)k[3]<<24));
+	b += (k[4] | ((hashval_t)k[5]<<8) | ((hashval_t)k[6]<<16) | ((hashval_t)k[7]<<24));
+	c += (k[8] | ((hashval_t)k[9]<<8) | ((hashval_t)k[10]<<16)| ((hashval_t)k[11]<<24));
 	mix(a,b,c);
 	k += 12; len -= 12;
       }
   else /* unaligned */
-#endif
     while (len >= 12)
       {
-	a += (k[0] +((hashval_t)k[1]<<8) +((hashval_t)k[2]<<16) +((hashval_t)k[3]<<24));
-	b += (k[4] +((hashval_t)k[5]<<8) +((hashval_t)k[6]<<16) +((hashval_t)k[7]<<24));
-	c += (k[8] +((hashval_t)k[9]<<8) +((hashval_t)k[10]<<16)+((hashval_t)k[11]<<24));
+	a += (k[0] | ((hashval_t)k[1]<<8) | ((hashval_t)k[2]<<16) | ((hashval_t)k[3]<<24));
+	b += (k[4] | ((hashval_t)k[5]<<8) | ((hashval_t)k[6]<<16) | ((hashval_t)k[7]<<24));
+	c += (k[8] | ((hashval_t)k[9]<<8) | ((hashval_t)k[10]<<16)| ((hashval_t)k[11]<<24));
 	mix(a,b,c);
 	k += 12; len -= 12;
       }
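
For context, the interface being changed chains through its initval
argument; a hypothetical caller (hash_two_buffers is my name, not
libiberty's) would look like:

#include <stddef.h>
#include "hashtab.h"  /* hashval_t iterative_hash (const void *, size_t, hashval_t) */

/* Hash two buffers as one logical key by feeding the first result
   back in as the initial value of the second call.  */
hashval_t
hash_two_buffers (const void *a, size_t alen,
		  const void *b, size_t blen)
{
  hashval_t h = iterative_hash (a, alen, 0);
  return iterative_hash (b, blen, h);
}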