[3/6] rtl-ssa: Fix ICE when deleting memory clobbers

Message ID 20231024105006.3337671-4-richard.sandiford@arm.com
State Accepted
Series rtl-ssa: Various fixes needed for the late-combine pass


Commit Message

Richard Sandiford Oct. 24, 2023, 10:50 a.m. UTC
Sometimes an optimisation can remove a clobber of scratch registers
or scratch memory.  We then need to update the DU chains to reflect
the removed clobber.

For registers this isn't a problem.  Clobbers of registers are just
momentary blips in the register's lifetime.  They act as a barrier for
moving uses later or defs earlier, but otherwise they have no effect on
the semantics of other instructions.  Removing a clobber is therefore a
cheap, local operation.

In contrast, clobbers of memory are modelled as full sets.
This is because (a) a clobber of memory does not invalidate
*all* memory and (b) it's a common idiom to use (clobber (mem ...))
in stack barriers.  But removing a set and redirecting all uses
to a different set is a linear operation.  Doing it for potentially
every optimisation could lead to quadratic behaviour.

This patch therefore refrains from removing sets of memory that appear
to be redundant.  There's an opportunity to clean this up in linear time
at the end of the pass, but as things stand, nothing would benefit from
that.

This is also a very rare event.  Usually we should try to optimise the
insn before the scratch memory has been allocated.
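For illustration, the situation handled by the patch might involve a pattern like the following (a hypothetical example, not taken from any target): the original insn is a parallel containing a memory clobber, and after optimisation the insn is recognised without it.

```lisp
;; Old pattern: a parallel whose memory clobber rtl-ssa models as a
;; full set of memory.
(parallel [(set (reg:SI 100) (plus:SI (reg:SI 101) (reg:SI 102)))
           (clobber (mem:BLK (scratch:SI)))])

;; New pattern, recognised without the clobber.  The set of memory
;; recorded for the old insn is kept rather than deleted.
(set (reg:SI 100) (plus:SI (reg:SI 101) (reg:SI 102)))
```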

gcc/
	* rtl-ssa/changes.cc (function_info::finalize_new_accesses):
	If a change describes a set of memory, ensure that that set
	is kept, regardless of the insn pattern.
---
 gcc/rtl-ssa/changes.cc | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)
  

Comments

Jeff Law Oct. 24, 2023, 5:24 p.m. UTC | #1
On 10/24/23 04:50, Richard Sandiford wrote:
> [...]
OK
jeff
  

Patch

diff --git a/gcc/rtl-ssa/changes.cc b/gcc/rtl-ssa/changes.cc
index c73c23c86fb..5800f9dba97 100644
--- a/gcc/rtl-ssa/changes.cc
+++ b/gcc/rtl-ssa/changes.cc
@@ -429,8 +429,18 @@  function_info::finalize_new_accesses (insn_change &change, insn_info *pos)
   // Also keep any explicitly-recorded call clobbers, which are deliberately
   // excluded from the vec_rtx_properties.  Calls shouldn't move, so we can
   // keep the definitions in their current position.
+  //
+  // If the change describes a set of memory, but the pattern doesn't
+  // reference memory, keep the set anyway.  This can happen if the
+  // old pattern was a parallel that contained a memory clobber, and if
+  // the new pattern was recognized without that clobber.  Keeping the
+  // set avoids a linear-complexity update to the set's users.
+  //
+  // ??? We could queue an update so that these bogus clobbers are
+  // removed later.
   for (def_info *def : change.new_defs)
-    if (def->m_has_been_superceded && def->is_call_clobber ())
+    if (def->m_has_been_superceded
+	&& (def->is_call_clobber () || def->is_mem ()))
       {
 	def->m_has_been_superceded = false;
 	def->set_insn (insn);
@@ -535,7 +545,7 @@  function_info::finalize_new_accesses (insn_change &change, insn_info *pos)
 	}
     }
 
-  // Install the new list of definitions in CHANGE.
+  // Install the new list of uses in CHANGE.
   sort_accesses (m_temp_uses);
   change.new_uses = use_array (temp_access_array (m_temp_uses));
   m_temp_uses.truncate (0);