call-cdce: Add missing BUILT_IN_*F{32,64}X handling and improve BUILT_IN_*L [PR113993]

Message ID ZdW2rkXeVr77plE+@tucnak
State Unresolved
Series call-cdce: Add missing BUILT_IN_*F{32,64}X handling and improve BUILT_IN_*L [PR113993]

Commit Message

Jakub Jelinek Feb. 21, 2024, 8:39 a.m. UTC
  Hi!

The following testcase ICEs, because can_test_argument_range
returns true for BUILT_IN_{COSH,SINH,EXP{,M1,2}}{F32X,F64X}
among many other builtins, but get_no_error_domain doesn't handle
those.

When supported in GCC, float32x_type_node always has DFmode, so that case
is easy (call-cdce assumes that SFmode is IEEE float and DFmode is IEEE
double).  *F32X is therefore handled simply by adding those cases next to
*F64.
When supported in GCC, float64x_type_node by definition has a mode with
larger precision and exponent range than DFmode, so it can be XFmode,
TFmode or KFmode.  I went through all the l/f128 suffixed builtins and
verified that the *F128 no-error domain ranges are actually identical to
the Intel extended long double no-error domain ranges; that isn't too
surprising, as both IEEE quad and Intel/Motorola extended have the same
exponent range [-16381, 16384] (well, Motorola -16382, probably because of
different behavior for denormals, but that has nothing to do with
get_no_error_domain, which is about large inputs overflowing into +-Inf
or triggering NaN; denormals could in theory matter solely for sqrt, and
even that is fine).  In theory some target could have a different larger
type, so for *F64X the code verifies that
REAL_MODE_FORMAT (TYPE_MODE (float64x_type_node))->emax == 16384
and, if so, uses the *F128 domains; otherwise it falls back to the
non-suffixed ones (aka *F64), which is certainly the conservative minimum.
While at it, the patch also changes the *L suffixed cases to do pretty
much the same; the comment said that the function just assumes the *F64
ranges for *L, but that is unnecessarily conservative.
All we currently have for long double is:
1) IEEE quad (emax 16384, *F128 ranges)
2) XFmode Intel/Motorola extended (emax 16384, same as *F128 ranges)
3) IBM extended (double double, emax 1024, the extra precision doesn't
   really help and the domains are the same as for *F64)
4) same as double (*F64 again)
So, the patch uses the
REAL_MODE_FORMAT (TYPE_MODE (long_double_type_node))->emax == 16384
check for *L as well, and either tail recurses into the *F128 case when
that holds, or into the non-suffixed (aka *F64) case otherwise.
BUILT_IN_*F128X is not handled because no target has those, nothing seems
to be on the horizon, and who knows what format would be used for it.
Thus, all we get wrong are probably VAX floats or something similar; no
intent from me to look at that, and that is a preexisting issue.
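
For reference, the bounds used in these no-error domains follow directly
from the format's maximum exponent; here is a small standalone C sketch
(illustration only, not part of the patch) that reproduces the numbers:

#include <math.h>
#include <stdio.h>

/* Print rough overflow thresholds for a binary format with maximum
   exponent EMAX; the domains in get_no_error_domain are the conservative
   integer floors of these values.  */
static void
bounds (const char *name, int emax)
{
  double ln2 = log (2.0), log10_2 = log10 (2.0);
  printf ("%s (emax %d): exp < %.1f, exp2 < %d, exp10 < %.1f, "
	  "cosh/sinh |x| < %.1f\n",
	  name, emax,
	  emax * ln2,        /* exp (x) overflows once e^x exceeds ~2^emax.  */
	  emax,              /* exp2 (x) overflows once x reaches emax.  */
	  emax * log10_2,    /* exp10 (x) = 2^(x * log2 (10)).  */
	  (emax + 1) * ln2); /* cosh (x) ~= exp (|x|) / 2.  */
}

int
main (void)
{
  bounds ("float", 128);      /* ~88.7, 128, ~38.5, ~89.4 -> 88, 128, 38, 89  */
  bounds ("double", 1024);    /* ~709.8, 1024, ~308.3, ~710.5 -> 709, 1024, 308, 710  */
  bounds ("quad/x87", 16384); /* ~11356.5, 16384, ~4932.1, ~11357.2  */
  return 0;
}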

BTW, I'm surprised we don't have BUILT_IN_EXP10F{16,32,64,128,32X,64X,128X}
builtins; it seems glibc has those (well, I think except *16 and *128x).

Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

2024-02-21  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/113993
	* tree-call-cdce.cc (get_no_error_domain): Handle
	BUILT_IN_{COSH,SINH,EXP{,M1,2}}{F32X,F64X}.  Handle
	BUILT_IN_{COSH,SINH,EXP{,M1,2}}L for
	REAL_MODE_FORMAT (TYPE_MODE (long_double_type_node))->emax == 16384
	the same as the F128 suffixed cases, otherwise as non-suffixed ones.
	Handle BUILT_IN_{EXP,POW}10L for
	REAL_MODE_FORMAT (TYPE_MODE (long_double_type_node))->emax == 16384
	as (-inf, 4932).

	* gcc.dg/tree-ssa/pr113993.c: New test.


	Jakub
  

Comments

Richard Biener Feb. 22, 2024, 9:14 a.m. UTC | #1
On Wed, 21 Feb 2024, Jakub Jelinek wrote:

> Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

OK.

Thanks,
Richard.

  

Patch

--- gcc/tree-call-cdce.cc.jj	2024-01-03 11:51:37.654646209 +0100
+++ gcc/tree-call-cdce.cc	2024-02-20 09:19:24.432837856 +0100
@@ -677,14 +677,14 @@  gen_conditions_for_pow (gcall *pow_call,
    Since IEEE only sets minimum requirements for long double format,
    different long double formats exist under different implementations
    (e.g, 64 bit double precision (DF), 80 bit double-extended
-   precision (XF), and 128 bit quad precision (QF) ).  For simplicity,
+   precision (XF), and 128 bit quad precision (TF) ).  For simplicity,
    in this implementation, the computed bounds for long double assume
-   64 bit format (DF), and are therefore conservative.  Another
-   assumption is that single precision float type is always SF mode,
-   and double type is DF mode.  This function is quite
-   implementation specific, so it may not be suitable to be part of
-   builtins.cc.  This needs to be revisited later to see if it can
-   be leveraged in x87 assembly expansion.  */
+   64 bit format (DF) except when it is IEEE quad or extended with the same
+   emax, and are therefore sometimes conservative.  Another assumption is
+   that single precision float type is always SF mode, and double type is DF
+   mode.  This function is quite implementation specific, so it may not be
+   suitable to be part of builtins.cc.  This needs to be revisited later
+   to see if it can be leveraged in x87 assembly expansion.  */
 
 static inp_domain
 get_no_error_domain (enum built_in_function fnc)
@@ -723,10 +723,10 @@  get_no_error_domain (enum built_in_funct
                          89, true, false);
     case BUILT_IN_COSH:
     case BUILT_IN_SINH:
-    case BUILT_IN_COSHL:
-    case BUILT_IN_SINHL:
     case BUILT_IN_COSHF64:
     case BUILT_IN_SINHF64:
+    case BUILT_IN_COSHF32X:
+    case BUILT_IN_SINHF32X:
       /* cosh: (-710, +710)  */
       return get_domain (-710, true, false,
                          710, true, false);
@@ -735,6 +735,16 @@  get_no_error_domain (enum built_in_funct
       /* coshf128: (-11357, +11357)  */
       return get_domain (-11357, true, false,
 			 11357, true, false);
+    case BUILT_IN_COSHL:
+    case BUILT_IN_SINHL:
+      if (REAL_MODE_FORMAT (TYPE_MODE (long_double_type_node))->emax == 16384)
+	return get_no_error_domain (BUILT_IN_COSHF128);
+      return get_no_error_domain (BUILT_IN_COSH);
+    case BUILT_IN_COSHF64X:
+    case BUILT_IN_SINHF64X:
+      if (REAL_MODE_FORMAT (TYPE_MODE (float64x_type_node))->emax == 16384)
+	return get_no_error_domain (BUILT_IN_COSHF128);
+      return get_no_error_domain (BUILT_IN_COSH);
     /* Log functions: (0, +inf)  */
     CASE_FLT_FN (BUILT_IN_LOG):
     CASE_FLT_FN_FLOATN_NX (BUILT_IN_LOG):
@@ -751,7 +761,7 @@  get_no_error_domain (enum built_in_funct
     /* Exp functions.  */
     case BUILT_IN_EXPF16:
     case BUILT_IN_EXPM1F16:
-      /* expf: (-inf, 11)  */
+      /* expf16: (-inf, 11)  */
       return get_domain (-1, false, false,
 			 11, true, false);
     case BUILT_IN_EXPF:
@@ -763,10 +773,10 @@  get_no_error_domain (enum built_in_funct
                          88, true, false);
     case BUILT_IN_EXP:
     case BUILT_IN_EXPM1:
-    case BUILT_IN_EXPL:
-    case BUILT_IN_EXPM1L:
     case BUILT_IN_EXPF64:
     case BUILT_IN_EXPM1F64:
+    case BUILT_IN_EXPF32X:
+    case BUILT_IN_EXPM1F32X:
       /* exp: (-inf, 709)  */
       return get_domain (-1, false, false,
                          709, true, false);
@@ -775,6 +785,16 @@  get_no_error_domain (enum built_in_funct
       /* expf128: (-inf, 11356)  */
       return get_domain (-1, false, false,
 			 11356, true, false);
+    case BUILT_IN_EXPL:
+    case BUILT_IN_EXPM1L:
+      if (REAL_MODE_FORMAT (TYPE_MODE (long_double_type_node))->emax == 16384)
+	return get_no_error_domain (BUILT_IN_EXPF128);
+      return get_no_error_domain (BUILT_IN_EXP);
+    case BUILT_IN_EXPF64X:
+    case BUILT_IN_EXPM1F64X:
+      if (REAL_MODE_FORMAT (TYPE_MODE (float64x_type_node))->emax == 16384)
+	return get_no_error_domain (BUILT_IN_EXPF128);
+      return get_no_error_domain (BUILT_IN_EXP);
     case BUILT_IN_EXP2F16:
       /* exp2f16: (-inf, 16)  */
       return get_domain (-1, false, false,
@@ -785,8 +805,8 @@  get_no_error_domain (enum built_in_funct
       return get_domain (-1, false, false,
                          128, true, false);
     case BUILT_IN_EXP2:
-    case BUILT_IN_EXP2L:
     case BUILT_IN_EXP2F64:
+    case BUILT_IN_EXP2F32X:
       /* exp2: (-inf, 1024)  */
       return get_domain (-1, false, false,
                          1024, true, false);
@@ -794,6 +814,14 @@  get_no_error_domain (enum built_in_funct
       /* exp2f128: (-inf, 16384)  */
       return get_domain (-1, false, false,
 			 16384, true, false);
+    case BUILT_IN_EXP2L:
+      if (REAL_MODE_FORMAT (TYPE_MODE (long_double_type_node))->emax == 16384)
+	return get_no_error_domain (BUILT_IN_EXP2F128);
+      return get_no_error_domain (BUILT_IN_EXP2);
+    case BUILT_IN_EXP2F64X:
+      if (REAL_MODE_FORMAT (TYPE_MODE (float64x_type_node))->emax == 16384)
+	return get_no_error_domain (BUILT_IN_EXP2F128);
+      return get_no_error_domain (BUILT_IN_EXP2);
     case BUILT_IN_EXP10F:
     case BUILT_IN_POW10F:
       /* exp10f: (-inf, 38)  */
@@ -801,11 +829,16 @@  get_no_error_domain (enum built_in_funct
                          38, true, false);
     case BUILT_IN_EXP10:
     case BUILT_IN_POW10:
-    case BUILT_IN_EXP10L:
-    case BUILT_IN_POW10L:
       /* exp10: (-inf, 308)  */
       return get_domain (-1, false, false,
                          308, true, false);
+    case BUILT_IN_EXP10L:
+    case BUILT_IN_POW10L:
+      if (REAL_MODE_FORMAT (TYPE_MODE (long_double_type_node))->emax == 16384)
+	/* exp10l: (-inf, 4932)  */
+	return get_domain (-1, false, false,
+			   4932, true, false);
+      return get_no_error_domain (BUILT_IN_EXP10);
     /* sqrt: [0, +inf)  */
     CASE_FLT_FN (BUILT_IN_SQRT):
     CASE_FLT_FN_FLOATN_NX (BUILT_IN_SQRT):
--- gcc/testsuite/gcc.dg/tree-ssa/pr113993.c.jj	2024-02-20 09:51:59.755613591 +0100
+++ gcc/testsuite/gcc.dg/tree-ssa/pr113993.c	2024-02-20 09:52:28.815210185 +0100
@@ -0,0 +1,299 @@ 
+/* PR tree-optimization/113993 */
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+/* { dg-add-options float32 } */
+/* { dg-add-options float64 } */
+/* { dg-add-options float128 } */
+/* { dg-add-options float32x } */
+/* { dg-add-options float64x } */
+/* { dg-final { scan-tree-dump-not "__builtin_\[a-z0-9\]* \\\(\[^\n\r\]*\\\);" "optimized" } } */
+
+void
+flt (float f1, float f2, float f3, float f4, float f5,
+     float f6, float f7, float f8, float f9, float f10)
+{
+  if (!(f1 >= -1.0f && f1 <= 1.0f)) __builtin_unreachable ();
+  __builtin_acosf (f1);
+  __builtin_asinf (f1);
+  if (!(f2 >= 1.0f && f2 <= __builtin_inff ())) __builtin_unreachable ();
+  __builtin_acoshf (f2);
+  if (!(f3 > -1.0f && f3 < 1.0f)) __builtin_unreachable ();
+  __builtin_atanhf (f3);
+  if (!(f4 > 0.0f && f4 < __builtin_inff ())) __builtin_unreachable ();
+  __builtin_logf (f4);
+  __builtin_log2f (f4);
+  __builtin_log10f (f4);
+  if (!(f5 > -1.0f && f5 < __builtin_inff ())) __builtin_unreachable ();
+  __builtin_log1pf (f5);
+  if (!(f6 >= 0.0f && f6 < __builtin_inff ())) __builtin_unreachable ();
+  __builtin_sqrtf (f6);
+#if __FLT_MANT_DIG__ == __FLT32_MANT_DIG__ && __FLT_MAX_EXP__ == __FLT32_MAX_EXP__
+  if (!(f7 > -89.0f && f7 < 89.0f)) __builtin_unreachable ();
+  __builtin_coshf (f7);
+  __builtin_sinhf (f7);
+  if (!(f8 > -__builtin_inff () && f8 < 88.0f)) __builtin_unreachable ();
+  __builtin_expf (f8);
+  if (!(f9 > -__builtin_inff () && f9 < 128.0f)) __builtin_unreachable ();
+  __builtin_exp2f (f9);
+  if (!(f10 > -__builtin_inff () && f10 < 38.0f)) __builtin_unreachable ();
+  __builtin_exp10f (f10);
+#endif
+}
+
+#if defined(__FLT16_MANT_DIG__) && 0 /* No library routines here, these don't actually fold away.  */
+void
+flt16 (_Float16 f1, _Float16 f2, _Float16 f3, _Float16 f4, _Float16 f5,
+       _Float16 f6, _Float16 f7, _Float16 f8, _Float16 f9)
+{
+  if (!(f1 >= -1.0f16 && f1 <= 1.0f16)) __builtin_unreachable ();
+  __builtin_acosf16 (f1);
+  __builtin_asinf16 (f1);
+  if (!(f2 >= 1.0f16 && f2 <= __builtin_inff16 ())) __builtin_unreachable ();
+  __builtin_acoshf16 (f2);
+  if (!(f3 > -1.0f16 && f3 < 1.0f16)) __builtin_unreachable ();
+  __builtin_atanhf16 (f3);
+  if (!(f4 > 0.0f16 && f4 < __builtin_inff16 ())) __builtin_unreachable ();
+  __builtin_logf16 (f4);
+  __builtin_log2f16 (f4);
+  __builtin_log10f16 (f4);
+  if (!(f5 > -1.0f16 && f5 < __builtin_inff16 ())) __builtin_unreachable ();
+  __builtin_log1pf16 (f5);
+  if (!(f6 >= 0.0f16 && f6 < __builtin_inff16 ())) __builtin_unreachable ();
+  __builtin_sqrtf16 (f6);
+  if (!(f7 > -11.0f16 && f7 < 11.0f16)) __builtin_unreachable ();
+  __builtin_coshf16 (f7);
+  __builtin_sinhf16 (f7);
+  if (!(f8 > -__builtin_inff16 () && f8 < 11.0f16)) __builtin_unreachable ();
+  __builtin_expf16 (f8);
+  if (!(f9 > -__builtin_inff16 () && f9 < 16.0f16)) __builtin_unreachable ();
+  __builtin_exp2f16 (f9);
+}
+#endif
+
+#ifdef __FLT32_MANT_DIG__
+void
+flt32 (_Float32 f1, _Float32 f2, _Float32 f3, _Float32 f4, _Float32 f5,
+       _Float32 f6, _Float32 f7, _Float32 f8, _Float32 f9)
+{
+  if (!(f1 >= -1.0f32 && f1 <= 1.0f32)) __builtin_unreachable ();
+  __builtin_acosf32 (f1);
+  __builtin_asinf32 (f1);
+  if (!(f2 >= 1.0f32 && f2 <= __builtin_inff32 ())) __builtin_unreachable ();
+  __builtin_acoshf32 (f2);
+  if (!(f3 > -1.0f32 && f3 < 1.0f32)) __builtin_unreachable ();
+  __builtin_atanhf32 (f3);
+  if (!(f4 > 0.0f32 && f4 < __builtin_inff32 ())) __builtin_unreachable ();
+  __builtin_logf32 (f4);
+  __builtin_log2f32 (f4);
+  __builtin_log10f32 (f4);
+  if (!(f5 > -1.0f32 && f5 < __builtin_inff32 ())) __builtin_unreachable ();
+  __builtin_log1pf32 (f5);
+  if (!(f6 >= 0.0f32 && f6 < __builtin_inff32 ())) __builtin_unreachable ();
+  __builtin_sqrtf32 (f6);
+  if (!(f7 > -89.0f32 && f7 < 89.0f32)) __builtin_unreachable ();
+  __builtin_coshf32 (f7);
+  __builtin_sinhf32 (f7);
+  if (!(f8 > -__builtin_inff32 () && f8 < 88.0f32)) __builtin_unreachable ();
+  __builtin_expf32 (f8);
+  if (!(f9 > -__builtin_inff32 () && f9 < 128.0f32)) __builtin_unreachable ();
+  __builtin_exp2f32 (f9);
+}
+#endif
+
+void
+dbl (double f1, double f2, double f3, double f4, double f5,
+     double f6, double f7, double f8, double f9, double f10)
+{
+  if (!(f1 >= -1.0 && f1 <= 1.0)) __builtin_unreachable ();
+  __builtin_acos (f1);
+  __builtin_asin (f1);
+  if (!(f2 >= 1.0 && f2 <= __builtin_inf ())) __builtin_unreachable ();
+  __builtin_acosh (f2);
+  if (!(f3 > -1.0 && f3 < 1.0)) __builtin_unreachable ();
+  __builtin_atanh (f3);
+  if (!(f4 > 0.0 && f4 < __builtin_inf ())) __builtin_unreachable ();
+  __builtin_log (f4);
+  __builtin_log2 (f4);
+  __builtin_log10 (f4);
+  if (!(f5 > -1.0 && f5 < __builtin_inf ())) __builtin_unreachable ();
+  __builtin_log1p (f5);
+  if (!(f6 >= 0.0 && f6 < __builtin_inf ())) __builtin_unreachable ();
+  __builtin_sqrt (f6);
+#if __DBL_MANT_DIG__ == __FLT64_MANT_DIG__ && __DBL_MAX_EXP__ == __FLT64_MAX_EXP__
+  if (!(f7 > -710.0 && f7 < 710.0)) __builtin_unreachable ();
+  __builtin_cosh (f7);
+  __builtin_sinh (f7);
+  if (!(f8 > -__builtin_inf () && f8 < 709.0)) __builtin_unreachable ();
+  __builtin_exp (f8);
+  if (!(f9 > -__builtin_inf () && f9 < 1024.0)) __builtin_unreachable ();
+  __builtin_exp2 (f9);
+  if (!(f10 > -__builtin_inf () && f10 < 308.0)) __builtin_unreachable ();
+  __builtin_exp10 (f10);
+#endif
+}
+
+#ifdef __FLT64_MANT_DIG__
+void
+flt64 (_Float64 f1, _Float64 f2, _Float64 f3, _Float64 f4, _Float64 f5,
+       _Float64 f6, _Float64 f7, _Float64 f8, _Float64 f9)
+{
+  if (!(f1 >= -1.0f64 && f1 <= 1.0f64)) __builtin_unreachable ();
+  __builtin_acosf64 (f1);
+  __builtin_asinf64 (f1);
+  if (!(f2 >= 1.0f64 && f2 <= __builtin_inff64 ())) __builtin_unreachable ();
+  __builtin_acoshf64 (f2);
+  if (!(f3 > -1.0f64 && f3 < 1.0f64)) __builtin_unreachable ();
+  __builtin_atanhf64 (f3);
+  if (!(f4 > 0.0f64 && f4 < __builtin_inff64 ())) __builtin_unreachable ();
+  __builtin_logf64 (f4);
+  __builtin_log2f64 (f4);
+  __builtin_log10f64 (f4);
+  if (!(f5 > -1.0f64 && f5 < __builtin_inff64 ())) __builtin_unreachable ();
+  __builtin_log1pf64 (f5);
+  if (!(f6 >= 0.0f64 && f6 < __builtin_inff64 ())) __builtin_unreachable ();
+  __builtin_sqrtf64 (f6);
+  if (!(f7 > -710.0f64 && f7 < 710.0f64)) __builtin_unreachable ();
+  __builtin_coshf64 (f7);
+  __builtin_sinhf64 (f7);
+  if (!(f8 > -__builtin_inff64 () && f8 < 709.0f64)) __builtin_unreachable ();
+  __builtin_expf64 (f8);
+  if (!(f9 > -__builtin_inff64 () && f9 < 1024.0f64)) __builtin_unreachable ();
+  __builtin_exp2f64 (f9);
+}
+#endif
+
+#ifdef __FLT32X_MANT_DIG__
+void
+flt32x (_Float32x f1, _Float32x f2, _Float32x f3, _Float32x f4, _Float32x f5,
+	_Float32x f6, _Float32x f7, _Float32x f8, _Float32x f9)
+{
+  if (!(f1 >= -1.0f32x && f1 <= 1.0f32x)) __builtin_unreachable ();
+  __builtin_acosf32x (f1);
+  __builtin_asinf32x (f1);
+  if (!(f2 >= 1.0f32x && f2 <= __builtin_inff32x ())) __builtin_unreachable ();
+  __builtin_acoshf32x (f2);
+  if (!(f3 > -1.0f32x && f3 < 1.0f32x)) __builtin_unreachable ();
+  __builtin_atanhf32x (f3);
+  if (!(f4 > 0.0f32x && f4 < __builtin_inff32x ())) __builtin_unreachable ();
+  __builtin_logf32x (f4);
+  __builtin_log2f32x (f4);
+  __builtin_log10f32x (f4);
+  if (!(f5 > -1.0f32x && f5 < __builtin_inff32x ())) __builtin_unreachable ();
+  __builtin_log1pf32x (f5);
+  if (!(f6 >= 0.0f32x && f6 < __builtin_inff32x ())) __builtin_unreachable ();
+  __builtin_sqrtf32x (f6);
+#if __FLT32X_MANT_DIG__ == __FLT64_MANT_DIG__ && __FLT32X_MAX_EXP__ == __FLT64_MAX_EXP__
+  if (!(f7 > -710.0f32x && f7 < 710.0f32x)) __builtin_unreachable ();
+  __builtin_coshf32x (f7);
+  __builtin_sinhf32x (f7);
+  if (!(f8 > -__builtin_inff32x () && f8 < 709.0f32x)) __builtin_unreachable ();
+  __builtin_expf32x (f8);
+  if (!(f9 > -__builtin_inff32x () && f9 < 1024.0f32x)) __builtin_unreachable ();
+  __builtin_exp2f32x (f9);
+#endif
+}
+#endif
+
+void
+ldbl (long double f1, long double f2, long double f3, long double f4, long double f5,
+      long double f6, long double f7, long double f8, long double f9, long double f10)
+{
+  if (!(f1 >= -1.0L && f1 <= 1.0L)) __builtin_unreachable ();
+  __builtin_acosl (f1);
+  __builtin_asinl (f1);
+  if (!(f2 >= 1.0L && f2 <= __builtin_infl ())) __builtin_unreachable ();
+  __builtin_acoshl (f2);
+  if (!(f3 > -1.0L && f3 < 1.0L)) __builtin_unreachable ();
+  __builtin_atanhl (f3);
+  if (!(f4 > 0.0L && f4 < __builtin_infl ())) __builtin_unreachable ();
+  __builtin_logl (f4);
+  __builtin_log2l (f4);
+  __builtin_log10l (f4);
+  if (!(f5 > -1.0L && f5 < __builtin_infl ())) __builtin_unreachable ();
+  __builtin_log1pl (f5);
+  if (!(f6 >= 0.0L && f6 < __builtin_infl ())) __builtin_unreachable ();
+  __builtin_sqrtl (f6);
+#if __LDBL_MAX_EXP__ == 16384
+  if (!(f7 > -11357.0L && f7 < 11357.0L)) __builtin_unreachable ();
+  __builtin_coshl (f7);
+  __builtin_sinhl (f7);
+  if (!(f8 > -__builtin_infl () && f8 < 11356.0L)) __builtin_unreachable ();
+  __builtin_expl (f8);
+  if (!(f9 > -__builtin_infl () && f9 < 16384.0L)) __builtin_unreachable ();
+  __builtin_exp2l (f9);
+  if (!(f10 > -__builtin_infl () && f10 < 4932.0L)) __builtin_unreachable ();
+  __builtin_exp10l (f10);
+#elif __LDBL_MANT_DIG__ == __FLT64_MANT_DIG__ && __LDBL_MAX_EXP__ == __FLT64_MAX_EXP__
+  if (!(f7 > -710.0L && f7 < 710.0L)) __builtin_unreachable ();
+  __builtin_coshl (f7);
+  __builtin_sinhl (f7);
+  if (!(f8 > -__builtin_infl () && f8 < 709.0L)) __builtin_unreachable ();
+  __builtin_expl (f8);
+  if (!(f9 > -__builtin_infl () && f9 < 1024.0L)) __builtin_unreachable ();
+  __builtin_exp2l (f9);
+  if (!(f10 > -__builtin_infl () && f10 < 308.0L)) __builtin_unreachable ();
+  __builtin_exp10l (f10);
+#endif
+}
+
+#ifdef __FLT128_MANT_DIG__
+void
+flt128 (_Float128 f1, _Float128 f2, _Float128 f3, _Float128 f4, _Float128 f5,
+	_Float128 f6, _Float128 f7, _Float128 f8, _Float128 f9)
+{
+  if (!(f1 >= -1.0f128 && f1 <= 1.0f128)) __builtin_unreachable ();
+  __builtin_acosf128 (f1);
+  __builtin_asinf128 (f1);
+  if (!(f2 >= 1.0f128 && f2 <= __builtin_inff128 ())) __builtin_unreachable ();
+  __builtin_acoshf128 (f2);
+  if (!(f3 > -1.0f128 && f3 < 1.0f128)) __builtin_unreachable ();
+  __builtin_atanhf128 (f3);
+  if (!(f4 > 0.0f128 && f4 < __builtin_inff128 ())) __builtin_unreachable ();
+  __builtin_logf128 (f4);
+  __builtin_log2f128 (f4);
+  __builtin_log10f128 (f4);
+  if (!(f5 > -1.0f128 && f5 < __builtin_inff128 ())) __builtin_unreachable ();
+  __builtin_log1pf128 (f5);
+  if (!(f6 >= 0.0f128 && f6 < __builtin_inff128 ())) __builtin_unreachable ();
+  __builtin_sqrtf128 (f6);
+  if (!(f7 > -11357.0f128 && f7 < 11357.0f128)) __builtin_unreachable ();
+  __builtin_coshf128 (f7);
+  __builtin_sinhf128 (f7);
+  if (!(f8 > -__builtin_inff128 () && f8 < 11356.0f128)) __builtin_unreachable ();
+  __builtin_expf128 (f8);
+  if (!(f9 > -__builtin_inff128 () && f9 < 16384.0f128)) __builtin_unreachable ();
+  __builtin_exp2f128 (f9);
+}
+#endif
+
+#ifdef __FLT64X_MANT_DIG__
+void
+flt64x (_Float64x f1, _Float64x f2, _Float64x f3, _Float64x f4, _Float64x f5,
+	_Float64x f6, _Float64x f7, _Float64x f8, _Float64x f9)
+{
+  if (!(f1 >= -1.0f64x && f1 <= 1.0f64x)) __builtin_unreachable ();
+  __builtin_acosf64x (f1);
+  __builtin_asinf64x (f1);
+  if (!(f2 >= 1.0f64x && f2 <= __builtin_inff64x ())) __builtin_unreachable ();
+  __builtin_acoshf64x (f2);
+  if (!(f3 > -1.0f64x && f3 < 1.0f64x)) __builtin_unreachable ();
+  __builtin_atanhf64x (f3);
+  if (!(f4 > 0.0f64x && f4 < __builtin_inff64x ())) __builtin_unreachable ();
+  __builtin_logf64x (f4);
+  __builtin_log2f64x (f4);
+  __builtin_log10f64x (f4);
+  if (!(f5 > -1.0f64x && f5 < __builtin_inff64x ())) __builtin_unreachable ();
+  __builtin_log1pf64x (f5);
+  if (!(f6 >= 0.0f64x && f6 < __builtin_inff64x ())) __builtin_unreachable ();
+  __builtin_sqrtf64x (f6);
+#if __FLT64X_MAX_EXP__ == 16384
+  if (!(f7 > -11357.0f64x && f7 < 11357.0f64x)) __builtin_unreachable ();
+  __builtin_coshf64x (f7);
+  __builtin_sinhf64x (f7);
+  if (!(f8 > -__builtin_inff64x () && f8 < 11356.0f64x)) __builtin_unreachable ();
+  __builtin_expf64x (f8);
+  if (!(f9 > -__builtin_inff64x () && f9 < 16384.0f64x)) __builtin_unreachable ();
+  __builtin_exp2f64x (f9);
+#endif
+}
+#endif