From patchwork Mon Dec  4 07:36:58 2023
X-Patchwork-Submitter: Jakub Jelinek
X-Patchwork-Id: 173091
Date: Mon, 4 Dec 2023 08:36:58 +0100
From: Jakub Jelinek
To: Sandra Loosemore
Cc: Gerald Pfeifer, Joseph Myers, gcc-patches@gcc.gnu.org
Subject: [PATCH] extend.texi: Mark builtin arguments with @var{...}

On Fri, Dec 01, 2023 at 10:43:57AM -0700, Sandra Loosemore wrote:
> On 12/1/23 10:33, Jakub Jelinek wrote:
> > Shall we tweak that somehow?  If the argument names are unimportant, perhaps
> > it is fine to leave that out, but shouldn't we always use @var{...} around
> > the parameter names when specified?
>
> Yup.
> The Texinfo manual says: "When using @deftypefn command and
> variations, you should mark parameter names with @var to distinguish these
> from data type names, keywords, and other parts of the literal syntax of the
> programming language."

Here is a patch which does that (but not adding types where they were
missing; that will be harder to search for).

Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

2023-12-04  Jakub Jelinek

	* doc/extend.texi (__sync_fetch_and_add, __sync_fetch_and_sub,
	__sync_fetch_and_or, __sync_fetch_and_and, __sync_fetch_and_xor,
	__sync_fetch_and_nand, __sync_add_and_fetch, __sync_sub_and_fetch,
	__sync_or_and_fetch, __sync_and_and_fetch, __sync_xor_and_fetch,
	__sync_nand_and_fetch, __sync_bool_compare_and_swap,
	__sync_val_compare_and_swap, __sync_lock_test_and_set,
	__sync_lock_release, __atomic_load_n, __atomic_load,
	__atomic_store_n, __atomic_store, __atomic_exchange_n,
	__atomic_exchange, __atomic_compare_exchange_n,
	__atomic_compare_exchange, __atomic_add_fetch, __atomic_sub_fetch,
	__atomic_and_fetch, __atomic_xor_fetch, __atomic_or_fetch,
	__atomic_nand_fetch, __atomic_fetch_add, __atomic_fetch_sub,
	__atomic_fetch_and, __atomic_fetch_xor, __atomic_fetch_or,
	__atomic_fetch_nand, __atomic_test_and_set, __atomic_clear,
	__atomic_thread_fence, __atomic_signal_fence,
	__atomic_always_lock_free, __atomic_is_lock_free,
	__builtin_add_overflow, __builtin_sadd_overflow,
	__builtin_saddl_overflow, __builtin_saddll_overflow,
	__builtin_uadd_overflow, __builtin_uaddl_overflow,
	__builtin_uaddll_overflow, __builtin_sub_overflow,
	__builtin_ssub_overflow, __builtin_ssubl_overflow,
	__builtin_ssubll_overflow, __builtin_usub_overflow,
	__builtin_usubl_overflow, __builtin_usubll_overflow,
	__builtin_mul_overflow, __builtin_smul_overflow,
	__builtin_smull_overflow, __builtin_smulll_overflow,
	__builtin_umul_overflow, __builtin_umull_overflow,
	__builtin_umulll_overflow, __builtin_add_overflow_p,
	__builtin_sub_overflow_p, __builtin_mul_overflow_p,
	__builtin_addc, __builtin_addcl, __builtin_addcll, __builtin_subc,
	__builtin_subcl, __builtin_subcll, __builtin_alloca,
	__builtin_alloca_with_align, __builtin_alloca_with_align_and_max,
	__builtin_speculation_safe_value, __builtin_nan, __builtin_nand32,
	__builtin_nand64, __builtin_nand128, __builtin_nanf, __builtin_nanl,
	__builtin_nanf@var{n}, __builtin_nanf@var{n}x, __builtin_nans,
	__builtin_nansd32, __builtin_nansd64, __builtin_nansd128,
	__builtin_nansf, __builtin_nansl, __builtin_nansf@var{n},
	__builtin_nansf@var{n}x, __builtin_ffs, __builtin_clz,
	__builtin_ctz, __builtin_clrsb, __builtin_popcount,
	__builtin_parity, __builtin_bswap16, __builtin_bswap32,
	__builtin_bswap64, __builtin_bswap128, __builtin_extend_pointer,
	__builtin_goacc_parlevel_id, __builtin_goacc_parlevel_size,
	vec_clrl, vec_clrr, vec_mulh, vec_mul, vec_div, vec_dive, vec_mod,
	__builtin_rx_mvtc): Use @var{...} around parameter names.
	(vec_rl, vec_sl, vec_sr, vec_sra): Likewise.  Use @var{...} also
	around A, B and R in description.

	Jakub

--- gcc/doc/extend.texi.jj	2023-12-01 16:57:27.577890670 +0100
+++ gcc/doc/extend.texi	2023-12-02 10:35:16.509472645 +0100
@@ -12733,12 +12733,12 @@ variables to be protected.  The list is
 empty.  GCC interprets an empty list as meaning that all globally
 accessible variables should be protected.
 
-@defbuiltin{@var{type} __sync_fetch_and_add (@var{type} *ptr, @var{type} value, ...)}
-@defbuiltinx{@var{type} __sync_fetch_and_sub (@var{type} *ptr, @var{type} value, ...)}
-@defbuiltinx{@var{type} __sync_fetch_and_or (@var{type} *ptr, @var{type} value, ...)}
-@defbuiltinx{@var{type} __sync_fetch_and_and (@var{type} *ptr, @var{type} value, ...)}
-@defbuiltinx{@var{type} __sync_fetch_and_xor (@var{type} *ptr, @var{type} value, ...)}
-@defbuiltinx{@var{type} __sync_fetch_and_nand (@var{type} *ptr, @var{type} value, ...)}
+@defbuiltin{@var{type} __sync_fetch_and_add (@var{type} *@var{ptr}, @var{type} @var{value}, ...)}
+@defbuiltinx{@var{type} __sync_fetch_and_sub (@var{type} *@var{ptr}, @var{type} @var{value}, ...)}
+@defbuiltinx{@var{type} __sync_fetch_and_or (@var{type} *@var{ptr}, @var{type} @var{value}, ...)}
+@defbuiltinx{@var{type} __sync_fetch_and_and (@var{type} *@var{ptr}, @var{type} @var{value}, ...)}
+@defbuiltinx{@var{type} __sync_fetch_and_xor (@var{type} *@var{ptr}, @var{type} @var{value}, ...)}
+@defbuiltinx{@var{type} __sync_fetch_and_nand (@var{type} *@var{ptr}, @var{type} @var{value}, ...)}
 These built-in functions perform the operation suggested by the name, and
 returns the value that had previously been in memory.  That is, operations
 on integer operands have the following semantics.  Operations on pointer
@@ -12758,13 +12758,13 @@ type.  It must not be a boolean type.
 as @code{*ptr = ~(tmp & value)} instead of @code{*ptr = ~tmp & value}.
 @enddefbuiltin
 
-@defbuiltin{@var{type} __sync_add_and_fetch (@var{type} *ptr, @
-                                             @var{type} value, ...)}
-@defbuiltinx{@var{type} __sync_sub_and_fetch (@var{type} *ptr, @var{type} value, ...)}
-@defbuiltinx{@var{type} __sync_or_and_fetch (@var{type} *ptr, @var{type} value, ...)}
-@defbuiltinx{@var{type} __sync_and_and_fetch (@var{type} *ptr, @var{type} value, ...)}
-@defbuiltinx{@var{type} __sync_xor_and_fetch (@var{type} *ptr, @var{type} value, ...)}
-@defbuiltinx{@var{type} __sync_nand_and_fetch (@var{type} *ptr, @var{type} value, ...)}
+@defbuiltin{@var{type} __sync_add_and_fetch (@var{type} *@var{ptr}, @
+                                             @var{type} @var{value}, ...)}
+@defbuiltinx{@var{type} __sync_sub_and_fetch (@var{type} *@var{ptr}, @var{type} @var{value}, ...)}
+@defbuiltinx{@var{type} __sync_or_and_fetch (@var{type} *@var{ptr}, @var{type} @var{value}, ...)}
+@defbuiltinx{@var{type} __sync_and_and_fetch (@var{type} *@var{ptr}, @var{type} @var{value}, ...)}
+@defbuiltinx{@var{type} __sync_xor_and_fetch (@var{type} *@var{ptr}, @var{type} @var{value}, ...)}
+@defbuiltinx{@var{type} __sync_nand_and_fetch (@var{type} *@var{ptr}, @var{type} @var{value}, ...)}
 These built-in functions perform the operation suggested by the name, and
 return the new value.  That is, operations on integer operands have the
 following semantics.  Operations on pointer operands are performed as
@@ -12783,8 +12783,8 @@ as @code{*ptr = ~(*ptr & value)} instead
 @code{*ptr = ~*ptr & value}.
 @enddefbuiltin
 
-@defbuiltin{bool __sync_bool_compare_and_swap (@var{type} *ptr, @var{type} oldval, @var{type} newval, ...)}
-@defbuiltinx{@var{type} __sync_val_compare_and_swap (@var{type} *ptr, @var{type} oldval, @var{type} newval, ...)}
+@defbuiltin{bool __sync_bool_compare_and_swap (@var{type} *@var{ptr}, @var{type} @var{oldval}, @var{type} @var{newval}, ...)}
+@defbuiltinx{@var{type} __sync_val_compare_and_swap (@var{type} *@var{ptr}, @var{type} @var{oldval}, @var{type} @var{newval}, ...)}
 These built-in functions perform an atomic compare and swap.  That is, if
 the current value of @code{*@var{ptr}} is @var{oldval}, then write
 @var{newval} into
@@ -12799,7 +12799,7 @@ of @code{*@var{ptr}} before the operatio
 This built-in function issues a full memory barrier.
 @enddefbuiltin
 
-@defbuiltin{@var{type} __sync_lock_test_and_set (@var{type} *ptr, @var{type} value, ...)}
+@defbuiltin{@var{type} __sync_lock_test_and_set (@var{type} *@var{ptr}, @var{type} @var{value}, ...)}
 This built-in function, as described by Intel, is not a traditional
 test-and-set operation, but rather an atomic exchange operation.  It writes
 @var{value} into @code{*@var{ptr}}, and returns the previous contents of
@@ -12819,7 +12819,7 @@ be globally visible yet, and previous me
 satisfied.
 @enddefbuiltin
 
-@defbuiltin{void __sync_lock_release (@var{type} *ptr, ...)}
+@defbuiltin{void __sync_lock_release (@var{type} *@var{ptr}, ...)}
 This built-in function releases the lock acquired by
 @code{__sync_lock_test_and_set}.  Normally this means writing the
 constant 0 to @code{*@var{ptr}}.
@@ -12936,7 +12936,7 @@ reserved for the memory order.  The rema
 for target use and should be 0.  Use of the predefined atomic values
 ensures proper usage.
 
-@defbuiltin{@var{type} __atomic_load_n (@var{type} *ptr, int memorder)}
+@defbuiltin{@var{type} __atomic_load_n (@var{type} *@var{ptr}, int @var{memorder})}
 This built-in function implements an atomic load operation.  It returns the
 contents of @code{*@var{ptr}}.
@@ -12946,13 +12946,13 @@ and @code{__ATOMIC_CONSUME}.
 
 @enddefbuiltin
 
-@defbuiltin{void __atomic_load (@var{type} *ptr, @var{type} *ret, int memorder)}
+@defbuiltin{void __atomic_load (@var{type} *@var{ptr}, @var{type} *@var{ret}, int @var{memorder})}
 This is the generic version of an atomic load.  It returns the
 contents of @code{*@var{ptr}} in @code{*@var{ret}}.
 @enddefbuiltin
 
-@defbuiltin{void __atomic_store_n (@var{type} *ptr, @var{type} val, int memorder)}
+@defbuiltin{void __atomic_store_n (@var{type} *@var{ptr}, @var{type} @var{val}, int @var{memorder})}
 This built-in function implements an atomic store operation.  It writes
 @code{@var{val}} into @code{*@var{ptr}}.
 
@@ -12961,13 +12961,13 @@ The valid memory order variants are
 @enddefbuiltin
 
-@defbuiltin{void __atomic_store (@var{type} *ptr, @var{type} *val, int memorder)}
+@defbuiltin{void __atomic_store (@var{type} *@var{ptr}, @var{type} *@var{val}, int @var{memorder})}
 This is the generic version of an atomic store.  It stores the value
 of @code{*@var{val}} into @code{*@var{ptr}}.
 @enddefbuiltin
 
-@defbuiltin{@var{type} __atomic_exchange_n (@var{type} *ptr, @var{type} val, int memorder)}
+@defbuiltin{@var{type} __atomic_exchange_n (@var{type} *@var{ptr}, @var{type} @var{val}, int @var{memorder})}
 This built-in function implements an atomic exchange operation.  It writes
 @var{val} into @code{*@var{ptr}}, and returns the previous contents of
 @code{*@var{ptr}}.
 
@@ -12976,14 +12976,14 @@ All memory order variants are valid.
 @enddefbuiltin
 
-@defbuiltin{void __atomic_exchange (@var{type} *ptr, @var{type} *val, @var{type} *ret, int memorder)}
+@defbuiltin{void __atomic_exchange (@var{type} *@var{ptr}, @var{type} *@var{val}, @var{type} *@var{ret}, int @var{memorder})}
 This is the generic version of an atomic exchange.  It stores the
 contents of @code{*@var{val}} into @code{*@var{ptr}}. The original value
 of @code{*@var{ptr}} is copied into @code{*@var{ret}}.
 @enddefbuiltin
 
-@defbuiltin{bool __atomic_compare_exchange_n (@var{type} *ptr, @var{type} *expected, @var{type} desired, bool weak, int success_memorder, int failure_memorder)}
+@defbuiltin{bool __atomic_compare_exchange_n (@var{type} *@var{ptr}, @var{type} *@var{expected}, @var{type} @var{desired}, bool @var{weak}, int @var{success_memorder}, int @var{failure_memorder})}
 This built-in function implements an atomic compare and exchange operation.
 This compares the contents of @code{*@var{ptr}} with the contents of
 @code{*@var{expected}}.  If equal, the operation is a @emph{read-modify-write}
@@ -13007,7 +13007,7 @@ stronger order than that specified by @v
 @enddefbuiltin
 
-@defbuiltin{bool __atomic_compare_exchange (@var{type} *ptr, @var{type} *expected, @var{type} *desired, bool weak, int success_memorder, int failure_memorder)}
+@defbuiltin{bool __atomic_compare_exchange (@var{type} *@var{ptr}, @var{type} *@var{expected}, @var{type} *@var{desired}, bool @var{weak}, int @var{success_memorder}, int @var{failure_memorder})}
 This built-in function implements the generic version of
 @code{__atomic_compare_exchange}.  The function is virtually identical to
 @code{__atomic_compare_exchange_n}, except the desired value is also a
@@ -13015,12 +13015,12 @@ pointer.
 @enddefbuiltin
 
-@defbuiltin{@var{type} __atomic_add_fetch (@var{type} *ptr, @var{type} val, int memorder)}
-@defbuiltinx{@var{type} __atomic_sub_fetch (@var{type} *ptr, @var{type} val, int memorder)}
-@defbuiltinx{@var{type} __atomic_and_fetch (@var{type} *ptr, @var{type} val, int memorder)}
-@defbuiltinx{@var{type} __atomic_xor_fetch (@var{type} *ptr, @var{type} val, int memorder)}
-@defbuiltinx{@var{type} __atomic_or_fetch (@var{type} *ptr, @var{type} val, int memorder)}
-@defbuiltinx{@var{type} __atomic_nand_fetch (@var{type} *ptr, @var{type} val, int memorder)}
+@defbuiltin{@var{type} __atomic_add_fetch (@var{type} *@var{ptr}, @var{type} @var{val}, int @var{memorder})}
+@defbuiltinx{@var{type} __atomic_sub_fetch (@var{type} *@var{ptr}, @var{type} @var{val}, int @var{memorder})}
+@defbuiltinx{@var{type} __atomic_and_fetch (@var{type} *@var{ptr}, @var{type} @var{val}, int @var{memorder})}
+@defbuiltinx{@var{type} __atomic_xor_fetch (@var{type} *@var{ptr}, @var{type} @var{val}, int @var{memorder})}
+@defbuiltinx{@var{type} __atomic_or_fetch (@var{type} *@var{ptr}, @var{type} @var{val}, int @var{memorder})}
+@defbuiltinx{@var{type} __atomic_nand_fetch (@var{type} *@var{ptr}, @var{type} @var{val}, int @var{memorder})}
 These built-in functions perform the operation suggested by the name, and
 return the result of the operation.  Operations on pointer arguments are
 performed as if the operands were of the @code{uintptr_t} type.  That is,
@@ -13036,12 +13036,12 @@ type.  It must not be a boolean type.  A
 
 @enddefbuiltin
 
-@defbuiltin{@var{type} __atomic_fetch_add (@var{type} *ptr, @var{type} val, int memorder)}
-@defbuiltinx{@var{type} __atomic_fetch_sub (@var{type} *ptr, @var{type} val, int memorder)}
-@defbuiltinx{@var{type} __atomic_fetch_and (@var{type} *ptr, @var{type} val, int memorder)}
-@defbuiltinx{@var{type} __atomic_fetch_xor (@var{type} *ptr, @var{type} val, int memorder)}
-@defbuiltinx{@var{type} __atomic_fetch_or (@var{type} *ptr, @var{type} val, int memorder)}
-@defbuiltinx{@var{type} __atomic_fetch_nand (@var{type} *ptr, @var{type} val, int memorder)}
+@defbuiltin{@var{type} __atomic_fetch_add (@var{type} *@var{ptr}, @var{type} @var{val}, int @var{memorder})}
+@defbuiltinx{@var{type} __atomic_fetch_sub (@var{type} *@var{ptr}, @var{type} @var{val}, int @var{memorder})}
+@defbuiltinx{@var{type} __atomic_fetch_and (@var{type} *@var{ptr}, @var{type} @var{val}, int @var{memorder})}
+@defbuiltinx{@var{type} __atomic_fetch_xor (@var{type} *@var{ptr}, @var{type} @var{val}, int @var{memorder})}
+@defbuiltinx{@var{type} __atomic_fetch_or (@var{type} *@var{ptr}, @var{type} @var{val}, int @var{memorder})}
+@defbuiltinx{@var{type} __atomic_fetch_nand (@var{type} *@var{ptr}, @var{type} @var{val}, int @var{memorder})}
 These built-in functions perform the operation suggested by the name, and
 return the value that had previously been in @code{*@var{ptr}}.
 Operations on pointer arguments are performed as if the operands were of
@@ -13058,7 +13058,7 @@ The same constraints on arguments apply
 
 @enddefbuiltin
 
-@defbuiltin{bool __atomic_test_and_set (void *ptr, int memorder)}
+@defbuiltin{bool __atomic_test_and_set (void *@var{ptr}, int @var{memorder})}
 
 This built-in function performs an atomic test-and-set operation on
 the byte at @code{*@var{ptr}}.  The byte is set to some implementation
@@ -13071,7 +13071,7 @@ All memory orders are valid.
 @enddefbuiltin
 
-@defbuiltin{void __atomic_clear (bool *ptr, int memorder)}
+@defbuiltin{void __atomic_clear (bool *@var{ptr}, int @var{memorder})}
 
 This built-in function performs an atomic clear operation on
 @code{*@var{ptr}}.  After the operation, @code{*@var{ptr}} contains 0.
@@ -13086,7 +13086,7 @@ The valid memory order variants are
 
 @enddefbuiltin
 
-@defbuiltin{void __atomic_thread_fence (int memorder)}
+@defbuiltin{void __atomic_thread_fence (int @var{memorder})}
 
 This built-in function acts as a synchronization fence between threads
 based on the specified memory order.
@@ -13095,7 +13095,7 @@ All memory orders are valid.
 
 @enddefbuiltin
 
-@defbuiltin{void __atomic_signal_fence (int memorder)}
+@defbuiltin{void __atomic_signal_fence (int @var{memorder})}
 
 This built-in function acts as a synchronization fence between a thread
 and signal handlers based in the same thread.
@@ -13104,7 +13104,7 @@ All memory orders are valid.
 
 @enddefbuiltin
 
-@defbuiltin{bool __atomic_always_lock_free (size_t size, void *ptr)}
+@defbuiltin{bool __atomic_always_lock_free (size_t @var{size}, void *@var{ptr})}
 
 This built-in function returns @code{true} if objects of @var{size} bytes
 always generate lock-free atomic instructions for the target architecture.
@@ -13121,7 +13121,7 @@ if (__atomic_always_lock_free (sizeof (l
 
 @enddefbuiltin
 
-@defbuiltin{bool __atomic_is_lock_free (size_t size, void *ptr)}
+@defbuiltin{bool __atomic_is_lock_free (size_t @var{size}, void *@var{ptr})}
 
 This built-in function returns @code{true} if objects of @var{size} bytes
 always generate lock-free atomic instructions for the target architecture.  If
@@ -13139,13 +13139,13 @@ compiler may also ignore this parameter.
 
 The following built-in functions allow performing simple arithmetic
 operations together with checking whether the operations overflowed.
 
-@defbuiltin{bool __builtin_add_overflow (@var{type1} a, @var{type2} b, @var{type3} *res)}
-@defbuiltinx{bool __builtin_sadd_overflow (int a, int b, int *res)}
-@defbuiltinx{bool __builtin_saddl_overflow (long int a, long int b, long int *res)}
-@defbuiltinx{bool __builtin_saddll_overflow (long long int a, long long int b, long long int *res)}
-@defbuiltinx{bool __builtin_uadd_overflow (unsigned int a, unsigned int b, unsigned int *res)}
-@defbuiltinx{bool __builtin_uaddl_overflow (unsigned long int a, unsigned long int b, unsigned long int *res)}
-@defbuiltinx{bool __builtin_uaddll_overflow (unsigned long long int a, unsigned long long int b, unsigned long long int *res)}
+@defbuiltin{bool __builtin_add_overflow (@var{type1} @var{a}, @var{type2} @var{b}, @var{type3} *@var{res})}
+@defbuiltinx{bool __builtin_sadd_overflow (int @var{a}, int @var{b}, int *@var{res})}
+@defbuiltinx{bool __builtin_saddl_overflow (long int @var{a}, long int @var{b}, long int *@var{res})}
+@defbuiltinx{bool __builtin_saddll_overflow (long long int @var{a}, long long int @var{b}, long long int *@var{res})}
+@defbuiltinx{bool __builtin_uadd_overflow (unsigned int @var{a}, unsigned int @var{b}, unsigned int *@var{res})}
+@defbuiltinx{bool __builtin_uaddl_overflow (unsigned long int @var{a}, unsigned long int @var{b}, unsigned long int *@var{res})}
+@defbuiltinx{bool __builtin_uaddll_overflow (unsigned long long int @var{a}, unsigned long long int @var{b}, unsigned long long int *@var{res})}
 These built-in functions promote the first two operands into infinite
 precision signed type and perform addition on those promoted operands.
 The result is then
@@ -13165,13 +13165,13 @@ after addition, conditional jump on carr
 
 @enddefbuiltin
 
-@defbuiltin{bool __builtin_sub_overflow (@var{type1} a, @var{type2} b, @var{type3} *res)}
-@defbuiltinx{bool __builtin_ssub_overflow (int a, int b, int *res)}
-@defbuiltinx{bool __builtin_ssubl_overflow (long int a, long int b, long int *res)}
-@defbuiltinx{bool __builtin_ssubll_overflow (long long int a, long long int b, long long int *res)}
-@defbuiltinx{bool __builtin_usub_overflow (unsigned int a, unsigned int b, unsigned int *res)}
-@defbuiltinx{bool __builtin_usubl_overflow (unsigned long int a, unsigned long int b, unsigned long int *res)}
-@defbuiltinx{bool __builtin_usubll_overflow (unsigned long long int a, unsigned long long int b, unsigned long long int *res)}
+@defbuiltin{bool __builtin_sub_overflow (@var{type1} @var{a}, @var{type2} @var{b}, @var{type3} *@var{res})}
+@defbuiltinx{bool __builtin_ssub_overflow (int @var{a}, int @var{b}, int *@var{res})}
+@defbuiltinx{bool __builtin_ssubl_overflow (long int @var{a}, long int @var{b}, long int *@var{res})}
+@defbuiltinx{bool __builtin_ssubll_overflow (long long int @var{a}, long long int @var{b}, long long int *@var{res})}
+@defbuiltinx{bool __builtin_usub_overflow (unsigned int @var{a}, unsigned int @var{b}, unsigned int *@var{res})}
+@defbuiltinx{bool __builtin_usubl_overflow (unsigned long int @var{a}, unsigned long int @var{b}, unsigned long int *@var{res})}
+@defbuiltinx{bool __builtin_usubll_overflow (unsigned long long int @var{a}, unsigned long long int @var{b}, unsigned long long int *@var{res})}
 These built-in functions are similar to the add overflow checking built-in
 functions above, except they perform subtraction, subtract the second argument
@@ -13179,13 +13179,13 @@ from the first one, instead of addition.
 @enddefbuiltin
 
-@defbuiltin{bool __builtin_mul_overflow (@var{type1} a, @var{type2} b, @var{type3} *res)}
-@defbuiltinx{bool __builtin_smul_overflow (int a, int b, int *res)}
-@defbuiltinx{bool __builtin_smull_overflow (long int a, long int b, long int *res)}
-@defbuiltinx{bool __builtin_smulll_overflow (long long int a, long long int b, long long int *res)}
-@defbuiltinx{bool __builtin_umul_overflow (unsigned int a, unsigned int b, unsigned int *res)}
-@defbuiltinx{bool __builtin_umull_overflow (unsigned long int a, unsigned long int b, unsigned long int *res)}
-@defbuiltinx{bool __builtin_umulll_overflow (unsigned long long int a, unsigned long long int b, unsigned long long int *res)}
+@defbuiltin{bool __builtin_mul_overflow (@var{type1} @var{a}, @var{type2} @var{b}, @var{type3} *@var{res})}
+@defbuiltinx{bool __builtin_smul_overflow (int @var{a}, int @var{b}, int *@var{res})}
+@defbuiltinx{bool __builtin_smull_overflow (long int @var{a}, long int @var{b}, long int *@var{res})}
+@defbuiltinx{bool __builtin_smulll_overflow (long long int @var{a}, long long int @var{b}, long long int *@var{res})}
+@defbuiltinx{bool __builtin_umul_overflow (unsigned int @var{a}, unsigned int @var{b}, unsigned int *@var{res})}
+@defbuiltinx{bool __builtin_umull_overflow (unsigned long int @var{a}, unsigned long int @var{b}, unsigned long int *@var{res})}
+@defbuiltinx{bool __builtin_umulll_overflow (unsigned long long int @var{a}, unsigned long long int @var{b}, unsigned long long int *@var{res})}
 These built-in functions are similar to the add overflow checking built-in
 functions above, except they perform multiplication, instead of addition.
@@ -13195,9 +13195,9 @@ functions above, except they perform mul
 
 The following built-in functions allow checking if simple arithmetic
 operation would overflow.
 
-@defbuiltin{bool __builtin_add_overflow_p (@var{type1} a, @var{type2} b, @var{type3} c)}
-@defbuiltinx{bool __builtin_sub_overflow_p (@var{type1} a, @var{type2} b, @var{type3} c)}
-@defbuiltinx{bool __builtin_mul_overflow_p (@var{type1} a, @var{type2} b, @var{type3} c)}
+@defbuiltin{bool __builtin_add_overflow_p (@var{type1} @var{a}, @var{type2} @var{b}, @var{type3} @var{c})}
+@defbuiltinx{bool __builtin_sub_overflow_p (@var{type1} @var{a}, @var{type2} @var{b}, @var{type3} @var{c})}
+@defbuiltinx{bool __builtin_mul_overflow_p (@var{type1} @var{a}, @var{type2} @var{b}, @var{type3} @var{c})}
 These built-in functions are similar to @code{__builtin_add_overflow},
 @code{__builtin_sub_overflow}, or @code{__builtin_mul_overflow}, except that
@@ -13237,9 +13237,9 @@ after addition, conditional jump on carr
 
 @enddefbuiltin
 
-@defbuiltin{{unsigned int} __builtin_addc (unsigned int a, unsigned int b, unsigned int carry_in, unsigned int *carry_out)}
-@defbuiltinx{{unsigned long int} __builtin_addcl (unsigned long int a, unsigned long int b, unsigned int carry_in, unsigned long int *carry_out)}
-@defbuiltinx{{unsigned long long int} __builtin_addcll (unsigned long long int a, unsigned long long int b, unsigned long long int carry_in, unsigned long long int *carry_out)}
+@defbuiltin{{unsigned int} __builtin_addc (unsigned int @var{a}, unsigned int @var{b}, unsigned int @var{carry_in}, unsigned int *@var{carry_out})}
+@defbuiltinx{{unsigned long int} __builtin_addcl (unsigned long int @var{a}, unsigned long int @var{b}, unsigned int @var{carry_in}, unsigned long int *@var{carry_out})}
+@defbuiltinx{{unsigned long long int} __builtin_addcll (unsigned long long int @var{a}, unsigned long long int @var{b}, unsigned long long int @var{carry_in}, unsigned long long int *@var{carry_out})}
 These built-in functions are equivalent to:
 @smallexample
@@ -13259,9 +13259,9 @@ emitted if one of them (preferrably the
 
 @enddefbuiltin
 
-@defbuiltin{{unsigned int} __builtin_subc (unsigned int a, unsigned int b, unsigned int carry_in, unsigned int *carry_out)}
-@defbuiltinx{{unsigned long int} __builtin_subcl (unsigned long int a, unsigned long int b, unsigned int carry_in, unsigned long int *carry_out)}
-@defbuiltinx{{unsigned long long int} __builtin_subcll (unsigned long long int a, unsigned long long int b, unsigned long long int carry_in, unsigned long long int *carry_out)}
+@defbuiltin{{unsigned int} __builtin_subc (unsigned int @var{a}, unsigned int @var{b}, unsigned int @var{carry_in}, unsigned int *@var{carry_out})}
+@defbuiltinx{{unsigned long int} __builtin_subcl (unsigned long int @var{a}, unsigned long int @var{b}, unsigned int @var{carry_in}, unsigned long int *@var{carry_out})}
+@defbuiltinx{{unsigned long long int} __builtin_subcll (unsigned long long int @var{a}, unsigned long long int @var{b}, unsigned long long int @var{carry_in}, unsigned long long int *@var{carry_out})}
 These built-in functions are equivalent to:
 @smallexample
@@ -14043,7 +14043,7 @@ for all target libcs, but in all cases t
 calls.  These built-in functions appear both with and without the
 @code{__builtin_} prefix.
 
-@defbuiltin{{void *} __builtin_alloca (size_t size)}
+@defbuiltin{{void *} __builtin_alloca (size_t @var{size})}
 The @code{__builtin_alloca} function must be called at block scope.
 The function allocates an object @var{size} bytes large on the stack
 of the calling function.  The object is aligned on the default stack
@@ -14083,7 +14083,7 @@ where GCC provides them as an extension.
 
 @enddefbuiltin
 
-@defbuiltin{{void *} __builtin_alloca_with_align (size_t size, size_t alignment)}
+@defbuiltin{{void *} __builtin_alloca_with_align (size_t @var{size}, size_t @var{alignment})}
 The @code{__builtin_alloca_with_align} function must be called at block
 scope.  The function allocates an object @var{size} bytes large on the
 stack of the calling function.  The allocated object is aligned on
@@ -14130,7 +14130,7 @@ an extension.  @xref{Variable Length}, f
 
 @enddefbuiltin
 
-@defbuiltin{{void *} __builtin_alloca_with_align_and_max (size_t size, size_t alignment, size_t max_size)}
+@defbuiltin{{void *} __builtin_alloca_with_align_and_max (size_t @var{size}, size_t @var{alignment}, size_t @var{max_size})}
 Similar to @code{__builtin_alloca_with_align} but takes an extra argument
 specifying an upper bound for @var{size} in case its value cannot be
 computed at compile time, for use by @option{-fstack-usage},
 @option{-Wstack-usage}
@@ -14183,7 +14183,7 @@ recognized in such contexts.
 
 @enddefbuiltin
 
-@defbuiltin{@var{type} __builtin_speculation_safe_value (@var{type} val, @var{type} failval)}
+@defbuiltin{@var{type} __builtin_speculation_safe_value (@var{type} @var{val}, @var{type} @var{failval})}
 
 This built-in function can be used to help mitigate against unsafe
 speculative execution.  @var{type} may be any integral type or any
@@ -14915,7 +14915,7 @@ argument.  GCC treats this parameter as
 does not do default promotion from float to double.
 @enddefbuiltin
 
-@defbuiltin{double __builtin_nan (const char *str)}
+@defbuiltin{double __builtin_nan (const char *@var{str})}
 
 This is an implementation of the ISO C99 function @code{nan}.
 
 Since ISO C99 defines this function in terms of @code{strtod}, which we
@@ -14932,68 +14932,68 @@ consumed by @code{strtol}, is evaluated
 compile-time constant.
 @enddefbuiltin
 
-@defbuiltin{_Decimal32 __builtin_nand32 (const char *str)}
+@defbuiltin{_Decimal32 __builtin_nand32 (const char *@var{str})}
 Similar to @code{__builtin_nan}, except the return type is @code{_Decimal32}.
 @enddefbuiltin
 
-@defbuiltin{_Decimal64 __builtin_nand64 (const char *str)}
+@defbuiltin{_Decimal64 __builtin_nand64 (const char *@var{str})}
 Similar to @code{__builtin_nan}, except the return type is @code{_Decimal64}.
 @enddefbuiltin

-@defbuiltin{_Decimal128 __builtin_nand128 (const char *str)}
+@defbuiltin{_Decimal128 __builtin_nand128 (const char *@var{str})}
 Similar to @code{__builtin_nan}, except the return type is @code{_Decimal128}.
 @enddefbuiltin

-@defbuiltin{float __builtin_nanf (const char *str)}
+@defbuiltin{float __builtin_nanf (const char *@var{str})}
 Similar to @code{__builtin_nan}, except the return type is @code{float}.
 @enddefbuiltin

-@defbuiltin{{long double} __builtin_nanl (const char *str)}
+@defbuiltin{{long double} __builtin_nanl (const char *@var{str})}
 Similar to @code{__builtin_nan}, except the return type is @code{long double}.
 @enddefbuiltin

-@defbuiltin{_Float@var{n} __builtin_nanf@var{n} (const char *str)}
+@defbuiltin{_Float@var{n} __builtin_nanf@var{n} (const char *@var{str})}
 Similar to @code{__builtin_nan}, except the return type is @code{_Float@var{n}}.
 @enddefbuiltin

-@defbuiltin{_Float@var{n}x __builtin_nanf@var{n}x (const char *str)}
+@defbuiltin{_Float@var{n}x __builtin_nanf@var{n}x (const char *@var{str})}
 Similar to @code{__builtin_nan}, except the return type is @code{_Float@var{n}x}.
 @enddefbuiltin

-@defbuiltin{double __builtin_nans (const char *str)}
+@defbuiltin{double __builtin_nans (const char *@var{str})}
 Similar to @code{__builtin_nan}, except the significand is forced
 to be a signaling NaN@.  The @code{nans} function is proposed by
 @uref{https://www.open-std.org/jtc1/sc22/wg14/www/docs/n965.htm,,WG14 N965}.
 @enddefbuiltin

-@defbuiltin{_Decimal32 __builtin_nansd32 (const char *str)}
+@defbuiltin{_Decimal32 __builtin_nansd32 (const char *@var{str})}
 Similar to @code{__builtin_nans}, except the return type is @code{_Decimal32}.
 @enddefbuiltin

-@defbuiltin{_Decimal64 __builtin_nansd64 (const char *str)}
+@defbuiltin{_Decimal64 __builtin_nansd64 (const char *@var{str})}
 Similar to @code{__builtin_nans}, except the return type is @code{_Decimal64}.
 @enddefbuiltin

-@defbuiltin{_Decimal128 __builtin_nansd128 (const char *str)}
+@defbuiltin{_Decimal128 __builtin_nansd128 (const char *@var{str})}
 Similar to @code{__builtin_nans}, except the return type is @code{_Decimal128}.
 @enddefbuiltin

-@defbuiltin{float __builtin_nansf (const char *str)}
+@defbuiltin{float __builtin_nansf (const char *@var{str})}
 Similar to @code{__builtin_nans}, except the return type is @code{float}.
 @enddefbuiltin

-@defbuiltin{{long double} __builtin_nansl (const char *str)}
+@defbuiltin{{long double} __builtin_nansl (const char *@var{str})}
 Similar to @code{__builtin_nans}, except the return type is @code{long double}.
 @enddefbuiltin

-@defbuiltin{_Float@var{n} __builtin_nansf@var{n} (const char *str)}
+@defbuiltin{_Float@var{n} __builtin_nansf@var{n} (const char *@var{str})}
 Similar to @code{__builtin_nans}, except the return type is @code{_Float@var{n}}.
 @enddefbuiltin

-@defbuiltin{_Float@var{n}x __builtin_nansf@var{n}x (const char *str)}
+@defbuiltin{_Float@var{n}x __builtin_nansf@var{n}x (const char *@var{str})}
 Similar to @code{__builtin_nans}, except the return type is @code{_Float@var{n}x}.
 @enddefbuiltin
@@ -15012,32 +15012,32 @@ With @code{-ffinite-math-only} option th
 return 0.
 @enddefbuiltin

-@defbuiltin{int __builtin_ffs (int x)}
+@defbuiltin{int __builtin_ffs (int @var{x})}
 Returns one plus the index of the least significant 1-bit of @var{x}, or
 if @var{x} is zero, returns zero.
 @enddefbuiltin

-@defbuiltin{int __builtin_clz (unsigned int x)}
+@defbuiltin{int __builtin_clz (unsigned int @var{x})}
 Returns the number of leading 0-bits in @var{x}, starting at the most
 significant bit position.  If @var{x} is 0, the result is undefined.
 @enddefbuiltin

-@defbuiltin{int __builtin_ctz (unsigned int x)}
+@defbuiltin{int __builtin_ctz (unsigned int @var{x})}
 Returns the number of trailing 0-bits in @var{x}, starting at the least
 significant bit position.  If @var{x} is 0, the result is undefined.
 @enddefbuiltin

-@defbuiltin{int __builtin_clrsb (int x)}
+@defbuiltin{int __builtin_clrsb (int @var{x})}
 Returns the number of leading redundant sign bits in @var{x}, i.e.@: the
 number of bits following the most significant bit that are identical
 to it.  There are no special cases for 0 or other values.
 @enddefbuiltin

-@defbuiltin{int __builtin_popcount (unsigned int x)}
+@defbuiltin{int __builtin_popcount (unsigned int @var{x})}
 Returns the number of 1-bits in @var{x}.
 @enddefbuiltin

-@defbuiltin{int __builtin_parity (unsigned int x)}
+@defbuiltin{int __builtin_parity (unsigned int @var{x})}
 Returns the parity of @var{x}, i.e.@: the number of 1-bits in @var{x}
 modulo 2.
 @enddefbuiltin
@@ -15270,29 +15270,29 @@ Returns the first argument raised to the
 @code{pow} function no guarantees about precision and rounding are made.
 @enddefbuiltin

-@defbuiltin{uint16_t __builtin_bswap16 (uint16_t x)}
+@defbuiltin{uint16_t __builtin_bswap16 (uint16_t @var{x})}
 Returns @var{x} with the order of the bytes reversed; for example,
 @code{0xaabb} becomes @code{0xbbaa}.  Byte here always means
 exactly 8 bits.
 @enddefbuiltin

-@defbuiltin{uint32_t __builtin_bswap32 (uint32_t x)}
+@defbuiltin{uint32_t __builtin_bswap32 (uint32_t @var{x})}
 Similar to @code{__builtin_bswap16}, except the argument and return types
 are 32-bit.
 @enddefbuiltin

-@defbuiltin{uint64_t __builtin_bswap64 (uint64_t x)}
+@defbuiltin{uint64_t __builtin_bswap64 (uint64_t @var{x})}
 Similar to @code{__builtin_bswap32}, except the argument and return types
 are 64-bit.
 @enddefbuiltin

-@defbuiltin{uint128_t __builtin_bswap128 (uint128_t x)}
+@defbuiltin{uint128_t __builtin_bswap128 (uint128_t @var{x})}
 Similar to @code{__builtin_bswap64}, except the argument and return types
 are 128-bit.  Only supported on targets when 128-bit types are supported.
 @enddefbuiltin

-@defbuiltin{Pmode __builtin_extend_pointer (void * x)}
+@defbuiltin{Pmode __builtin_extend_pointer (void * @var{x})}
 On targets where the user visible pointer size is smaller than the size
 of an actual hardware address this function returns the extended user
 pointer.  Targets where this is true included ILP32 mode on x86_64 or
@@ -15300,12 +15300,12 @@ Aarch64.  This function is mainly useful
 code.
 @enddefbuiltin

-@defbuiltin{int __builtin_goacc_parlevel_id (int x)}
+@defbuiltin{int __builtin_goacc_parlevel_id (int @var{x})}
 Returns the openacc gang, worker or vector id depending on whether @var{x} is
 0, 1 or 2.
 @enddefbuiltin

-@defbuiltin{int __builtin_goacc_parlevel_size (int x)}
+@defbuiltin{int __builtin_goacc_parlevel_size (int @var{x})}
 Returns the openacc gang, worker or vector size depending on whether @var{x} is
 0, 1 or 2.
 @enddefbuiltin
@@ -20911,9 +20911,9 @@ implemented by the @code{vctzdm} instruc

 @smallexample
 @exdent vector signed char
-@exdent vec_clrl (vector signed char a, unsigned int n);
+@exdent vec_clrl (vector signed char @var{a}, unsigned int @var{n});
 @exdent vector unsigned char
-@exdent vec_clrl (vector unsigned char a, unsigned int n);
+@exdent vec_clrl (vector unsigned char @var{a}, unsigned int @var{n});
 @end smallexample
 Clear the left-most @code{(16 - n)} bytes of vector argument @code{a}, as if
 implemented by the @code{vclrlb} instruction on a big-endian target
@@ -20923,9 +20923,9 @@ value of @code{n} that is greater than 1

 @smallexample
 @exdent vector signed char
-@exdent vec_clrr (vector signed char a, unsigned int n);
+@exdent vec_clrr (vector signed char @var{a}, unsigned int @var{n});
 @exdent vector unsigned char
-@exdent vec_clrr (vector unsigned char a, unsigned int n);
+@exdent vec_clrr (vector unsigned char @var{a}, unsigned int @var{n});
 @end smallexample
 Clear the right-most @code{(16 - n)} bytes of vector argument @code{a}, as if
 implemented by the @code{vclrrb} instruction on a big-endian target
@@ -21379,9 +21379,9 @@ Vector Integer Multiply/Divide/Modulo

 @smallexample
 @exdent vector signed int
-@exdent vec_mulh (vector signed int a, vector signed int b);
+@exdent vec_mulh (vector signed int @var{a}, vector signed int @var{b});
 @exdent vector unsigned int
-@exdent vec_mulh (vector unsigned int a, vector unsigned int b);
+@exdent vec_mulh (vector unsigned int @var{a}, vector unsigned int @var{b});
 @end smallexample

 For each integer value @code{i} from 0 to 3, do the following.  The integer
@@ -21391,9 +21391,9 @@ into word element @code{i} of the vector

 @smallexample
 @exdent vector signed long long
-@exdent vec_mulh (vector signed long long a, vector signed long long b);
+@exdent vec_mulh (vector signed long long @var{a}, vector signed long long @var{b});
 @exdent vector unsigned long long
-@exdent vec_mulh (vector unsigned long long a, vector unsigned long long b);
+@exdent vec_mulh (vector unsigned long long @var{a}, vector unsigned long long @var{b});
 @end smallexample

 For each integer value @code{i} from 0 to 1, do the following.  The integer
@@ -21403,9 +21403,9 @@ are placed into doubleword element @code

 @smallexample
 @exdent vector unsigned long long
-@exdent vec_mul (vector unsigned long long a, vector unsigned long long b);
+@exdent vec_mul (vector unsigned long long @var{a}, vector unsigned long long @var{b});
 @exdent vector signed long long
-@exdent vec_mul (vector signed long long a, vector signed long long b);
+@exdent vec_mul (vector signed long long @var{a}, vector signed long long @var{b});
 @end smallexample

 For each integer value @code{i} from 0 to 1, do the following.  The integer
@@ -21415,9 +21415,9 @@ are placed into doubleword element @code

 @smallexample
 @exdent vector signed int
-@exdent vec_div (vector signed int a, vector signed int b);
+@exdent vec_div (vector signed int @var{a}, vector signed int @var{b});
 @exdent vector unsigned int
-@exdent vec_div (vector unsigned int a, vector unsigned int b);
+@exdent vec_div (vector unsigned int @var{a}, vector unsigned int @var{b});
 @end smallexample

 For each integer value @code{i} from 0 to 3, do the following.  The integer in
@@ -21428,9 +21428,9 @@ the vector returned.  If an attempt is ma

 @smallexample
 @exdent vector signed long long
-@exdent vec_div (vector signed long long a, vector signed long long b);
+@exdent vec_div (vector signed long long @var{a}, vector signed long long @var{b});
 @exdent vector unsigned long long
-@exdent vec_div (vector unsigned long long a, vector unsigned long long b);
+@exdent vec_div (vector unsigned long long @var{a}, vector unsigned long long @var{b});
 @end smallexample

 For each integer value @code{i} from 0 to 1, do the following.  The integer in
@@ -21442,9 +21442,9 @@ the quotient is undefined.

 @smallexample
 @exdent vector signed int
-@exdent vec_dive (vector signed int a, vector signed int b);
+@exdent vec_dive (vector signed int @var{a}, vector signed int @var{b});
 @exdent vector unsigned int
-@exdent vec_dive (vector unsigned int a, vector unsigned int b);
+@exdent vec_dive (vector unsigned int @var{a}, vector unsigned int @var{b});
 @end smallexample

 For each integer value @code{i} from 0 to 3, do the following.  The integer in
@@ -21456,9 +21456,9 @@ divisions ÷ 0 then the quoti

 @smallexample
 @exdent vector signed long long
-@exdent vec_dive (vector signed long long a, vector signed long long b);
+@exdent vec_dive (vector signed long long @var{a}, vector signed long long @var{b});
 @exdent vector unsigned long long
-@exdent vec_dive (vector unsigned long long a, vector unsigned long long b);
+@exdent vec_dive (vector unsigned long long @var{a}, vector unsigned long long @var{b});
 @end smallexample

 For each integer value @code{i} from 0 to 1, do the following.  The integer in
@@ -21470,9 +21470,9 @@ quotient cannot be represented in 64 bit

 @smallexample
 @exdent vector signed int
-@exdent vec_mod (vector signed int a, vector signed int b);
+@exdent vec_mod (vector signed int @var{a}, vector signed int @var{b});
 @exdent vector unsigned int
-@exdent vec_mod (vector unsigned int a, vector unsigned int b);
+@exdent vec_mod (vector unsigned int @var{a}, vector unsigned int @var{b});
 @end smallexample

 For each integer value @code{i} from 0 to 3, do the following.  The integer in
@@ -21483,9 +21483,9 @@ the vector returned.  If an attempt is m

 @smallexample
 @exdent vector signed long long
-@exdent vec_mod (vector signed long long a, vector signed long long b);
+@exdent vec_mod (vector signed long long @var{a}, vector signed long long @var{b});
 @exdent vector unsigned long long
-@exdent vec_mod (vector unsigned long long a, vector unsigned long long b);
+@exdent vec_mod (vector unsigned long long @var{a}, vector unsigned long long @var{b});
 @end smallexample

 For each integer value @code{i} from 0 to 1, do the following.  The integer in
@@ -21500,14 +21500,14 @@ immediate value is either 0, 1, 2 or 3.
 @findex vec_genpcvm

 @smallexample
-@exdent vector unsigned __int128 vec_rl (vector unsigned __int128 A,
-	vector unsigned __int128 B);
-@exdent vector signed __int128 vec_rl (vector signed __int128 A,
-	vector unsigned __int128 B);
+@exdent vector unsigned __int128 vec_rl (vector unsigned __int128 @var{A},
+	vector unsigned __int128 @var{B});
+@exdent vector signed __int128 vec_rl (vector signed __int128 @var{A},
+	vector unsigned __int128 @var{B});
 @end smallexample

-Result value: Each element of R is obtained by rotating the corresponding element
-of A left by the number of bits specified by the corresponding element of B.
+Result value: Each element of @var{R} is obtained by rotating the corresponding element
+of @var{A} left by the number of bits specified by the corresponding element of @var{B}.

 @smallexample
@@ -21541,28 +21541,28 @@ input.  The shift is obtained from the t
 [125:131] where all bits counted from zero at the left.

 @smallexample
-@exdent vector unsigned __int128 vec_sl(vector unsigned __int128 A, vector unsigned __int128 B);
-@exdent vector signed __int128 vec_sl(vector signed __int128 A, vector unsigned __int128 B);
+@exdent vector unsigned __int128 vec_sl(vector unsigned __int128 @var{A}, vector unsigned __int128 @var{B});
+@exdent vector signed __int128 vec_sl(vector signed __int128 @var{A}, vector unsigned __int128 @var{B});
 @end smallexample

-Result value: Each element of R is obtained by shifting the corresponding element of
-A left by the number of bits specified by the corresponding element of B.
+Result value: Each element of @var{R} is obtained by shifting the corresponding element of
+@var{A} left by the number of bits specified by the corresponding element of @var{B}.
 @smallexample
-@exdent vector unsigned __int128 vec_sr(vector unsigned __int128 A, vector unsigned __int128 B);
-@exdent vector signed __int128 vec_sr(vector signed __int128 A, vector unsigned __int128 B);
+@exdent vector unsigned __int128 vec_sr(vector unsigned __int128 @var{A}, vector unsigned __int128 @var{B});
+@exdent vector signed __int128 vec_sr(vector signed __int128 @var{A}, vector unsigned __int128 @var{B});
 @end smallexample

-Result value: Each element of R is obtained by shifting the corresponding element of
-A right by the number of bits specified by the corresponding element of B.
+Result value: Each element of @var{R} is obtained by shifting the corresponding element of
+@var{A} right by the number of bits specified by the corresponding element of @var{B}.

 @smallexample
-@exdent vector unsigned __int128 vec_sra(vector unsigned __int128 A, vector unsigned __int128 B);
-@exdent vector signed __int128 vec_sra(vector signed __int128 A, vector unsigned __int128 B);
+@exdent vector unsigned __int128 vec_sra(vector unsigned __int128 @var{A}, vector unsigned __int128 @var{B});
+@exdent vector signed __int128 vec_sra(vector signed __int128 @var{A}, vector unsigned __int128 @var{B});
 @end smallexample

-Result value: Each element of R is obtained by arithmetic shifting the corresponding
-element of A right by the number of bits specified by the corresponding element of B.
+Result value: Each element of @var{R} is obtained by arithmetic shifting the corresponding
+element of @var{A} right by the number of bits specified by the corresponding element of @var{B}.

 @smallexample
 @exdent vector unsigned __int128 vec_mule (vector unsigned long long,
@@ -22341,7 +22341,7 @@ Generates the @code{mvtaclo} machine ins
 32 bits of the accumulator.
 @enddefbuiltin

-@defbuiltin{void __builtin_rx_mvtc (int reg, int val)}
+@defbuiltin{void __builtin_rx_mvtc (int @var{reg}, int @var{val})}
 Generates the @code{mvtc} machine instruction which sets control register
 number @code{reg} to @code{val}.
 @enddefbuiltin