From patchwork Mon Sep 25 23:08:28 2023
X-Patchwork-Submitter: Beau Belgrave <beaub@linux.microsoft.com>
X-Patchwork-Id: 144672
From: Beau Belgrave <beaub@linux.microsoft.com>
To: rostedt@goodmis.org, mhiramat@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    cleger@rivosinc.com, linux-kselftest@vger.kernel.org
Subject: [PATCH 1/2] tracing/user_events: Align set_bit() address for all archs
Date: Mon, 25 Sep 2023 23:08:28 +0000
Message-Id: <20230925230829.341-2-beaub@linux.microsoft.com>
In-Reply-To: <20230925230829.341-1-beaub@linux.microsoft.com>
References: <20230925230829.341-1-beaub@linux.microsoft.com>

All architectures should use a long aligned address passed to set_bit().
On a 64-bit kernel, user processes can pass either a 32-bit or 64-bit
sized value to be updated when tracing is enabled.
Both cases are ensured to be naturally aligned, however, that is not
enough. The address must be long aligned without affecting checks on the
value within the user process, which requires different bit adjustments
for little and big endian CPUs.

Add a compat flag to user_event_enabler that indicates when a 32-bit
value is being used on a 64-bit kernel. Long align addresses and correct
the bit to be used by set_bit() to account for this alignment. Ensure
compat flags are copied during forks and used during deletion clears.

Fixes: 7235759084a4 ("tracing/user_events: Use remote writes for event enablement")
Link: https://lore.kernel.org/linux-trace-kernel/20230914131102.179100-1-cleger@rivosinc.com/
Reported-by: Clément Léger <cleger@rivosinc.com>
Suggested-by: Clément Léger <cleger@rivosinc.com>
Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/trace_events_user.c | 58 ++++++++++++++++++++++++++++----
 1 file changed, 51 insertions(+), 7 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index 6f046650e527..b87f41187c6a 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -127,8 +127,13 @@ struct user_event_enabler {
 /* Bit 7 is for freeing status of enablement */
 #define ENABLE_VAL_FREEING_BIT 7
 
-/* Only duplicate the bit value */
-#define ENABLE_VAL_DUP_MASK ENABLE_VAL_BIT_MASK
+/* Bit 8 is for marking 32-bit on 64-bit */
+#define ENABLE_VAL_32_ON_64_BIT 8
+
+#define ENABLE_VAL_COMPAT_MASK (1 << ENABLE_VAL_32_ON_64_BIT)
+
+/* Only duplicate the bit and compat values */
+#define ENABLE_VAL_DUP_MASK (ENABLE_VAL_BIT_MASK | ENABLE_VAL_COMPAT_MASK)
 
 #define ENABLE_BITOPS(e) (&(e)->values)
 
@@ -174,6 +179,30 @@ struct user_event_validator {
 	int			flags;
 };
 
+static inline void align_addr_bit(unsigned long *addr, int *bit,
+				  unsigned long *flags)
+{
+	if (IS_ALIGNED(*addr, sizeof(long))) {
+#ifdef __BIG_ENDIAN
+		/* 32 bit on BE 64 bit requires a 32 bit offset when aligned. */
+		if (test_bit(ENABLE_VAL_32_ON_64_BIT, flags))
+			*bit += 32;
+#endif
+		return;
+	}
+
+	*addr = ALIGN_DOWN(*addr, sizeof(long));
+
+	/*
+	 * We only support 32 and 64 bit values. The only time we need
+	 * to align is a 32 bit value on a 64 bit kernel, which on LE
+	 * is always 32 bits, and on BE requires no change when unaligned.
+	 */
+#ifdef __LITTLE_ENDIAN
+	*bit += 32;
+#endif
+}
+
 typedef void (*user_event_func_t) (struct user_event *user, struct iov_iter *i,
 				   void *tpdata, bool *faulted);
 
@@ -482,6 +511,7 @@ static int user_event_enabler_write(struct user_event_mm *mm,
 	unsigned long *ptr;
 	struct page *page;
 	void *kaddr;
+	int bit = ENABLE_BIT(enabler);
 	int ret;
 
 	lockdep_assert_held(&event_mutex);
@@ -497,6 +527,8 @@ static int user_event_enabler_write(struct user_event_mm *mm,
 		     test_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler))))
 		return -EBUSY;
 
+	align_addr_bit(&uaddr, &bit, ENABLE_BITOPS(enabler));
+
 	ret = pin_user_pages_remote(mm->mm, uaddr, 1, FOLL_WRITE | FOLL_NOFAULT,
 				    &page, NULL);
 
@@ -515,9 +547,9 @@ static int user_event_enabler_write(struct user_event_mm *mm,
 
 	/* Update bit atomically, user tracers must be atomic as well */
 	if (enabler->event && enabler->event->status)
-		set_bit(ENABLE_BIT(enabler), ptr);
+		set_bit(bit, ptr);
 	else
-		clear_bit(ENABLE_BIT(enabler), ptr);
+		clear_bit(bit, ptr);
 
 	kunmap_local(kaddr);
 	unpin_user_pages_dirty_lock(&page, 1, true);
@@ -849,6 +881,12 @@ static struct user_event_enabler
 	enabler->event = user;
 	enabler->addr = uaddr;
 	enabler->values = reg->enable_bit;
+
+#if BITS_PER_LONG >= 64
+	if (reg->enable_size == 4)
+		set_bit(ENABLE_VAL_32_ON_64_BIT, ENABLE_BITOPS(enabler));
+#endif
+
 retry:
 	/* Prevents state changes from racing with new enablers */
 	mutex_lock(&event_mutex);
@@ -2377,7 +2415,8 @@ static long user_unreg_get(struct user_unreg __user *ureg,
 }
 
 static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
-				   unsigned long uaddr, unsigned char bit)
+				   unsigned long uaddr, unsigned char bit,
+				   unsigned long flags)
 {
 	struct user_event_enabler enabler;
 	int result;
@@ -2385,7 +2424,7 @@ static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
 
 	memset(&enabler, 0, sizeof(enabler));
 	enabler.addr = uaddr;
-	enabler.values = bit;
+	enabler.values = bit | flags;
 retry:
 	/* Prevents state changes from racing with new enablers */
 	mutex_lock(&event_mutex);
@@ -2415,6 +2454,7 @@ static long user_events_ioctl_unreg(unsigned long uarg)
 	struct user_event_mm *mm = current->user_event_mm;
 	struct user_event_enabler *enabler, *next;
 	struct user_unreg reg;
+	unsigned long flags;
 	long ret;
 
 	ret = user_unreg_get(ureg, &reg);
@@ -2425,6 +2465,7 @@ static long user_events_ioctl_unreg(unsigned long uarg)
 	if (!mm)
 		return -ENOENT;
 
+	flags = 0;
 	ret = -ENOENT;
 
 	/*
@@ -2441,6 +2482,9 @@ static long user_events_ioctl_unreg(unsigned long uarg)
 		    ENABLE_BIT(enabler) == reg.disable_bit) {
 			set_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler));
 
+			/* We must keep compat flags for the clear */
+			flags |= enabler->values & ENABLE_VAL_COMPAT_MASK;
+
 			if (!test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)))
 				user_event_enabler_destroy(enabler, true);
 
@@ -2454,7 +2498,7 @@ static long user_events_ioctl_unreg(unsigned long uarg)
 	/* Ensure bit is now cleared for user, regardless of event status */
 	if (!ret)
 		ret = user_event_mm_clear_bit(mm, reg.disable_addr,
-					      reg.disable_bit);
+					      reg.disable_bit, flags);
 
 	return ret;
 }
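
To illustrate why align_addr_bit() moves the bit when it aligns the
address down, here is a minimal standalone userspace model of the
little-endian unaligned case (a 32-bit enable value at a long-unaligned
address on a 64-bit kernel). This is an illustrative sketch only, not
part of the patch; the buffer, offsets, and variable names are made up
for the example, and the output assumes a little-endian host.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Sixteen bytes standing in for user memory; the 32-bit enable
	 * value lives at byte offset 4: naturally aligned for a u32,
	 * but not aligned to the kernel's 8-byte long. */
	unsigned char mem[16] = {0};
	const unsigned long long_size = 8;	/* sizeof(long) on a 64-bit kernel */
	unsigned long uaddr = 4;		/* offset of the u32 enable value */
	int bit = 1;				/* bit the user registered */

	/* Mirror of the LE unaligned case in align_addr_bit(): align the
	 * address down to a long boundary and move the bit up by 32 so
	 * set_bit() on the aligned long still hits the same byte. */
	unsigned long aligned = uaddr & ~(long_size - 1);
	int adj_bit = bit + 32;

	/* Equivalent of set_bit(adj_bit, long at mem + aligned). */
	uint64_t word;
	memcpy(&word, mem + aligned, sizeof(word));
	word |= (uint64_t)1 << adj_bit;
	memcpy(mem + aligned, &word, sizeof(word));

	/* What the user process observes through its 32-bit view. */
	uint32_t user_val;
	memcpy(&user_val, mem + uaddr, sizeof(user_val));

	/* Prints 0x2 on a little-endian host: bit 1 of the u32, exactly
	 * the bit the user asked for, despite the aligned-down write. */
	printf("user sees 0x%x\n", user_val);
	return 0;
}
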
From patchwork Mon Sep 25 23:08:29 2023
X-Patchwork-Submitter: Beau Belgrave <beaub@linux.microsoft.com>
X-Patchwork-Id: 144626
From: Beau Belgrave <beaub@linux.microsoft.com>
To: rostedt@goodmis.org, mhiramat@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    cleger@rivosinc.com, linux-kselftest@vger.kernel.org
Subject: [PATCH 2/2] selftests/user_events: Fix abi_test for BE archs
Date: Mon, 25 Sep 2023 23:08:29 +0000
Message-Id: <20230925230829.341-3-beaub@linux.microsoft.com>
In-Reply-To: <20230925230829.341-1-beaub@linux.microsoft.com>
References: <20230925230829.341-1-beaub@linux.microsoft.com>

The abi_test currently uses a long sized test value for enablement
checks. On LE this works fine; on BE, however, it results in inaccurate
assert checks, because a bit is set and its value is assumed to be the
same on both LE and BE.

Use an int type for 32-bit values and a long type for 64-bit values to
ensure appropriate behavior on both LE and BE.
Fixes: 60b1af8de8c1 ("tracing/user_events: Add ABI self-test")
Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 tools/testing/selftests/user_events/abi_test.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/user_events/abi_test.c b/tools/testing/selftests/user_events/abi_test.c
index 5125c42efe65..67af4c491c0c 100644
--- a/tools/testing/selftests/user_events/abi_test.c
+++ b/tools/testing/selftests/user_events/abi_test.c
@@ -46,7 +46,7 @@ static int change_event(bool enable)
 	return ret;
 }
 
-static int reg_enable(long *enable, int size, int bit)
+static int reg_enable(void *enable, int size, int bit)
 {
 	struct user_reg reg = {0};
 	int fd = open(data_file, O_RDWR);
@@ -68,7 +68,7 @@ static int reg_enable(long *enable, int size, int bit)
 	return ret;
 }
 
-static int reg_disable(long *enable, int bit)
+static int reg_disable(void *enable, int bit)
 {
 	struct user_unreg reg = {0};
 	int fd = open(data_file, O_RDWR);
@@ -89,12 +89,14 @@ static int reg_disable(long *enable, int bit)
 }
 
 FIXTURE(user) {
-	long check;
+	int check;
+	long check_long;
 };
 
 FIXTURE_SETUP(user) {
 	change_event(false);
 	self->check = 0;
+	self->check_long = 0;
 }
 
 FIXTURE_TEARDOWN(user) {
@@ -131,9 +133,9 @@ TEST_F(user, bit_sizes) {
 
 #if BITS_PER_LONG == 8
 	/* Allow 0-64 bits for 64-bit */
-	ASSERT_EQ(0, reg_enable(&self->check, sizeof(long), 63));
-	ASSERT_NE(0, reg_enable(&self->check, sizeof(long), 64));
-	ASSERT_EQ(0, reg_disable(&self->check, 63));
+	ASSERT_EQ(0, reg_enable(&self->check_long, sizeof(long), 63));
+	ASSERT_NE(0, reg_enable(&self->check_long, sizeof(long), 64));
+	ASSERT_EQ(0, reg_disable(&self->check_long, 63));
 #endif
 
 	/* Disallowed sizes (everything beside 4 and 8) */
@@ -195,7 +197,7 @@ static int clone_check(void *check)
 	for (i = 0; i < 10; ++i) {
 		usleep(100000);
 
-		if (*(long *)check)
+		if (*(int *)check)
 			return 0;
 	}
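
For context on why a long sized check value misbehaves on BE, the
following standalone sketch (not part of the patch, and assuming a
64-bit long) shows what the test observes when only the first four
bytes of an 8-byte check variable are updated, as happens when the
registration used a 4-byte enable size. The variable names are made up
for the example.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Stand-in for the old "long check" fixture member. The test
	 * hands this address to the kernel with a 4-byte enable size,
	 * so only the first four bytes are ever updated. */
	long check = 0;
	uint32_t enabled = 1u << 0;	/* a 32-bit word with bit 0 set */

	memcpy(&check, &enabled, sizeof(enabled));

	/* Little endian: the four updated bytes are the low half of the
	 * long, so this prints 0x1. Big endian: they are the high half,
	 * so the same sequence prints 0x100000000, and any check that
	 * interprets the whole long sees a different value than on LE. */
	printf("check as long: 0x%lx\n", (unsigned long)check);

	/* Reading through a 32-bit view, as the fixed test does with an
	 * int-typed check, gives 0x1 on both byte orders. */
	int as_int;
	memcpy(&as_int, &check, sizeof(as_int));
	printf("check as int : 0x%x\n", as_int);
	return 0;
}
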