Message ID | 20221101223059.460937-6-sidhartha.kumar@oracle.com |
---|---|
State | New |
Headers |
From: Sidhartha Kumar <sidhartha.kumar@oracle.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, minhquangbui99@gmail.com, aneesh.kumar@linux.ibm.com, Sidhartha Kumar <sidhartha.kumar@oracle.com>
Subject: [PATCH v2 5/9] mm/hugetlb: convert isolate_or_dissolve_huge_page to folios
Date: Tue, 1 Nov 2022 15:30:55 -0700
Message-Id: <20221101223059.460937-6-sidhartha.kumar@oracle.com>
In-Reply-To: <20221101223059.460937-1-sidhartha.kumar@oracle.com>
References: <20221101223059.460937-1-sidhartha.kumar@oracle.com> |
Series | convert hugetlb_cgroup helper functions to folios |
Commit Message
Sidhartha Kumar
Nov. 1, 2022, 10:30 p.m. UTC
Removes a call to compound_head() by using a folio when operating on the
head page of a hugetlb compound page.

Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)
Comments
> On Nov 2, 2022, at 06:30, Sidhartha Kumar <sidhartha.kumar@oracle.com> wrote:
>
> Removes a call to compound_head() by using a folio when operating on the
> head page of a hugetlb compound page.
>
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

Reviewed-by: Muchun Song <songmuchun@bytedance.com>

Thanks.
On Tue, Nov 01, 2022 at 03:30:55PM -0700, Sidhartha Kumar wrote:
> +++ b/mm/hugetlb.c
> @@ -2815,7 +2815,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
>  int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
>  {
>  	struct hstate *h;
> -	struct page *head;
> +	struct folio *folio = page_folio(page);

Is this safe? I was reviewing a different patch today, and I spotted
this. With THP, we can relatively easily hit this case:

struct page points to a page with pfn 0x40305, in a folio of order 2.
We call page_folio() on it and the resulting pointer is for the folio
with pfn 0x40304.
If we don't have our own refcount (or some other protection ...) against
freeing, the folio can now be freed and reallocated. Say it's now part
of an order-3 folio.
Our 'folio' pointer is now actually a pointer to a tail page, and we
have various assertions that a folio pointer doesn't point to a tail
page, so they trigger.

It seems to me that this ...

	/*
	 * The page might have been dissolved from under our feet, so make sure
	 * to carefully check the state under the lock.
	 * Return success when racing as if we dissolved the page ourselves.
	 */
	spin_lock_irq(&hugetlb_lock);
	if (folio_test_hugetlb(folio)) {
		h = folio_hstate(folio);
	} else {
		spin_unlock_irq(&hugetlb_lock);
		return 0;
	}

implies that we don't have our own reference on the folio, so we might
find a situation where the folio pointer we have is no longer a folio
pointer.

Maybe the page_folio() call should be moved inside the hugetlb_lock
protection? Is that enough? I don't know enough about how hugetlb
pages are split, freed & allocated to know what's going on.

But then we _drop_ the lock, and keep referring to ...

> @@ -2841,10 +2840,10 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
>  	if (hstate_is_gigantic(h))
>  		return -ENOMEM;
>
> -	if (page_count(head) && !isolate_hugetlb(head, list))
> +	if (folio_ref_count(folio) && !isolate_hugetlb(&folio->page, list))
>  		ret = 0;
> -	else if (!page_count(head))
> -		ret = alloc_and_dissolve_huge_page(h, head, list);
> +	else if (!folio_ref_count(folio))
> +		ret = alloc_and_dissolve_huge_page(h, &folio->page, list);

And I fall back to saying "I don't know enough to know if this is safe".
On 6/12/23 10:41 AM, Matthew Wilcox wrote:
> On Tue, Nov 01, 2022 at 03:30:55PM -0700, Sidhartha Kumar wrote:
>> +++ b/mm/hugetlb.c
>> @@ -2815,7 +2815,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
>>  int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
>>  {
>>  	struct hstate *h;
>> -	struct page *head;
>> +	struct folio *folio = page_folio(page);
>
> Is this safe? I was reviewing a different patch today, and I spotted
> this. With THP, we can relatively easily hit this case:
>
> struct page points to a page with pfn 0x40305, in a folio of order 2.
> We call page_folio() on it and the resulting pointer is for the folio
> with pfn 0x40304.
> If we don't have our own refcount (or some other protection ...) against
> freeing, the folio can now be freed and reallocated. Say it's now part
> of an order-3 folio.
> Our 'folio' pointer is now actually a pointer to a tail page, and we
> have various assertions that a folio pointer doesn't point to a tail
> page, so they trigger.
>
> It seems to me that this ...
>
> 	/*
> 	 * The page might have been dissolved from under our feet, so make sure
> 	 * to carefully check the state under the lock.
> 	 * Return success when racing as if we dissolved the page ourselves.
> 	 */
> 	spin_lock_irq(&hugetlb_lock);
> 	if (folio_test_hugetlb(folio)) {
> 		h = folio_hstate(folio);
> 	} else {
> 		spin_unlock_irq(&hugetlb_lock);
> 		return 0;
> 	}
>
> implies that we don't have our own reference on the folio, so we might
> find a situation where the folio pointer we have is no longer a folio
> pointer.

If the folio became free and reallocated would this be considered a
success? If the folio is no longer a hugetlb folio,
isolate_or_dissolve_huge_page() returns as if it dissolved the page
itself. Later in the call stack, within alloc_and_dissolve_hugetlb_folio()
there is

	if (!folio_test_hugetlb(old_folio)) {
		/*
		 * Freed from under us. Drop new_folio too.
		 */
		goto free_new;
	}

which would imply it is safe for the old_folio to have been dropped/freed.

> Maybe the page_folio() call should be moved inside the hugetlb_lock
> protection? Is that enough? I don't know enough about how hugetlb
> pages are split, freed & allocated to know what's going on.
>
> But then we _drop_ the lock, and keep referring to ...
>
>> @@ -2841,10 +2840,10 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
>>  	if (hstate_is_gigantic(h))
>>  		return -ENOMEM;
>>
>> -	if (page_count(head) && !isolate_hugetlb(head, list))
>> +	if (folio_ref_count(folio) && !isolate_hugetlb(&folio->page, list))
>>  		ret = 0;
>> -	else if (!page_count(head))
>> -		ret = alloc_and_dissolve_huge_page(h, head, list);
>> +	else if (!folio_ref_count(folio))
>> +		ret = alloc_and_dissolve_huge_page(h, &folio->page, list);
>
> And I fall back to saying "I don't know enough to know if this is safe".
On 06/12/23 18:41, Matthew Wilcox wrote:
> On Tue, Nov 01, 2022 at 03:30:55PM -0700, Sidhartha Kumar wrote:
> > +++ b/mm/hugetlb.c
> > @@ -2815,7 +2815,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
> > int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
> > {
> > 	struct hstate *h;
> > -	struct page *head;
> > +	struct folio *folio = page_folio(page);
>
> Is this safe? I was reviewing a different patch today, and I spotted
> this. With THP, we can relatively easily hit this case:
>
> struct page points to a page with pfn 0x40305, in a folio of order 2.
> We call page_folio() on it and the resulting pointer is for the folio
> with pfn 0x40304.
> If we don't have our own refcount (or some other protection ...) against
> freeing, the folio can now be freed and reallocated. Say it's now part
> of an order-3 folio.
> Our 'folio' pointer is now actually a pointer to a tail page, and we
> have various assertions that a folio pointer doesn't point to a tail
> page, so they trigger.
>
> It seems to me that this ...
>
> 	/*
> 	 * The page might have been dissolved from under our feet, so make sure
> 	 * to carefully check the state under the lock.
> 	 * Return success when racing as if we dissolved the page ourselves.
> 	 */
> 	spin_lock_irq(&hugetlb_lock);
> 	if (folio_test_hugetlb(folio)) {
> 		h = folio_hstate(folio);
> 	} else {
> 		spin_unlock_irq(&hugetlb_lock);
> 		return 0;
> 	}
>
> implies that we don't have our own reference on the folio, so we might
> find a situation where the folio pointer we have is no longer a folio
> pointer.

Your analysis is correct.

This is not safe because we hold no locks or references. The folio
pointer obtained via page_folio(page) may not be valid when calling
folio_test_hugetlb(folio) and later.

My bad for the Reviewed-by: :(

> Maybe the page_folio() call should be moved inside the hugetlb_lock
> protection? Is that enough? I don't know enough about how hugetlb
> pages are split, freed & allocated to know what's going on.
>
> But then we _drop_ the lock, and keep referring to ...
>
> > @@ -2841,10 +2840,10 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
> > 	if (hstate_is_gigantic(h))
> > 		return -ENOMEM;
> >
> > -	if (page_count(head) && !isolate_hugetlb(head, list))
> > +	if (folio_ref_count(folio) && !isolate_hugetlb(&folio->page, list))
> > 		ret = 0;
> > -	else if (!page_count(head))
> > -		ret = alloc_and_dissolve_huge_page(h, head, list);
> > +	else if (!folio_ref_count(folio))
> > +		ret = alloc_and_dissolve_huge_page(h, &folio->page, list);

The above was OK when using struct page instead of folio. The 'racy'
part was getting the ref count on the head page. It was OK because this
was only a check to see if we should TRY to isolate or dissolve. The
code to actually isolate or dissolve would take the appropriate locks.

I'm afraid the code is now making even more use of a potentially invalid
folio. Here is how the above now looks in v6.3:

	spin_unlock_irq(&hugetlb_lock);

	/*
	 * Fence off gigantic pages as there is a cyclic dependency between
	 * alloc_contig_range and them. Return -ENOMEM as this has the effect
	 * of bailing out right away without further retrying.
	 */
	if (hstate_is_gigantic(h))
		return -ENOMEM;

	if (folio_ref_count(folio) && isolate_hugetlb(folio, list))
		ret = 0;
	else if (!folio_ref_count(folio))
		ret = alloc_and_dissolve_hugetlb_folio(h, folio, list);

Looks like that potentially invalid folio is being passed to other
routines. Previous code would take lock and revalidate that struct page
was still a hugetlb page. We can not do the same with a folio.
On 06/12/23 16:34, Mike Kravetz wrote:
> On 06/12/23 18:41, Matthew Wilcox wrote:
> > On Tue, Nov 01, 2022 at 03:30:55PM -0700, Sidhartha Kumar wrote:
> > > +++ b/mm/hugetlb.c
> > > @@ -2815,7 +2815,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
> > > int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
> > > {
> > > 	struct hstate *h;
> > > -	struct page *head;
> > > +	struct folio *folio = page_folio(page);
> >
> > Is this safe? I was reviewing a different patch today, and I spotted
> > this. With THP, we can relatively easily hit this case:
> >
> > struct page points to a page with pfn 0x40305, in a folio of order 2.
> > We call page_folio() on it and the resulting pointer is for the folio
> > with pfn 0x40304.
> > If we don't have our own refcount (or some other protection ...) against
> > freeing, the folio can now be freed and reallocated. Say it's now part
> > of an order-3 folio.
> > Our 'folio' pointer is now actually a pointer to a tail page, and we
> > have various assertions that a folio pointer doesn't point to a tail
> > page, so they trigger.
> >
> > It seems to me that this ...
> >
> > 	/*
> > 	 * The page might have been dissolved from under our feet, so make sure
> > 	 * to carefully check the state under the lock.
> > 	 * Return success when racing as if we dissolved the page ourselves.
> > 	 */
> > 	spin_lock_irq(&hugetlb_lock);
> > 	if (folio_test_hugetlb(folio)) {
> > 		h = folio_hstate(folio);
> > 	} else {
> > 		spin_unlock_irq(&hugetlb_lock);
> > 		return 0;
> > 	}
> >
> > implies that we don't have our own reference on the folio, so we might
> > find a situation where the folio pointer we have is no longer a folio
> > pointer.
>
> Your analysis is correct.
>
> This is not safe because we hold no locks or references. The folio
> pointer obtained via page_folio(page) may not be valid when calling
> folio_test_hugetlb(folio) and later.
>
> My bad for the Reviewed-by: :(

I was looking at this more closely and need a bit of clarification.
As mentioned, your analysis is correct. However, it appears that there
is other code doing:

	folio = page_folio(page);
	...
	if (folio_test_hugetlb(folio))

without holding a folio ref or some type of lock. split_huge_pages_all()
is one such example. So, either this code has the same issue or there
are folio routines that can be called without holding a ref/lock.

The kerneldoc for folio_test_hugetlb says "Caller should have a
reference on the folio to prevent it from being turned into a tail
page.". However, is that mostly to make sure the returned value is
consistent/valid? Can it really lead to an assert if folio pointer is
changed to point to something else?

> > Maybe the page_folio() call should be moved inside the hugetlb_lock
> > protection? Is that enough? I don't know enough about how hugetlb
> > pages are split, freed & allocated to know what's going on.

Upon further thought, I think we should move the page_folio() inside
the lock just to be more correct.

> >
> > But then we _drop_ the lock, and keep referring to ...
> >
> > > @@ -2841,10 +2840,10 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
> > > 	if (hstate_is_gigantic(h))
> > > 		return -ENOMEM;
> > >
> > > -	if (page_count(head) && !isolate_hugetlb(head, list))
> > > +	if (folio_ref_count(folio) && !isolate_hugetlb(&folio->page, list))
> > > 		ret = 0;
> > > -	else if (!page_count(head))
> > > -		ret = alloc_and_dissolve_huge_page(h, head, list);
> > > +	else if (!folio_ref_count(folio))
> > > +		ret = alloc_and_dissolve_huge_page(h, &folio->page, list);
>
> The above was OK when using struct page instead of folio. The 'racy'
> part was getting the ref count on the head page. It was OK because this
> was only a check to see if we should TRY to isolate or dissolve. The
> code to actually isolate or dissolve would take the appropriate locks.

page_count() is doing 'folio_ref_count(page_folio(page));' and there
I suspect there are many places doing page_count without taking a page
ref or locking. So, it seems like this would also be safe?

> I'm afraid the code is now making even more use of a potentially invalid
> folio. Here is how the above now looks in v6.3:
>
> 	spin_unlock_irq(&hugetlb_lock);
>
> 	/*
> 	 * Fence off gigantic pages as there is a cyclic dependency between
> 	 * alloc_contig_range and them. Return -ENOMEM as this has the effect
> 	 * of bailing out right away without further retrying.
> 	 */
> 	if (hstate_is_gigantic(h))
> 		return -ENOMEM;
>
> 	if (folio_ref_count(folio) && isolate_hugetlb(folio, list))
> 		ret = 0;
> 	else if (!folio_ref_count(folio))
> 		ret = alloc_and_dissolve_hugetlb_folio(h, folio, list);
>
> Looks like that potentially invalid folio is being passed to other
> routines. Previous code would take lock and revalidate that struct page
> was still a hugetlb page. We can not do the same with a folio.

Perhaps I spoke too soon. Yes, we pass a potentially invalid folio
pointer to isolate_hugetlb() and alloc_and_dissolve_hugetlb_folio().
However, it seems the validation they perform should be sufficient.

	bool isolate_hugetlb(struct folio *folio, struct list_head *list)
	{
		bool ret = true;

		spin_lock_irq(&hugetlb_lock);
		if (!folio_test_hugetlb(folio) ||
		    !folio_test_hugetlb_migratable(folio) ||
		    !folio_try_get(folio)) {
			ret = false;
			goto unlock;

	static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
			struct folio *old_folio, struct list_head *list)
	{
	...
	retry:
		spin_lock_irq(&hugetlb_lock);
		if (!folio_test_hugetlb(old_folio)) {
			...
		} else if (folio_ref_count(old_folio)) {
			...
		} else if (!folio_test_hugetlb_freed(old_folio)) {
			...
			goto retry;
		} else {
			/*
			 * Ok, old_folio is still a genuine free hugepage.

Upon further consideration, I do not see an issue with the existing
code. If there are issues with calling folio_test_hugetlb() or
folio_ref_count() on a potentially invalid folio pointer, then we do
have issues here. However, such an issue would be more widespread as
there is more code doing the same.
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2a48feadb41c..bcc39d2613b2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2815,7 +2815,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
 {
 	struct hstate *h;
-	struct page *head;
+	struct folio *folio = page_folio(page);
 	int ret = -EBUSY;
 
 	/*
@@ -2824,9 +2824,8 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
 	 * Return success when racing as if we dissolved the page ourselves.
 	 */
 	spin_lock_irq(&hugetlb_lock);
-	if (PageHuge(page)) {
-		head = compound_head(page);
-		h = page_hstate(head);
+	if (folio_test_hugetlb(folio)) {
+		h = folio_hstate(folio);
 	} else {
 		spin_unlock_irq(&hugetlb_lock);
 		return 0;
@@ -2841,10 +2840,10 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
 	if (hstate_is_gigantic(h))
 		return -ENOMEM;
 
-	if (page_count(head) && !isolate_hugetlb(head, list))
+	if (folio_ref_count(folio) && !isolate_hugetlb(&folio->page, list))
 		ret = 0;
-	else if (!page_count(head))
-		ret = alloc_and_dissolve_huge_page(h, head, list);
+	else if (!folio_ref_count(folio))
+		ret = alloc_and_dissolve_huge_page(h, &folio->page, list);
 
 	return ret;
 }