[v5,23/23] integrity: Switch from rbtree to LSM-managed blob for integrity_iint_cache
Message ID | 20231107134012.682009-24-roberto.sassu@huaweicloud.com |
---|---|
State | New |
From | Roberto Sassu <roberto.sassu@huaweicloud.com> |
Date | Tue, 7 Nov 2023 14:40:12 +0100 |
In-Reply-To | <20231107134012.682009-1-roberto.sassu@huaweicloud.com> |
Series | security: Move IMA and EVM to the LSM infrastructure |
Commit Message
Roberto Sassu
Nov. 7, 2023, 1:40 p.m. UTC
From: Roberto Sassu <roberto.sassu@huawei.com>

Before the security field of kernel objects could be shared among LSMs with
the LSM stacking feature, IMA and EVM had to rely on an alternative storage
of inode metadata. The association between inode metadata and inode is
maintained through an rbtree.

Because of this alternative storage mechanism, there was no need to use
disjoint inode metadata, so IMA and EVM today still share them.

With the reservation mechanism offered by the LSM infrastructure, the
rbtree is no longer necessary, as each LSM can reserve a space in the
security blob for each inode. However, since IMA and EVM share the inode
metadata, they cannot directly reserve the space for it.

Instead, request from the 'integrity' LSM a space in the security blob for
the pointer to the inode metadata (integrity_iint_cache structure). The
other reason for keeping the 'integrity' LSM is to preserve the original
ordering of IMA and EVM functions, as when they were hardcoded.

Prefer reserving space for a pointer over allocating the
integrity_iint_cache structure directly, as IMA requires it only for a
subset of inodes. Always allocating it would waste memory.

Introduce two primitives for getting and setting the pointer to the
integrity_iint_cache in the security blob, respectively
integrity_inode_get_iint() and integrity_inode_set_iint(). This makes the
code more understandable, as they directly replace the rbtree operations.

Locking is not needed, as access to inode metadata is not shared: it is
per inode.

Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
Reviewed-by: Casey Schaufler <casey@schaufler-ca.com>
---
 security/integrity/iint.c      | 71 +++++-----------------------------
 security/integrity/integrity.h | 20 +++++++++-
 2 files changed, 29 insertions(+), 62 deletions(-)
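The patch body is not reproduced on this page. As a rough sketch of what the
two primitives described above amount to (assuming, as the LSM
infrastructure does, that integrity_blob_sizes.lbs_inode is rewritten into
this LSM's offset within inode->i_security during init):

	/*
	 * Sketch of the accessors described above, not the verbatim patch.
	 * The blob slot holds a pointer, allocated lazily by IMA, so that
	 * inodes outside the IMA policy cost only one pointer each.
	 */
	static inline struct integrity_iint_cache *
	integrity_inode_get_iint(const struct inode *inode)
	{
		struct integrity_iint_cache **iint_sec;

		if (unlikely(!inode->i_security))
			return NULL;

		iint_sec = inode->i_security + integrity_blob_sizes.lbs_inode;
		return *iint_sec;	/* NULL until IMA allocates metadata */
	}

	static inline void
	integrity_inode_set_iint(const struct inode *inode,
				 struct integrity_iint_cache *iint)
	{
		struct integrity_iint_cache **iint_sec;

		if (unlikely(!inode->i_security))
			return;

		iint_sec = inode->i_security + integrity_blob_sizes.lbs_inode;
		*iint_sec = iint;	/* replaces the rbtree insert/erase */
	}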
Comments
On Nov 7, 2023 Roberto Sassu <roberto.sassu@huaweicloud.com> wrote:
> [...]
>
> Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
> Reviewed-by: Casey Schaufler <casey@schaufler-ca.com>
> Reviewed-by: Mimi Zohar <zohar@linux.ibm.com>
> ---
>  security/integrity/iint.c      | 71 +++++-----------------------------
>  security/integrity/integrity.h | 20 +++++++++-
>  2 files changed, 29 insertions(+), 62 deletions(-)
>
> diff --git a/security/integrity/iint.c b/security/integrity/iint.c
> index 882fde2a2607..a5edd3c70784 100644
> --- a/security/integrity/iint.c
> +++ b/security/integrity/iint.c
> @@ -231,6 +175,10 @@ static int __init integrity_lsm_init(void)
>  	return 0;
>  }
>
> +struct lsm_blob_sizes integrity_blob_sizes __ro_after_init = {
> +	.lbs_inode = sizeof(struct integrity_iint_cache *),
> +};

I'll admit that I'm likely missing an important detail, but is there
a reason why you couldn't stash the integrity_iint_cache struct
directly in the inode's security blob instead of the pointer?  For
example:

	struct lsm_blob_sizes ... = {
		.lbs_inode = sizeof(struct integrity_iint_cache),
	};

	struct integrity_iint_cache *integrity_inode_get(inode)
	{
		if (unlikely(!inode->i_security))
			return NULL;
		return inode->i_security + integrity_blob_sizes.lbs_inode;
	}

--
paul-moore.com
On Fri, 2023-11-17 at 15:57 -0500, Paul Moore wrote:
> [...]
> I'll admit that I'm likely missing an important detail, but is there
> a reason why you couldn't stash the integrity_iint_cache struct
> directly in the inode's security blob instead of the pointer?

It would increase memory occupation. Sometimes the IMA policy
encompasses a small subset of the inodes. Allocating the full
integrity_iint_cache would be a waste of memory, I guess?

On the other hand... (did not think fully about that) if we embed the
full structure in the security blob, we already have a mutex available
to use, and we don't need to take the inode lock (?).

I'm fully convinced that we can improve the implementation
significantly. I just was really hoping to go step by step and not
accumulate improvements as a dependency for moving IMA and EVM to the
LSM infrastructure.

Thanks

Roberto
On Mon, Nov 20, 2023 at 3:16 AM Roberto Sassu
<roberto.sassu@huaweicloud.com> wrote:
> [...]
> It would increase memory occupation. Sometimes the IMA policy
> encompasses a small subset of the inodes. Allocating the full
> integrity_iint_cache would be a waste of memory, I guess?

Perhaps, but if it allows us to remove another layer of dynamic memory
I would argue that it may be worth the cost.  It's also worth
considering the size of integrity_iint_cache: while it isn't small, it
isn't exactly huge either.

> On the other hand... (did not think fully about that) if we embed the
> full structure in the security blob, we already have a mutex available
> to use, and we don't need to take the inode lock (?).

That would be excellent, getting rid of a layer of locking would be
significant.

> I'm fully convinced that we can improve the implementation
> significantly. I just was really hoping to go step by step and not
> accumulate improvements as a dependency for moving IMA and EVM to the
> LSM infrastructure.

I understand, and I agree that an iterative approach is a good idea, I
just want to make sure we keep things tidy from a user perspective,
i.e. not exposing the "integrity" LSM when it isn't required.

--
paul-moore.com
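For reference, a minimal sketch of the embedded-iint variant being discussed,
building on Paul's layout above. The hook wiring is illustrative only, not
the posted patch; it assumes integrity_iint_cache keeps its existing mutex
and inode fields, and that lbs_inode holds this LSM's blob offset after init:

	/*
	 * Hypothetical sketch: with the iint embedded in the inode security
	 * blob, its mutex can be initialized once at inode allocation, and
	 * callers could lock iint->mutex instead of taking the inode lock.
	 */
	struct lsm_blob_sizes integrity_blob_sizes __ro_after_init = {
		.lbs_inode = sizeof(struct integrity_iint_cache),
	};

	static int integrity_inode_alloc_security(struct inode *inode)
	{
		struct integrity_iint_cache *iint =
			inode->i_security + integrity_blob_sizes.lbs_inode;

		mutex_init(&iint->mutex);	/* protects flags, version, digest */
		iint->inode = inode;
		return 0;
	}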
On Mon, 2023-11-20 at 16:06 -0500, Paul Moore wrote:
> [...]
> I understand, and I agree that an iterative approach is a good idea, I
> just want to make sure we keep things tidy from a user perspective,
> i.e. not exposing the "integrity" LSM when it isn't required.

Ok, I went back to it again.

I think trying to separate integrity metadata is premature now, too
many things at the same time.

I started to think, does EVM really need integrity metadata, or can it
work without?

The fact is that CONFIG_IMA=n and CONFIG_EVM=y is allowed, so we have
the same problem now. What if we make IMA the one that manages
integrity metadata, so that we can remove the 'integrity' LSM?

So, no embedding the full structure in the security blob for now; move
integrity_inode_free() and integrity_kernel_module_request() to IMA,
and call integrity_iintcache_init() from IMA.

EVM verification of new files would fail without IMA, but it would be
the same now.

Also, evm_verifyxattr() would only work with IMA, as it assumes that
the latter creates integrity metadata and passes it as an argument.

Regarding the LSM order, I would take Casey's suggestion of introducing
LSM_ORDER_REALLY_LAST, for EVM.

Thanks

Roberto
On Wed, 2023-11-29 at 13:27 +0100, Roberto Sassu wrote:
> [...]
> Regarding the LSM order, I would take Casey's suggestion of introducing
> LSM_ORDER_REALLY_LAST, for EVM.

I attach the diff v5..v7. Tests pass with both IMA and EVM enabled. I
made minor tweaks to the tests to take into account the possibility
that IMA is disabled, and the tests pass in that case as well.

Roberto
On Wed, Nov 29, 2023 at 7:28 AM Roberto Sassu
<roberto.sassu@huaweicloud.com> wrote:
> [...]
> I think trying to separate integrity metadata is premature now, too
> many things at the same time.

I'm not bothered by the size of the patchset, it is more important
that we do The Right Thing.  I would like to hear in more detail why
you don't think this will work; I'm not interested in hearing about
how difficult it may be, I'm interested in hearing about what
challenges we need to solve to do this properly.

> I started to think, does EVM really need integrity metadata, or can it
> work without?
>
> The fact is that CONFIG_IMA=n and CONFIG_EVM=y is allowed, so we have
> the same problem now. What if we make IMA the one that manages
> integrity metadata, so that we can remove the 'integrity' LSM?

I guess we should probably revisit the basic idea of whether it even
makes sense to enable EVM without IMA?  Should we update the Kconfig
to require IMA when EVM is enabled?

> Regarding the LSM order, I would take Casey's suggestion of introducing
> LSM_ORDER_REALLY_LAST, for EVM.

Please understand that I really dislike that we have imposed ordering
constraints at the LSM layer, but I do understand the necessity (the
BPF LSM ordering upsets me the most).  I really don't want to see us
make things worse by adding yet another ordering bucket, I would
rather that we document it well and leave it alone ... basically treat
it like the BPF LSM (grrrrrr).
On 11/29/2023 6:22 PM, Paul Moore wrote:
> [...]
> I'm not bothered by the size of the patchset, it is more important
> that we do The Right Thing.  I would like to hear in more detail why
> you don't think this will work; I'm not interested in hearing about
> how difficult it may be, I'm interested in hearing about what
> challenges we need to solve to do this properly.

The right thing in my opinion is to achieve the goal with the minimal
set of changes, in the most intuitive way.

Until now, there was no solution that could achieve the primary goal
of this patch set (moving IMA and EVM to the LSM infrastructure) and,
at the same time, achieve the additional goal you set of removing the
'integrity' LSM.

If you look at the diff, the changes compared to v5, which was already
accepted by Mimi, are very straightforward. If the assumption I made
holds, that in the end the 'ima' LSM could take over the role of the
'integrity' LSM, that for me is the preferable option.

Given that the patch set is not doing any design change, but merely
moving calls and storing pointers elsewhere, that leaves us with the
option of thinking better about what to do next, including, as you
suggested, making IMA and EVM use disjoint metadata.

> I guess we should probably revisit the basic idea of whether it even
> makes sense to enable EVM without IMA?  Should we update the Kconfig
> to require IMA when EVM is enabled?

That would be up to Mimi. Also, this does not seem the main focus of
the patch set.

> Please understand that I really dislike that we have imposed ordering
> constraints at the LSM layer, but I do understand the necessity (the
> BPF LSM ordering upsets me the most).  I really don't want to see us
> make things worse by adding yet another ordering bucket, I would
> rather that we document it well and leave it alone ... basically treat
> it like the BPF LSM (grrrrrr).

Uhm, that would not be possible right away (the BPF LSM is mutable);
remember that we defined LSM_ORDER_LAST so that an LSM can always be
enabled and placed last (requested by Mimi)?

Thanks

Roberto
On 11/29/2023 10:46 AM, Roberto Sassu wrote:
> On 11/29/2023 6:22 PM, Paul Moore wrote:
> [...]
>> Please understand that I really dislike that we have imposed ordering
>> constraints at the LSM layer, but I do understand the necessity (the
>> BPF LSM ordering upsets me the most).  I really don't want to see us
>> make things worse by adding yet another ordering bucket, I would
>> rather that we document it well and leave it alone ... basically treat
>> it like the BPF LSM (grrrrrr).
>
> Uhm, that would not be possible right away (the BPF LSM is mutable);
> remember that we defined LSM_ORDER_LAST so that an LSM can always be
> enabled and placed last (requested by Mimi)?

It would be nice if the solution directly addresses the problem.
EVM needs to be after the LSMs that use xattrs, not after all LSMs.
I suggested LSM_ORDER_REALLY_LAST in part to identify the notion as
unattractive. You could add an element to lsm_info:

	u64 *follows;

which can be initialized to a list of LSM_ID values that this LSM
cannot precede. LSM_ID_CAPABILITY would be included by all other LSMs,
either implicitly or explicitly. EVM would include all the LSMs that
use xattrs in the list:

	static u64 evm_follow_list[] = {
		LSM_ID_CAPABILITY, LSM_ID_SELINUX, LSM_ID_SMACK, ...
	};
	...
	.follows = evm_follow_list;
	...
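A rough sketch of what the infrastructure side of this idea might look like.
The "follows" field does not exist in lsm_info upstream, and the helpers
lsm_is_enabled_by_id(), lsm_placed(), and the LSM_ID_UNDEF terminator are
hypothetical names used here only for illustration:

	/*
	 * Hypothetical sketch: while building the LSM initialization
	 * order, defer any LSM whose "follows" constraint is not yet
	 * satisfied and retry it on a later pass, so EVM lands only
	 * after all enabled xattr-using LSMs.
	 */
	static bool lsm_follows_satisfied(const struct lsm_info *lsm)
	{
		const u64 *id;

		if (!lsm->follows)
			return true;

		for (id = lsm->follows; *id != LSM_ID_UNDEF; id++) {
			/* an enabled, not-yet-placed prerequisite blocks us */
			if (lsm_is_enabled_by_id(*id) && !lsm_placed(*id))
				return false;
		}
		return true;
	}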
Hi all,

On 11/30/2023 1:41 AM, Casey Schaufler wrote:
> On 11/29/2023 10:46 AM, Roberto Sassu wrote:
>> On 11/29/2023 6:22 PM, Paul Moore wrote:
>>> On Wed, Nov 29, 2023 at 7:28 AM Roberto Sassu <roberto.sassu@huaweicloud.com> wrote:
>>>> On Mon, 2023-11-20 at 16:06 -0500, Paul Moore wrote:
>>>>> On Mon, Nov 20, 2023 at 3:16 AM Roberto Sassu <roberto.sassu@huaweicloud.com> wrote:
>>>>>> On Fri, 2023-11-17 at 15:57 -0500, Paul Moore wrote:
>>>>>>> On Nov 7, 2023 Roberto Sassu <roberto.sassu@huaweicloud.com> wrote:
>>>>>>>>
>>>>>>>> Before the security field of kernel objects could be shared among LSMs with the LSM stacking feature, IMA and EVM had to rely on an alternative storage of inode metadata. The association between inode metadata and inode is maintained through an rbtree.
>>>>>>>>
>>>>>>>> Because of this alternative storage mechanism, there was no need to use disjoint inode metadata, so IMA and EVM today still share them.
>>>>>>>>
>>>>>>>> With the reservation mechanism offered by the LSM infrastructure, the rbtree is no longer necessary, as each LSM could reserve a space in the security blob for each inode. However, since IMA and EVM share the inode metadata, they cannot directly reserve the space for them.
>>>>>>>>
>>>>>>>> Instead, request from the 'integrity' LSM a space in the security blob for the pointer of inode metadata (integrity_iint_cache structure). The other reason for keeping the 'integrity' LSM is to preserve the original ordering of IMA and EVM functions as when they were hardcoded.
>>>>>>>>
>>>>>>>> Prefer reserving space for a pointer to allocating the integrity_iint_cache structure directly, as IMA would require it only for a subset of inodes. Always allocating it would cause a waste of memory.
>>>>>>>>
>>>>>>>> Introduce two primitives for getting and setting the pointer of integrity_iint_cache in the security blob, respectively integrity_inode_get_iint() and integrity_inode_set_iint(). This would make the code more understandable, as they directly replace rbtree operations.
>>>>>>>>
>>>>>>>> Locking is not needed, as access to inode metadata is not shared, it is per inode.
>>>>>>>>
>>>>>>>> Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
>>>>>>>> Reviewed-by: Casey Schaufler <casey@schaufler-ca.com>
>>>>>>>> Reviewed-by: Mimi Zohar <zohar@linux.ibm.com>
>>>>>>>> ---
>>>>>>>>  security/integrity/iint.c      | 71 +++++-----------------------------
>>>>>>>>  security/integrity/integrity.h | 20 +++++++++-
>>>>>>>>  2 files changed, 29 insertions(+), 62 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/security/integrity/iint.c b/security/integrity/iint.c
>>>>>>>> index 882fde2a2607..a5edd3c70784 100644
>>>>>>>> --- a/security/integrity/iint.c
>>>>>>>> +++ b/security/integrity/iint.c
>>>>>>>> @@ -231,6 +175,10 @@ static int __init integrity_lsm_init(void)
>>>>>>>>  	return 0;
>>>>>>>>  }
>>>>>>>>
>>>>>>>> +struct lsm_blob_sizes integrity_blob_sizes __ro_after_init = {
>>>>>>>> +	.lbs_inode = sizeof(struct integrity_iint_cache *),
>>>>>>>> +};
>>>>>>>
>>>>>>> I'll admit that I'm likely missing an important detail, but is there a reason why you couldn't stash the integrity_iint_cache struct directly in the inode's security blob instead of the pointer? For example:
>>>>>>>
>>>>>>>   struct lsm_blob_sizes ... = {
>>>>>>>     .lbs_inode = sizeof(struct integrity_iint_cache),
>>>>>>>   };
>>>>>>>
>>>>>>>   struct integrity_iint_cache *integrity_inode_get(inode)
>>>>>>>   {
>>>>>>>     if (unlikely(!inode->i_security))
>>>>>>>       return NULL;
>>>>>>>     return inode->i_security + integrity_blob_sizes.lbs_inode;
>>>>>>>   }
>>>>>>
>>>>>> It would increase memory occupation. Sometimes the IMA policy encompasses a small subset of the inodes. Allocating the full integrity_iint_cache would be a waste of memory, I guess?
>>>>>
>>>>> Perhaps, but if it allows us to remove another layer of dynamic memory I would argue that it may be worth the cost. It's also worth considering the size of integrity_iint_cache; while it isn't small, it isn't exactly huge either.
>>>>>
>>>>>> On the other hand... (did not think fully about that) if we embed the full structure in the security blob, we already have a mutex available to use, and we don't need to take the inode lock (?).
>>>>>
>>>>> That would be excellent, getting rid of a layer of locking would be significant.
>>>>>
>>>>>> I'm fully convinced that we can improve the implementation significantly. I just was really hoping to go step by step and not accumulate improvements as dependencies for moving IMA and EVM to the LSM infrastructure.
>>>>>
>>>>> I understand, and I agree that an iterative approach is a good idea, I just want to make sure we keep things tidy from a user perspective, i.e. not exposing the "integrity" LSM when it isn't required.
>>>>
>>>> Ok, I went back to it again.
>>>>
>>>> I think trying to separate integrity metadata is premature now, too many things at the same time.
>>>
>>> I'm not bothered by the size of the patchset, it is more important that we do The Right Thing. I would like to hear in more detail why you don't think this will work; I'm not interested in hearing about how difficult it may be, I'm interested in hearing about what challenges we need to solve to do this properly.
>>
>> The right thing in my opinion is to achieve the goal with the minimal set of changes, in the most intuitive way.
>>
>> Until now, there was no solution that could achieve the primary goal of this patch set (moving IMA and EVM to the LSM infrastructure) and, at the same time, achieve the additional goal you set of removing the 'integrity' LSM.
>>
>> If you look at the diff, the changes compared to v5, which was already accepted by Mimi, are very straightforward. If the assumption I made holds, that in the end the 'ima' LSM could take over the role of the 'integrity' LSM, that for me is the preferable option.
>>
>> Given that the patch set is not doing any design change, but merely moving calls and storing pointers elsewhere, that leaves us with the option of thinking more about what to do next, including, like you suggested, making IMA and EVM use disjoint metadata.
>>
>>>> I started to think, does EVM really need integrity metadata or can it work without?
>>>>
>>>> The fact is that CONFIG_IMA=n and CONFIG_EVM=y is allowed, so we have the same problem now. What if we make IMA the one that manages integrity metadata, so that we can remove the 'integrity' LSM?
>>>
>>> I guess we should probably revisit the basic idea of if it even makes sense to enable EVM without IMA? Should we update the Kconfig to require IMA when EVM is enabled?
>>
>> That would be up to Mimi. Also, this does not seem the main focus of the patch set.
>>
>>>> Regarding the LSM order, I would take Casey's suggestion of introducing LSM_ORDER_REALLY_LAST, for EVM.
>>>
>>> Please understand that I really dislike that we have imposed ordering constraints at the LSM layer, but I do understand the necessity (the BPF LSM ordering upsets me the most). I really don't want to see us make things worse by adding yet another ordering bucket; I would rather that we document it well and leave it alone ... basically treat it like the BPF LSM (grrrrrr).
>>
>> Uhm, that would not be possible right away (the BPF LSM is mutable); remember that we defined LSM_ORDER_LAST so that an LSM can always be enabled and placed last (requested by Mimi)?
>
> It would be nice if the solution directly addresses the problem. EVM needs to be after the LSMs that use xattrs, not after all LSMs. I suggested LSM_ORDER_REALLY_LAST in part to identify the notion as unattractive.

Excuse me for chiming in, but do we really need the ordering in code?

FWIW, the linker guarantees that objects appear in the order they are seen during the link (unless --sort-section overrides that default, but this option is not used in the kernel). Since *.a archive files are used in kbuild, I have also verified that their use does not break the assumption; they are always created from scratch.

In short, to enforce an ordering, you can simply list the corresponding object files in that order in the Makefile. Of course, add a big fat warning comment, so people understand the order is not arbitrary.

Just my two eurocents,
Petr T
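For reference, integrity_inode_get_iint() and integrity_inode_set_iint(), the two accessors named in the quoted patch description, amount to something like the following minimal sketch. It assumes, as is the convention for LSM blob offsets, that integrity_blob_sizes.lbs_inode is rewritten at LSM initialization time to hold the offset of the integrity pointer within inode->i_security; the exact shape in the posted patch may differ slightly.

  static inline struct integrity_iint_cache *
  integrity_inode_get_iint(const struct inode *inode)
  {
          struct integrity_iint_cache **iint_sec;

          if (unlikely(!inode->i_security))
                  return NULL;

          /* lbs_inode holds this LSM's offset into the blob after init */
          iint_sec = inode->i_security + integrity_blob_sizes.lbs_inode;
          return *iint_sec;
  }

  static inline void integrity_inode_set_iint(const struct inode *inode,
                                              struct integrity_iint_cache *iint)
  {
          struct integrity_iint_cache **iint_sec;

          if (unlikely(!inode->i_security))
                  return;

          iint_sec = inode->i_security + integrity_blob_sizes.lbs_inode;
          *iint_sec = iint;
  }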
On Wed, 2023-11-29 at 19:46 +0100, Roberto Sassu wrote:
> On 11/29/2023 6:22 PM, Paul Moore wrote:
> > On Wed, Nov 29, 2023 at 7:28 AM Roberto Sassu <roberto.sassu@huaweicloud.com> wrote:
> >> ...
> >> I started to think, does EVM really need integrity metadata or can it work without?
> >>
> >> The fact is that CONFIG_IMA=n and CONFIG_EVM=y is allowed, so we have the same problem now. What if we make IMA the one that manages integrity metadata, so that we can remove the 'integrity' LSM?
> >
> > I guess we should probably revisit the basic idea of if it even makes sense to enable EVM without IMA? Should we update the Kconfig to require IMA when EVM is enabled?
>
> That would be up to Mimi. Also, this does not seem the main focus of the patch set.
First you suggested lumping IMA and EVM together, dropping EVM entirely. Now you're suggesting making EVM dependent on IMA. Please stop. EVM and IMA should remain independent of each other. The first user of EVM is IMA.

> >> Regarding the LSM order, I would take Casey's suggestion of introducing LSM_ORDER_REALLY_LAST, for EVM.
> >
> > Please understand that I really dislike that we have imposed ordering constraints at the LSM layer, but I do understand the necessity (the BPF LSM ordering upsets me the most). I really don't want to see us make things worse by adding yet another ordering bucket; I would rather that we document it well and leave it alone ... basically treat it like the BPF LSM (grrrrrr).
>
> Uhm, that would not be possible right away (the BPF LSM is mutable); remember that we defined LSM_ORDER_LAST so that an LSM can always be enabled and placed last (requested by Mimi)?

Making EVM a full-fledged LSM was contingent on two things - EVM always being enabled if configured, and being the last LSM. Using capability as a precedent for the ordering requirement, Mickaël suggested defining LSM_ORDER_LAST, which you agreed to. It sounds like you're backtracking on an agreement.

Mimi
On 11/30/2023 12:30 AM, Petr Tesarik wrote:
> Hi all,
>
> On 11/30/2023 1:41 AM, Casey Schaufler wrote:
>> ...
>> It would be nice if the solution directly addresses the problem. EVM needs to be after the LSMs that use xattrs, not after all LSMs. I suggested LSM_ORDER_REALLY_LAST in part to identify the notion as unattractive.
>
> Excuse me for chiming in, but do we really need the ordering in code?

tl;dr - Yes.

> FWIW, the linker guarantees that objects appear in the order they are seen during the link (unless --sort-section overrides that default, but this option is not used in the kernel). Since *.a archive files are used in kbuild, I have also verified that their use does not break the assumption; they are always created from scratch.
>
> In short, to enforce an ordering, you can simply list the corresponding object files in that order in the Makefile. Of course, add a big fat warning comment, so people understand the order is not arbitrary.

Not everyone builds custom kernels.

> Just my two eurocents,
> Petr T
On Wed, Nov 29, 2023 at 1:47 PM Roberto Sassu <roberto.sassu@huaweicloud.com> wrote:
> On 11/29/2023 6:22 PM, Paul Moore wrote:
> > On Wed, Nov 29, 2023 at 7:28 AM Roberto Sassu <roberto.sassu@huaweicloud.com> wrote:
> > > ...
> >
> > I'm not bothered by the size of the patchset, it is more important that we do The Right Thing. I would like to hear in more detail why you don't think this will work; I'm not interested in hearing about how difficult it may be, I'm interested in hearing about what challenges we need to solve to do this properly.
>
> The right thing in my opinion is to achieve the goal with the minimal set of changes, in the most intuitive way.

Once again, I want to stress that I don't care about the size of the change, the number of patches in a patchset, etc. While it's always nice to be able to minimize the number of changes in a patch/patchset, that is secondary to making sure we are doing the right thing over the long term. This is especially important when we are talking about things that are user visible.

> Until now, there was no solution that could achieve the primary goal of this patch set (moving IMA and EVM to the LSM infrastructure) and, at the same time, achieve the additional goal you set of removing the 'integrity' LSM.

We need to stop thinking about the "integrity" code as an LSM, it isn't an LSM. It's a vestigial implementation detail that was necessary back when there could only be one LSM active at a time and there was a desire to have IMA/EVM active in conjunction with one of the LSMs, i.e. Smack, SELinux, etc.

IMA and EVM are (or will be) LSMs, "integrity" is not. I recognize that eliminating the need for the "integrity" code is a relatively new addition to this effort, but that is only because I didn't properly understand the relationship between IMA, EVM, and the "integrity" code until recently.
The elimination of the shared "integrity" code is consistent with promoting IMA and EVM as full LSMs; if there is core functionality that cannot be split up into the IMA and/or EVM LSMs then we need to look at how to support that without exposing that implementation detail/hack to userspace. Maybe that means direct calls between IMA and EVM, maybe that means preserving some of the common integrity code hidden from userspace, maybe that means adding functionality to the LSM layer, maybe that means something else? Let's think on this to come up with something that we can all accept as a long term solution instead of just doing the quick and easy option.

> If you look at the diff, the changes compared to v5, which was already accepted by Mimi, are very straightforward. If the assumption I made holds, that in the end the 'ima' LSM could take over the role of the 'integrity' LSM, that for me is the preferable option.

I looked at it quickly, but my workflow isn't well suited for patches as attachments; inline patches (the kernel standard) are preferable.

> Given that the patch set is not doing any design change, but merely moving calls and storing pointers elsewhere, that leaves us with the option of thinking more about what to do next, including, like you suggested, making IMA and EVM use disjoint metadata.
>
> >> I started to think, does EVM really need integrity metadata or can it work without?
> >>
> >> The fact is that CONFIG_IMA=n and CONFIG_EVM=y is allowed, so we have the same problem now. What if we make IMA the one that manages integrity metadata, so that we can remove the 'integrity' LSM?
> >
> > I guess we should probably revisit the basic idea of if it even makes sense to enable EVM without IMA? Should we update the Kconfig to require IMA when EVM is enabled?
>
> That would be up to Mimi. Also, this does not seem the main focus of the patch set.

Yes, it is not part of the original main focus, but it is definitely relevant to the discussion we are having now. Once again, the most important thing to me is that we do The Right Thing for the long term maintenance of the code base; if that means scope creep, I've got no problem with that.

> >> Regarding the LSM order, I would take Casey's suggestion of introducing LSM_ORDER_REALLY_LAST, for EVM.
> >
> > Please understand that I really dislike that we have imposed ordering constraints at the LSM layer, but I do understand the necessity (the BPF LSM ordering upsets me the most). I really don't want to see us make things worse by adding yet another ordering bucket; I would rather that we document it well and leave it alone ... basically treat it like the BPF LSM (grrrrrr).
>
> Uhm, that would not be possible right away (the BPF LSM is mutable); remember that we defined LSM_ORDER_LAST so that an LSM can always be enabled and placed last (requested by Mimi)?

To be clear, I can both dislike the bpf-always-last and LSM_ORDER_LAST concepts while accepting them as necessary evils. I'm willing to tolerate LSM_ORDER_LAST, but I'm not currently willing to tolerate LSM_ORDER_REALLY_LAST; that is one step too far right now. I brought up the BPF LSM simply as an example of ordering that is not enforced by code, but rather by documentation and convention.
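For context on what "placed as last" means mechanically: LSM_ORDER_LAST is declared at registration time through DEFINE_LSM(). A hedged sketch of what an EVM registration using it could look like follows; the init function and blob structure names here are illustrative, not necessarily the ones used in the patch set.

  struct lsm_blob_sizes evm_blob_sizes __ro_after_init = {
          .lbs_inode = sizeof(struct evm_iint_cache),     /* hypothetical */
  };

  DEFINE_LSM(evm) = {
          .name = "evm",
          .order = LSM_ORDER_LAST,        /* always enabled, always last */
          .init = evm_lsm_init,           /* hypothetical init name */
          .blobs = &evm_blob_sizes,
  };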
On Thu, Nov 30, 2023 at 6:13 AM Mimi Zohar <zohar@linux.ibm.com> wrote:
> On Wed, 2023-11-29 at 19:46 +0100, Roberto Sassu wrote:
> > On 11/29/2023 6:22 PM, Paul Moore wrote:
> > > On Wed, Nov 29, 2023 at 7:28 AM Roberto Sassu wrote:
> > >> On Mon, 2023-11-20 at 16:06 -0500, Paul Moore wrote:
> > >>> On Mon, Nov 20, 2023 at 3:16 AM Roberto Sassu wrote:
> > >>>> On Fri, 2023-11-17 at 15:57 -0500, Paul Moore wrote:
> > >>>>> On Nov 7, 2023 Roberto Sassu <roberto.sassu@huaweicloud.com> wrote:

...

> First you suggested lumping IMA and EVM together, dropping EVM entirely. Now you're suggesting making EVM dependent on IMA. Please stop.

Welcome to design discussions and brainstorming, where changing opinions and unexpected suggestions are part of the process. When we are faced with difficult problems I want everyone to think creatively and not be afraid to adjust their thinking based on their changing understanding and the ongoing discussion. Asking people to stop thinking outside the status quo is not a good way to solve challenging problems.

> EVM and IMA should remain independent of each other.

A few posts back that was the goal; then Roberto mentioned EVM breakage when IMA was disabled, so I simply asked if it was worth "revisit the basic idea of if it even makes sense to enable EVM without IMA?". A bad answer to that question is what you provided above (and to be fair, we are all guilty of that at times); a good answer is to explain why IMA and EVM need to remain independent, with bonus points awarded for realistic use cases that support the assertion of independence.

> > >> Regarding the LSM order, I would take Casey's suggestion of introducing LSM_ORDER_REALLY_LAST, for EVM.
> > >
> > > Please understand that I really dislike that we have imposed ordering constraints at the LSM layer, but I do understand the necessity (the BPF LSM ordering upsets me the most). I really don't want to see us make things worse by adding yet another ordering bucket; I would rather that we document it well and leave it alone ... basically treat it like the BPF LSM (grrrrrr).
> >
> > Uhm, that would not be possible right away (the BPF LSM is mutable); remember that we defined LSM_ORDER_LAST so that an LSM can always be enabled and placed last (requested by Mimi)?
>
> Making EVM a full-fledged LSM was contingent on two things - EVM always being enabled if configured, and being the last LSM. Using capability as a precedent for the ordering requirement, Mickaël suggested defining LSM_ORDER_LAST, which you agreed to. It sounds like you're backtracking on an agreement.

I not only agreed to LSM_ORDER_LAST, I merged the code and it is currently in Linus' tree. See my last reply to Roberto; I see no reason to change that. I never would have merged that code or sent it to Linus if I didn't feel it was necessary.

I'm guessing that you misread my reply above (perhaps you missed the "another" in "... I really don't want to see us make things worse by adding yet another ordering bucket ..."), but regardless of that, I want to deal with your "backtracking" comment. Similar to my comments above about brainstorming, I don't want people to feel that they can't change their mind about something. Call it backtracking if you want (although that has a negative connotation for many), but I want people to feel free to adjust their opinions as they learn more about something or as the conversation evolves. I believe this is the primary (only?) way for us to reach consensus on challenging problems. If you are uncomfortable with new, different, and changing ideas this may not be the right place for you. I might suggest a career in politics as an alternative.
A quick note that I'm going to have some networking disruptions at home and email responses may be delayed for the next few days.
On 11/30/2023 5:15 PM, Casey Schaufler wrote:
> On 11/30/2023 12:30 AM, Petr Tesarik wrote:
>> ...
>> In short, to enforce an ordering, you can simply list the corresponding object files in that order in the Makefile. Of course, add a big fat warning comment, so people understand the order is not arbitrary.
>
> Not everyone builds custom kernels.

Sorry, I didn't understand your comment. Everyone builds the kernel, including Linux distros. What Petr was suggesting is that it does not matter how you build the kernel, the linker will place the LSMs in the order they appear in the Makefile. And for this particular case, we have:

  obj-$(CONFIG_IMA) += ima/
  obj-$(CONFIG_EVM) += evm/

In the past, I also verified that swapping these two resulted in the swapped order of LSMs. Petr confirmed that it would always happen.

Thanks

Roberto
On 11/30/2023 5:34 PM, Paul Moore wrote:
> On Wed, Nov 29, 2023 at 1:47 PM Roberto Sassu <roberto.sassu@huaweicloud.com> wrote:
>> On 11/29/2023 6:22 PM, Paul Moore wrote:
>>> ...
>>
>> The right thing in my opinion is to achieve the goal with the minimal set of changes, in the most intuitive way.
>
> Once again, I want to stress that I don't care about the size of the change, the number of patches in a patchset, etc. While it's always nice to be able to minimize the number of changes in a patch/patchset, that is secondary to making sure we are doing the right thing over the long term. This is especially important when we are talking about things that are user visible.

If we successfully remove the 'integrity' LSM, we achieve the goal. What you say is beyond the scope of this patch set, which is just moving IMA and EVM to the LSM infrastructure. Of course we can discuss nice ideas about how to improve IMA and EVM, but again, this is beyond scope.

>> Until now, there was no solution that could achieve the primary goal of this patch set (moving IMA and EVM to the LSM infrastructure) and, at the same time, achieve the additional goal you set of removing the 'integrity' LSM.
>
> We need to stop thinking about the "integrity" code as an LSM, it isn't an LSM. ... Let's think on this to come up with something that we can all accept as a long term solution instead of just doing the quick and easy option.

Sorry, once we find the proper way to interface the 'ima' and 'evm' LSMs with the LSM infrastructure, that is all we need to do. Not changing any internals gives the best guarantee that the behavior remains unchanged. And the best thing is that we are not doing a hack, we are just preserving what is there.

Sorry, again, we are not exposing to user space any interface that is going to change in the future once we refactor the integrity metadata management. So, given that, I still haven't seen any compelling reason to make the change you suggested.

>> If you look at the diff, the changes compared to v5, which was already accepted by Mimi, are very straightforward. If the assumption I made holds, that in the end the 'ima' LSM could take over the role of the 'integrity' LSM, that for me is the preferable option.
>
> I looked at it quickly, but my workflow isn't well suited for patches as attachments; inline patches (the kernel standard) are preferable.

Ok, no problem. I'll send the patches.

>>>> Regarding the LSM order, I would take Casey's suggestion of introducing LSM_ORDER_REALLY_LAST, for EVM.
>>>
>>> Please understand that I really dislike that we have imposed ordering constraints at the LSM layer ... basically treat it like the BPF LSM (grrrrrr).
>>
>> Uhm, that would not be possible right away (the BPF LSM is mutable); remember that we defined LSM_ORDER_LAST so that an LSM can always be enabled and placed last (requested by Mimi)?
>
> To be clear, I can both dislike the bpf-always-last and LSM_ORDER_LAST concepts while accepting them as necessary evils. I'm willing to tolerate LSM_ORDER_LAST, but I'm not currently willing to tolerate LSM_ORDER_REALLY_LAST; that is one step too far right now. I brought up the BPF LSM simply as an example of ordering that is not enforced by code, but rather by documentation and convention.

Given what Petr found, we don't need an additional order.

Thanks

Roberto
On 11/30/2023 1:34 PM, Roberto Sassu wrote:
> On 11/30/2023 5:15 PM, Casey Schaufler wrote:
>> On 11/30/2023 12:30 AM, Petr Tesarik wrote:
>>> ...
>>
>> Not everyone builds custom kernels.
>
> Sorry, I didn't understand your comment.

Most people run a distro-supplied kernel. If the LSM ordering were determined only at compile time you could never run a kernel that omitted an LSM.

> Everyone builds the kernel, including Linux distros. What Petr was suggesting is that it does not matter how you build the kernel, the linker will place the LSMs in the order they appear in the Makefile. And for this particular case, we have:
>
>   obj-$(CONFIG_IMA) += ima/
>   obj-$(CONFIG_EVM) += evm/
>
> In the past, I also verified that swapping these two resulted in the swapped order of LSMs. Petr confirmed that it would always happen.

LSM execution order is not based on compilation order. It is specified by CONFIG_LSM, and may be modified by the LSM_ORDER value. I don't understand why the linker is even being brought into the discussion.
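To make the two mechanisms concrete: the mutable ordering Casey refers to comes from the CONFIG_LSM string (overridable at boot with lsm=), for example a build setting such as

  CONFIG_LSM="landlock,lockdown,yama,integrity,apparmor"

(the exact list is configuration-dependent), while capability (LSM_ORDER_FIRST) and any LSM_ORDER_LAST module are placed by the infrastructure itself, regardless of where they appear in that string. The resulting order can be inspected at runtime through /sys/kernel/security/lsm.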
On 12/1/2023 12:31 AM, Casey Schaufler wrote:
> On 11/30/2023 1:34 PM, Roberto Sassu wrote:
>> ...
>
> Most people run a distro-supplied kernel. If the LSM ordering were determined only at compile time you could never run a kernel that omitted an LSM.

Ah, ok. We are talking about the LSMs with order LSM_ORDER_LAST, which are always enabled and always last.

This is the code in security.c to handle them:

  /* LSM_ORDER_LAST is always last. */
  for (lsm = __start_lsm_info; lsm < __end_lsm_info; lsm++) {
          if (lsm->order == LSM_ORDER_LAST)
                  append_ordered_lsm(lsm, " last");
  }

Those LSMs are not affected by lsm= in the kernel command line, or by the order in the kernel configuration (those apply to the mutable LSMs). In this case, clearly, what matters is how the LSMs are stored in the .lsm_info.init section. See the DEFINE_LSM() macro:

  #define DEFINE_LSM(lsm)                                 \
          static struct lsm_info __lsm_##lsm              \
                  __used __section(".lsm_info.init")      \
                  __aligned(sizeof(unsigned long))

With Petr, we started to wonder whether the order in which LSMs are placed in this section is deterministic. I empirically tried to swap the order in which IMA and EVM are compiled in the Makefile, and that led to 'evm' being placed in the LSM list before 'ima'. The question is whether this behavior is deterministic, or whether there is a case where 'evm' ends up before 'ima' despite the two being in the inverse order in the Makefile.

Petr looked at the kernel linking process, which is relevant for the order of LSMs in the .lsm_info.init section, and he found that the order in the section always corresponds to the order in the Makefile.

Thanks

Roberto
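The linker-script side of this, for reference, is what makes the section order equal to the link order; quoted from include/asm-generic/vmlinux.lds.h (from memory, worth re-checking against the exact tree being patched):

  /*
   * The .lsm_info.init entries emitted by DEFINE_LSM() are collected
   * between the two markers that security.c iterates over; KEEP()
   * retains them in the order the object files are linked.
   */
  #define LSM_TABLE()                             \
          . = ALIGN(8);                           \
          __start_lsm_info = .;                   \
          KEEP(*(.lsm_info.init))                 \
          __end_lsm_info = .;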
On 11/30/2023 3:43 PM, Roberto Sassu wrote:
> On 12/1/2023 12:31 AM, Casey Schaufler wrote:
>> ...
>
> Ah, ok. We are talking about the LSMs with order LSM_ORDER_LAST, which are always enabled and always last.
>
> ...
>
> Petr looked at the kernel linking process, which is relevant for the order of LSMs in the .lsm_info.init section, and he found that the order in the section always corresponds to the order in the Makefile.

OK, that's starting to make sense. My recollection is that there wasn't an expectation for multiple LSM_ORDER_FIRST or LSM_ORDER_LAST entries in the beginning. They were supposed to be special cases, not general features.
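For reference, the ordering buckets being discussed are the three values of enum lsm_order in include/linux/lsm_hooks.h (comments approximately as in the tree at the time):

  enum lsm_order {
          LSM_ORDER_FIRST = -1,   /* This is only for capabilities. */
          LSM_ORDER_MUTABLE = 0,
          LSM_ORDER_LAST = 1,     /* This is only for integrity. */
  };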
On Wed, Nov 29, 2023 at 07:46:43PM +0100, Roberto Sassu wrote:

Good evening, I hope the week has gone well for everyone.

> On 11/29/2023 6:22 PM, Paul Moore wrote:
> > On Wed, Nov 29, 2023 at 7:28 AM Roberto Sassu <roberto.sassu@huaweicloud.com> wrote:
> > > ...
>
> ...
>
> Given that the patch set is not doing any design change, but merely moving calls and storing pointers elsewhere, that leaves us with the option of thinking more about what to do next, including, like you suggested, making IMA and EVM use disjoint metadata.

A suggestion has been made in this thread that there needs to be broad thinking on this issue and, by extension, other tough problems. On that note, we would be interested in any thoughts regarding the notion of a long term solution for this issue being the migration of EVM to a BPF-based implementation.

There appears to be consensus that the BPF LSM will always go last; a BPF implementation would seem to address the EVM ordering issue.

In a larger context, there have been suggestions in other LSM threads that BPF is the future for doing LSMs.
Coincident with that has come some disagreement about whether or not BPF embodies sufficient functionality for this role.

The EVM codebase is reasonably modest, with a very limited footprint of hooks that it handles. A BPF implementation on this scale would seem to go a long way toward putting BPF sufficiency concerns to rest.

Thoughts/issues?

> Thanks
>
> Roberto

Have a good weekend.

As always,
Dr. Greg

The Quixote Project - Flailing at the Travails of Cybersecurity
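To make the suggestion concrete, a minimal sketch of a BPF LSM program attached to one of the hooks EVM mediates might look as follows. This is illustrative only: the argument list mirrors the in-kernel inode_setxattr hook prototype of that era, a bpftool-generated vmlinux.h is assumed, and a real EVM port would need far more than this (verifier-friendly access to xattrs, keys and crypto being the hard part, which is exactly the sufficiency question raised above).

  // SPDX-License-Identifier: GPL-2.0
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char LICENSE[] SEC("license") = "GPL";

  SEC("lsm/inode_setxattr")
  int BPF_PROG(evm_sketch_inode_setxattr, struct mnt_idmap *idmap,
               struct dentry *dentry, const char *name,
               const void *value, size_t size, int flags)
  {
          /* A real port would validate the security.* xattr state here;
           * this sketch simply allows the operation (a negative errno
           * return value would deny it). */
          return 0;
  }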
On 11/30/2023 5:05 PM, Dr. Greg wrote:
> A suggestion has been made in this thread that there needs to be broad thinking on this issue and, by extension, other tough problems. On that note, we would be interested in any thoughts regarding the notion of a long term solution for this issue being the migration of EVM to a BPF-based implementation.
>
> ...
>
> Thoughts/issues?

Converting EVM to BPF looks like a 5 to 10 year process. Creating an EVM design description to work from, building all the support functions required, then getting sufficient reviews and testing isn't going to be a walk in the park. That leaves out the issue of distribution of the EVM-BPF programs. Consider how the rush to convert kernel internals to Rust is progressing. EVM isn't huge, but it isn't trivial, either. Tetsuo had a good hard look at converting TOMOYO to BPF, and concluded that it wasn't practical. TOMOYO is considerably less complicated than EVM.
On Fri, Dec 01, 2023 at 10:54:54AM -0800, Casey Schaufler wrote:

Good evening Casey, thanks for taking the time to respond.

> On 11/30/2023 5:05 PM, Dr. Greg wrote:
>> ...
>> Thoughts/issues?
>
> Converting EVM to BPF looks like a 5 to 10 year process. Creating an EVM design description to work from, building all the support functions required, then getting sufficient reviews and testing isn't going to be a walk in the park. That leaves out the issue of distribution of the EVM-BPF programs. Consider how the rush to convert kernel internals to Rust is progressing. EVM isn't huge, but it isn't trivial, either. Tetsuo had a good hard look at converting TOMOYO to BPF, and concluded that it wasn't practical. TOMOYO is considerably less complicated than EVM.

Interesting, thanks for the reflections.

On a functional line basis, EVM is 14% of the TOMOYO codebase, not counting the IMA code. Given your observations, one would then presume around a decade of development effort to deliver a full-featured LSM, i.e. SELinux, Smack, AppArmor or TOMOYO, in BPF form.

Very useful information; we can now return the thread to what is apparently going to be the vexing implementation of:

  lsm_set_order(LSM_ORDER_FU_I_REALLY_AM_GOING_TO_BE_THE_LAST_ONE_TO_RUN);

:-)

Have a good weekend.

As always,
Dr. Greg

The Quixote Project - Flailing at the Travails of Cybersecurity
On 12/1/2023 3:53 PM, Dr. Greg wrote: > On Fri, Dec 01, 2023 at 10:54:54AM -0800, Casey Schaufler wrote: > > Good evening Casey, thanks for taking the time to respond. > >> On 11/30/2023 5:05 PM, Dr. Greg wrote: >>> A suggestion has been made in this thread that there needs to be broad >>> thinking on this issue, and by extension, other tough problems. On >>> that note, we would be interested in any thoughts regarding the notion >>> of a long term solution for this issue being the migration of EVM to a >>> BPF based implementation? >>> >>> There appears to be consensus that the BPF LSM will always go last, a >>> BPF implementation would seem to address the EVM ordering issue. >>> >>> In a larger context, there have been suggestions in other LSM threads >>> that BPF is the future for doing LSM's. Coincident with that has come >>> some disagreement about whether or not BPF embodies sufficient >>> functionality for this role. >>> >>> The EVM codebase is reasonably modest with a very limited footprint of >>> hooks that it handles. A BPF implementation on this scale would seem >>> to go a long ways in placing BPF sufficiency concerns to rest. >>> >>> Thoughts/issues? >> Converting EVM to BPF looks like a 5 to 10 year process. Creating a >> EVM design description to work from, building all the support functions >> required, then getting sufficient reviews and testing isn't going to be >> a walk in the park. That leaves out the issue of distribution of the >> EVM-BPF programs. Consider how the rush to convert kernel internals to >> Rust is progressing. EVM isn't huge, but it isn't trivial, either. Tetsuo >> had a good hard look at converting TOMOYO to BPF, and concluded that it >> wasn't practical. TOMOYO is considerably less complicated than EVM. > Interesting, thanks for the reflections. > > On a functional line basis, EVM is 14% of the TOMOYO codebase, not > counting the IMA code. For EVM to be completely converted to BPF you'll need significant, but as yet undiscovered, changes in IMA and, most likely, the LSM infrastructure. > Given your observations, one would than presume around a decade of > development effort to deliver a full featured LSM, ie. SELINUX, SMACK, > APPARMOR, TOMOYO in BPF form. That's not quite true. A new, from scratch LSM implementing something like SELinux, Smack or AppArmor would take considerably less time. Converting an existing LSM and being "bug compatible" is going to be painful. > Very useful information, we can now return the thread to what appears > is going to be the vexing implementation of: > > lsm_set_order(LSM_ORDER_FU_I_REALLY_AM_GOING_TO_BE_THE_LAST_ONE_TO_RUN); Just so. > > :-) > > Have a good weekend. > > As always, > Dr. Greg > > The Quixote Project - Flailing at the Travails of Cybersecurity >
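As a concrete reference point for the "BPF sufficiency" question above,
this is roughly what a single EVM-style hook looks like as a BPF LSM
program today. A minimal sketch only, not a proposal: the program name
and the trivial deny rule are invented for illustration, the hook
signature is the inode_setxattr one from recent (mnt_idmap-era) kernels,
and none of EVM's actual HMAC/signature verification is representable
this directly.

    /* evm_sketch.bpf.c -- illustrative only, requires CONFIG_BPF_LSM */
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    char LICENSE[] SEC("license") = "GPL";

    SEC("lsm/inode_setxattr")
    int BPF_PROG(evm_sketch_setxattr, struct mnt_idmap *idmap,
                 struct dentry *dentry, const char *name,
                 const void *value, size_t size, int flags)
    {
            char buf[16];

            if (bpf_probe_read_kernel_str(buf, sizeof(buf), name) < 0)
                    return 0;

            /* Refuse direct writes to security.evm; comparing the
             * terminating NUL too, so longer names do not match.
             * Any negative errno returned here denies the operation. */
            if (__builtin_memcmp(buf, "security.evm",
                                 sizeof("security.evm")) == 0)
                    return -1;      /* -EPERM */

            return 0;
    }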
On Thu, 2023-11-30 at 11:34 -0500, Paul Moore wrote: > On Wed, Nov 29, 2023 at 1:47 PM Roberto Sassu > <roberto.sassu@huaweicloud.com> wrote: > > On 11/29/2023 6:22 PM, Paul Moore wrote: > > > On Wed, Nov 29, 2023 at 7:28 AM Roberto Sassu > > > <roberto.sassu@huaweicloud.com> wrote: > > > > > > > > On Mon, 2023-11-20 at 16:06 -0500, Paul Moore wrote: > > > > > On Mon, Nov 20, 2023 at 3:16 AM Roberto Sassu > > > > > <roberto.sassu@huaweicloud.com> wrote: > > > > > > On Fri, 2023-11-17 at 15:57 -0500, Paul Moore wrote: > > > > > > > On Nov 7, 2023 Roberto Sassu <roberto.sassu@huaweicloud.com> wrote: > > > > > > > > > > > > > > > > Before the security field of kernel objects could be shared among LSMs with > > > > > > > > the LSM stacking feature, IMA and EVM had to rely on an alternative storage > > > > > > > > of inode metadata. The association between inode metadata and inode is > > > > > > > > maintained through an rbtree. > > > > > > > > > > > > > > > > Because of this alternative storage mechanism, there was no need to use > > > > > > > > disjoint inode metadata, so IMA and EVM today still share them. > > > > > > > > > > > > > > > > With the reservation mechanism offered by the LSM infrastructure, the > > > > > > > > rbtree is no longer necessary, as each LSM could reserve a space in the > > > > > > > > security blob for each inode. However, since IMA and EVM share the > > > > > > > > inode metadata, they cannot directly reserve the space for them. > > > > > > > > > > > > > > > > Instead, request from the 'integrity' LSM a space in the security blob for > > > > > > > > the pointer of inode metadata (integrity_iint_cache structure). The other > > > > > > > > reason for keeping the 'integrity' LSM is to preserve the original ordering > > > > > > > > of IMA and EVM functions as when they were hardcoded. > > > > > > > > > > > > > > > > Prefer reserving space for a pointer to allocating the integrity_iint_cache > > > > > > > > structure directly, as IMA would require it only for a subset of inodes. > > > > > > > > Always allocating it would cause a waste of memory. > > > > > > > > > > > > > > > > Introduce two primitives for getting and setting the pointer of > > > > > > > > integrity_iint_cache in the security blob, respectively > > > > > > > > integrity_inode_get_iint() and integrity_inode_set_iint(). This would make > > > > > > > > the code more understandable, as they directly replace rbtree operations. > > > > > > > > > > > > > > > > Locking is not needed, as access to inode metadata is not shared, it is per > > > > > > > > inode. 
> > > > > > > > > > > > > > > > Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com> > > > > > > > > Reviewed-by: Casey Schaufler <casey@schaufler-ca.com> > > > > > > > > Reviewed-by: Mimi Zohar <zohar@linux.ibm.com> > > > > > > > > --- > > > > > > > > security/integrity/iint.c | 71 +++++----------------------------- > > > > > > > > security/integrity/integrity.h | 20 +++++++++- > > > > > > > > 2 files changed, 29 insertions(+), 62 deletions(-) > > > > > > > > > > > > > > > > diff --git a/security/integrity/iint.c b/security/integrity/iint.c > > > > > > > > index 882fde2a2607..a5edd3c70784 100644 > > > > > > > > --- a/security/integrity/iint.c > > > > > > > > +++ b/security/integrity/iint.c > > > > > > > > @@ -231,6 +175,10 @@ static int __init integrity_lsm_init(void) > > > > > > > > return 0; > > > > > > > > } > > > > > > > > > > > > > > > > +struct lsm_blob_sizes integrity_blob_sizes __ro_after_init = { > > > > > > > > + .lbs_inode = sizeof(struct integrity_iint_cache *), > > > > > > > > +}; > > > > > > > > > > > > > > I'll admit that I'm likely missing an important detail, but is there > > > > > > > a reason why you couldn't stash the integrity_iint_cache struct > > > > > > > directly in the inode's security blob instead of the pointer? For > > > > > > > example: > > > > > > > > > > > > > > struct lsm_blob_sizes ... = { > > > > > > > .lbs_inode = sizeof(struct integrity_iint_cache), > > > > > > > }; > > > > > > > > > > > > > > struct integrity_iint_cache *integrity_inode_get(inode) > > > > > > > { > > > > > > > if (unlikely(!inode->isecurity)) > > > > > > > return NULL; > > > > > > > return inode->i_security + integrity_blob_sizes.lbs_inode; > > > > > > > } > > > > > > > > > > > > It would increase memory occupation. Sometimes the IMA policy > > > > > > encompasses a small subset of the inodes. Allocating the full > > > > > > integrity_iint_cache would be a waste of memory, I guess? > > > > > > > > > > Perhaps, but if it allows us to remove another layer of dynamic memory > > > > > I would argue that it may be worth the cost. It's also worth > > > > > considering the size of integrity_iint_cache, while it isn't small, it > > > > > isn't exactly huge either. > > > > > > > > > > > On the other hand... (did not think fully about that) if we embed the > > > > > > full structure in the security blob, we already have a mutex available > > > > > > to use, and we don't need to take the inode lock (?). > > > > > > > > > > That would be excellent, getting rid of a layer of locking would be significant. > > > > > > > > > > > I'm fully convinced that we can improve the implementation > > > > > > significantly. I just was really hoping to go step by step and not > > > > > > accumulating improvements as dependency for moving IMA and EVM to the > > > > > > LSM infrastructure. > > > > > > > > > > I understand, and I agree that an iterative approach is a good idea, I > > > > > just want to make sure we keep things tidy from a user perspective, > > > > > i.e. not exposing the "integrity" LSM when it isn't required. > > > > > > > > Ok, I went back to it again. > > > > > > > > I think trying to separate integrity metadata is premature now, too > > > > many things at the same time. > > > > > > I'm not bothered by the size of the patchset, it is more important > > > that we do The Right Thing. 
I would like to hear in more detail why > > > you don't think this will work, I'm not interested in hearing about > > > how difficult it may be, I'm interested in hearing about what challenges > > > we need to solve to do this properly. > > > > The right thing in my opinion is to achieve the goal with the minimal > > set of changes, in the most intuitive way. > > Once again, I want to stress that I don't care about the size of the > change, the number of patches in a patchset, etc. While it's always > nice to be able to minimize the number of changes in a patch/patchset, > that is secondary to making sure we are doing the right thing over the > long term. This is especially important when we are talking about > things that are user visible. > > > Until now, there was no solution that could achieve the primary goal of > > this patch set (moving IMA and EVM to the LSM infrastructure) and, at > > the same time, achieve the additional goal you set of removing the > > 'integrity' LSM. > > We need to stop thinking about the "integrity" code as a LSM, it isn't > a LSM. It's a vestigial implementation detail that was necessary back > when there could only be one LSM active at a time and there was a > desire to have IMA/EVM active in conjunction with one of the LSMs, > i.e. Smack, SELinux, etc. > > IMA and EVM are (or will be) LSMs, "integrity" is not. I recognize > that eliminating the need for the "integrity" code is a relatively new > addition to this effort, but that is only because I didn't properly > understand the relationship between IMA, EVM, and the "integrity" code > until recently. The elimination of the shared "integrity" code is > consistent with promoting IMA and EVM as full LSMs, if there is core > functionality that cannot be split up into the IMA and/or EVM LSMs > then we need to look at how to support that without exposing that > implementation detail/hack to userspace. Maybe that means direct > calls between IMA and EVM, maybe that means preserving some of the > common integrity code hidden from userspace, maybe that means adding > functionality to the LSM layer, maybe that means something else? > Let's think on this to come up with something that we can all accept > as a long term solution instead of just doing the quick and easy > option.

If the result of this patch set should be that IMA and EVM become proper LSMs without the shared integrity layer, instead of collapsing all changes in this patch set, I think we should first verify if IMA and EVM can be really independent. Once we guarantee that, we can proceed making the proper LSMs.

These are the changes I have in mind:

1) Fix evm_verifyxattr(), and make it work without integrity_iint_cache
2) Remove the integrity_iint_cache parameter from evm_verifyxattr(), since the other callers are not going to use it
3) Create an internal function with the original parameters to be used by IMA
4) Introduce evm_post_path_mknod(), which similarly to ima_post_path_mknod(), sets IMA_NEW_FILE for new files
5) Add hardcoded call to evm_post_path_mknod() after ima_post_path_mknod() in security.c

If we think that this is good enough, we proceed with the move of IMA and EVM functions to the LSM infrastructure (patches v7 19-21).
The final patch is to remove the 'integrity' LSM and the integrity metadata management code, which now is not used anymore. Would that work? Thanks Roberto > > If you see the diff, the changes compared to v5 that was already > > accepted by Mimi are very straightforward. If the assumption I made that > > in the end the 'ima' LSM could take over the role of the 'integrity' > > LSM, that for me is the preferable option. > > I looked at it quickly, but my workflow isn't well suited for patches > as attachments; inline patches (the kernel standard) is preferable. > > > Given that the patch set is not doing any design change, but merely > > moving calls and storing pointers elsewhere, that leaves us with the > > option of thinking better what to do next, including like you suggested > > to make IMA and EVM use disjoint metadata. > > > > > > I started to think, does EVM really need integrity metadata or it can > > > > work without? > > > > > > > > The fact is that CONFIG_IMA=n and CONFIG_EVM=y is allowed, so we have > > > > the same problem now. What if we make IMA the one that manages > > > > integrity metadata, so that we can remove the 'integrity' LSM? > > > > > > I guess we should probably revisit the basic idea of if it even makes > > > sense to enable EVM without IMA? Should we update the Kconfig to > > > require IMA when EVM is enabled? > > > > That would be up to Mimi. Also this does not seem the main focus of the > > patch set. > > Yes, it is not part of the original main focus, but it is definitely > relevant to the discussion we are having now. Once again, the most > important thing to me is that we do The Right Thing for the long term > maintenance of the code base; if that means scope creep, I've got no > problem with that. > > > > > Regarding the LSM order, I would take Casey's suggestion of introducing > > > > LSM_ORDER_REALLY_LAST, for EVM. > > > > > > Please understand that I really dislike that we have imposed ordering > > > constraints at the LSM layer, but I do understand the necessity (the > > > BPF LSM ordering upsets me the most). I really don't want to see us > > > make things worse by adding yet another ordering bucket, I would > > > rather that we document it well and leave it alone ... basically treat > > > it like the BPF LSM (grrrrrr). > > > > Uhm, that would not be possible right away (the BPF LSM is mutable), > > remember that we defined LSM_ORDER_LAST so that an LSM can be always > > enable and placed as last (requested by Mimi)? > > To be clear, I can both dislike the bpf-always-last and LSM_ORDER_LAST > concepts while accepting them as necessary evils. I'm willing to > tolerate LSM_ORDER_LAST, but I'm not currently willing to tolerate > LSM_ORDER_REALLY_LAST; that is one step too far right now. I brought > up the BPF LSM simply as an example of ordering that is not enforced > by code, but rather by documentation and convention. >
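To make steps 4 and 5 of the list above concrete, the new hook would
presumably be modelled on the existing ima_post_path_mknod(). The sketch
below is an assumption about the eventual shape, not code from the
series; the signature and the IMA_NEW_FILE handling mirror the IMA
function as of roughly v6.7.

    /* Sketch: mark brand-new files so a not-yet-written security.evm
     * xattr is not treated as an integrity violation (step 4). */
    void evm_post_path_mknod(struct mnt_idmap *idmap, struct dentry *dentry)
    {
            struct inode *inode = d_backing_inode(dentry);
            struct integrity_iint_cache *iint;

            if (!S_ISREG(inode->i_mode))
                    return;

            iint = integrity_inode_get(inode);
            if (iint)
                    iint->flags |= IMA_NEW_FILE;
    }

    /* Step 5: hardcoded call in security.c, right after the IMA one. */
            ima_post_path_mknod(idmap, dentry);
            evm_post_path_mknod(idmap, dentry);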
On Mon, 2023-12-04 at 14:26 +0100, Roberto Sassu wrote: > On Thu, 2023-11-30 at 11:34 -0500, Paul Moore wrote: > > On Wed, Nov 29, 2023 at 1:47 PM Roberto Sassu > > <roberto.sassu@huaweicloud.com> wrote: > > > On 11/29/2023 6:22 PM, Paul Moore wrote: > > > > On Wed, Nov 29, 2023 at 7:28 AM Roberto Sassu > > > > <roberto.sassu@huaweicloud.com> wrote: > > > > > > > > > > On Mon, 2023-11-20 at 16:06 -0500, Paul Moore wrote: > > > > > > On Mon, Nov 20, 2023 at 3:16 AM Roberto Sassu > > > > > > <roberto.sassu@huaweicloud.com> wrote: > > > > > > > On Fri, 2023-11-17 at 15:57 -0500, Paul Moore wrote: > > > > > > > > On Nov 7, 2023 Roberto Sassu <roberto.sassu@huaweicloud.com> wrote: > > > > > > > > > > > > > > > > > > Before the security field of kernel objects could be shared among LSMs with > > > > > > > > > the LSM stacking feature, IMA and EVM had to rely on an alternative storage > > > > > > > > > of inode metadata. The association between inode metadata and inode is > > > > > > > > > maintained through an rbtree. > > > > > > > > > > > > > > > > > > Because of this alternative storage mechanism, there was no need to use > > > > > > > > > disjoint inode metadata, so IMA and EVM today still share them. > > > > > > > > > > > > > > > > > > With the reservation mechanism offered by the LSM infrastructure, the > > > > > > > > > rbtree is no longer necessary, as each LSM could reserve a space in the > > > > > > > > > security blob for each inode. However, since IMA and EVM share the > > > > > > > > > inode metadata, they cannot directly reserve the space for them. > > > > > > > > > > > > > > > > > > Instead, request from the 'integrity' LSM a space in the security blob for > > > > > > > > > the pointer of inode metadata (integrity_iint_cache structure). The other > > > > > > > > > reason for keeping the 'integrity' LSM is to preserve the original ordering > > > > > > > > > of IMA and EVM functions as when they were hardcoded. > > > > > > > > > > > > > > > > > > Prefer reserving space for a pointer to allocating the integrity_iint_cache > > > > > > > > > structure directly, as IMA would require it only for a subset of inodes. > > > > > > > > > Always allocating it would cause a waste of memory. > > > > > > > > > > > > > > > > > > Introduce two primitives for getting and setting the pointer of > > > > > > > > > integrity_iint_cache in the security blob, respectively > > > > > > > > > integrity_inode_get_iint() and integrity_inode_set_iint(). This would make > > > > > > > > > the code more understandable, as they directly replace rbtree operations. > > > > > > > > > > > > > > > > > > Locking is not needed, as access to inode metadata is not shared, it is per > > > > > > > > > inode. 
> > > > > > > > > > > > > > > > > > Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com> > > > > > > > > > Reviewed-by: Casey Schaufler <casey@schaufler-ca.com> > > > > > > > > > Reviewed-by: Mimi Zohar <zohar@linux.ibm.com> > > > > > > > > > --- > > > > > > > > > security/integrity/iint.c | 71 +++++----------------------------- > > > > > > > > > security/integrity/integrity.h | 20 +++++++++- > > > > > > > > > 2 files changed, 29 insertions(+), 62 deletions(-) > > > > > > > > > > > > > > > > > > diff --git a/security/integrity/iint.c b/security/integrity/iint.c > > > > > > > > > index 882fde2a2607..a5edd3c70784 100644 > > > > > > > > > --- a/security/integrity/iint.c > > > > > > > > > +++ b/security/integrity/iint.c > > > > > > > > > @@ -231,6 +175,10 @@ static int __init integrity_lsm_init(void) > > > > > > > > > return 0; > > > > > > > > > } > > > > > > > > > > > > > > > > > > +struct lsm_blob_sizes integrity_blob_sizes __ro_after_init = { > > > > > > > > > + .lbs_inode = sizeof(struct integrity_iint_cache *), > > > > > > > > > +}; > > > > > > > > > > > > > > > > I'll admit that I'm likely missing an important detail, but is there > > > > > > > > a reason why you couldn't stash the integrity_iint_cache struct > > > > > > > > directly in the inode's security blob instead of the pointer? For > > > > > > > > example: > > > > > > > > > > > > > > > > struct lsm_blob_sizes ... = { > > > > > > > > .lbs_inode = sizeof(struct integrity_iint_cache), > > > > > > > > }; > > > > > > > > > > > > > > > > struct integrity_iint_cache *integrity_inode_get(inode) > > > > > > > > { > > > > > > > > if (unlikely(!inode->isecurity)) > > > > > > > > return NULL; > > > > > > > > return inode->i_security + integrity_blob_sizes.lbs_inode; > > > > > > > > } > > > > > > > > > > > > > > It would increase memory occupation. Sometimes the IMA policy > > > > > > > encompasses a small subset of the inodes. Allocating the full > > > > > > > integrity_iint_cache would be a waste of memory, I guess? > > > > > > > > > > > > Perhaps, but if it allows us to remove another layer of dynamic memory > > > > > > I would argue that it may be worth the cost. It's also worth > > > > > > considering the size of integrity_iint_cache, while it isn't small, it > > > > > > isn't exactly huge either. > > > > > > > > > > > > > On the other hand... (did not think fully about that) if we embed the > > > > > > > full structure in the security blob, we already have a mutex available > > > > > > > to use, and we don't need to take the inode lock (?). > > > > > > > > > > > > That would be excellent, getting rid of a layer of locking would be significant. > > > > > > > > > > > > > I'm fully convinced that we can improve the implementation > > > > > > > significantly. I just was really hoping to go step by step and not > > > > > > > accumulating improvements as dependency for moving IMA and EVM to the > > > > > > > LSM infrastructure. > > > > > > > > > > > > I understand, and I agree that an iterative approach is a good idea, I > > > > > > just want to make sure we keep things tidy from a user perspective, > > > > > > i.e. not exposing the "integrity" LSM when it isn't required. > > > > > > > > > > Ok, I went back to it again. > > > > > > > > > > I think trying to separate integrity metadata is premature now, too > > > > > many things at the same time. > > > > > > > > I'm not bothered by the size of the patchset, it is more important > > > > that we do The Right Thing. 
I would like to hear in more detail why > > > > you don't think this will work, I'm not interested in hearing about > > > > difficult it may be, I'm interested in hearing about what challenges > > > > we need to solve to do this properly. > > > > > > The right thing in my opinion is to achieve the goal with the minimal > > > set of changes, in the most intuitive way. > > > > Once again, I want to stress that I don't care about the size of the > > change, the number of patches in a patchset, etc. While it's always > > nice to be able to minimize the number of changes in a patch/patchset, > > that is secondary to making sure we are doing the right thing over the > > long term. This is especially important when we are talking about > > things that are user visible. > > > > > Until now, there was no solution that could achieve the primary goal of > > > this patch set (moving IMA and EVM to the LSM infrastructure) and, at > > > the same time, achieve the additional goal you set of removing the > > > 'integrity' LSM. > > > > We need to stop thinking about the "integrity" code as a LSM, it isn't > > a LSM. It's a vestigial implementation detail that was necessary back > > when there could only be one LSM active at a time and there was a > > desire to have IMA/EVM active in conjunction with one of the LSMs, > > i.e. Smack, SELinux, etc. > > > > IMA and EVM are (or will be) LSMs, "integrity" is not. I recognize > > that eliminating the need for the "integrity" code is a relatively new > > addition to this effort, but that is only because I didn't properly > > understand the relationship between IMA, EVM, and the "integrity" code > > until recently. The elimination of the shared "integrity" code is > > consistent with promoting IMA and EVM as full LSMs, if there is core > > functionality that cannot be split up into the IMA and/or EVM LSMs > > then we need to look at how to support that without exposing that > > implementation detail/hack to userspace. Maybe that means direct > > calls between IMA and EVM, maybe that means preserving some of the > > common integrity code hidden from userspace, maybe that means adding > > functionality to the LSM layer, maybe that means something else? > > Let's think on this to come up with something that we can all accept > > as a long term solution instead of just doing the quick and easy > > option. > > If the result of this patch set should be that IMA and EVM become > proper LSMs without the shared integrity layer, instead of collapsing > all changes in this patch set, I think we should first verify if IMA > and EVM can be really independent. Once we guarantee that, we can > proceed making the proper LSMs. > > These are the changes I have in mind: > > 1) Fix evm_verifyxattr(), and make it work without integrity_iint_cache > 2) Remove the integrity_iint_cache parameter from evm_verifyxattr(), > since the other callers are not going to use it > 3) Create an internal function with the original parameters to be used > by IMA > 4) Introduce evm_post_path_mknod(), which similarly to > ima_post_path_mknod(), sets IMA_NEW_FILE for new files > 5) Add hardcoded call to evm_post_path_mknod() after > ima_post_path_mknod() in security.c > > If we think that this is good enough, we proceed with the move of IMA > and EVM functions to the LSM infrastructure (patches v7 19-21). 
> > The next patches are going to be similar to patches v6 22-23, but > unlike those, their goal would be simply to split metadata, not to make > IMA and EVM independent, which at this point has been addressed > separately in the prerequisite patches. > > The final patch is to remove the 'integrity' LSM and the integrity > metadata management code, which now is not used anymore. > > Would that work? Sounds good to me. Mimi
On Mon, 2023-12-04 at 14:26 +0100, Roberto Sassu wrote: > On Thu, 2023-11-30 at 11:34 -0500, Paul Moore wrote: > > On Wed, Nov 29, 2023 at 1:47 PM Roberto Sassu > > <roberto.sassu@huaweicloud.com> wrote: > > > On 11/29/2023 6:22 PM, Paul Moore wrote: > > > > On Wed, Nov 29, 2023 at 7:28 AM Roberto Sassu > > > > <roberto.sassu@huaweicloud.com> wrote: > > > > > > > > > > On Mon, 2023-11-20 at 16:06 -0500, Paul Moore wrote: > > > > > > On Mon, Nov 20, 2023 at 3:16 AM Roberto Sassu > > > > > > <roberto.sassu@huaweicloud.com> wrote: > > > > > > > On Fri, 2023-11-17 at 15:57 -0500, Paul Moore wrote: > > > > > > > > On Nov 7, 2023 Roberto Sassu <roberto.sassu@huaweicloud.com> wrote: > > > > > > > > > > > > > > > > > > Before the security field of kernel objects could be shared among LSMs with > > > > > > > > > the LSM stacking feature, IMA and EVM had to rely on an alternative storage > > > > > > > > > of inode metadata. The association between inode metadata and inode is > > > > > > > > > maintained through an rbtree. > > > > > > > > > > > > > > > > > > Because of this alternative storage mechanism, there was no need to use > > > > > > > > > disjoint inode metadata, so IMA and EVM today still share them. > > > > > > > > > > > > > > > > > > With the reservation mechanism offered by the LSM infrastructure, the > > > > > > > > > rbtree is no longer necessary, as each LSM could reserve a space in the > > > > > > > > > security blob for each inode. However, since IMA and EVM share the > > > > > > > > > inode metadata, they cannot directly reserve the space for them. > > > > > > > > > > > > > > > > > > Instead, request from the 'integrity' LSM a space in the security blob for > > > > > > > > > the pointer of inode metadata (integrity_iint_cache structure). The other > > > > > > > > > reason for keeping the 'integrity' LSM is to preserve the original ordering > > > > > > > > > of IMA and EVM functions as when they were hardcoded. > > > > > > > > > > > > > > > > > > Prefer reserving space for a pointer to allocating the integrity_iint_cache > > > > > > > > > structure directly, as IMA would require it only for a subset of inodes. > > > > > > > > > Always allocating it would cause a waste of memory. > > > > > > > > > > > > > > > > > > Introduce two primitives for getting and setting the pointer of > > > > > > > > > integrity_iint_cache in the security blob, respectively > > > > > > > > > integrity_inode_get_iint() and integrity_inode_set_iint(). This would make > > > > > > > > > the code more understandable, as they directly replace rbtree operations. > > > > > > > > > > > > > > > > > > Locking is not needed, as access to inode metadata is not shared, it is per > > > > > > > > > inode. 
> > > > > > > > > > > > > > > > > > Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com> > > > > > > > > > Reviewed-by: Casey Schaufler <casey@schaufler-ca.com> > > > > > > > > > Reviewed-by: Mimi Zohar <zohar@linux.ibm.com> > > > > > > > > > --- > > > > > > > > > security/integrity/iint.c | 71 +++++----------------------------- > > > > > > > > > security/integrity/integrity.h | 20 +++++++++- > > > > > > > > > 2 files changed, 29 insertions(+), 62 deletions(-) > > > > > > > > > > > > > > > > > > diff --git a/security/integrity/iint.c b/security/integrity/iint.c > > > > > > > > > index 882fde2a2607..a5edd3c70784 100644 > > > > > > > > > --- a/security/integrity/iint.c > > > > > > > > > +++ b/security/integrity/iint.c > > > > > > > > > @@ -231,6 +175,10 @@ static int __init integrity_lsm_init(void) > > > > > > > > > return 0; > > > > > > > > > } > > > > > > > > > > > > > > > > > > +struct lsm_blob_sizes integrity_blob_sizes __ro_after_init = { > > > > > > > > > + .lbs_inode = sizeof(struct integrity_iint_cache *), > > > > > > > > > +}; > > > > > > > > > > > > > > > > I'll admit that I'm likely missing an important detail, but is there > > > > > > > > a reason why you couldn't stash the integrity_iint_cache struct > > > > > > > > directly in the inode's security blob instead of the pointer? For > > > > > > > > example: > > > > > > > > > > > > > > > > struct lsm_blob_sizes ... = { > > > > > > > > .lbs_inode = sizeof(struct integrity_iint_cache), > > > > > > > > }; > > > > > > > > > > > > > > > > struct integrity_iint_cache *integrity_inode_get(inode) > > > > > > > > { > > > > > > > > if (unlikely(!inode->isecurity)) > > > > > > > > return NULL; > > > > > > > > return inode->i_security + integrity_blob_sizes.lbs_inode; > > > > > > > > } > > > > > > > > > > > > > > It would increase memory occupation. Sometimes the IMA policy > > > > > > > encompasses a small subset of the inodes. Allocating the full > > > > > > > integrity_iint_cache would be a waste of memory, I guess? > > > > > > > > > > > > Perhaps, but if it allows us to remove another layer of dynamic memory > > > > > > I would argue that it may be worth the cost. It's also worth > > > > > > considering the size of integrity_iint_cache, while it isn't small, it > > > > > > isn't exactly huge either. > > > > > > > > > > > > > On the other hand... (did not think fully about that) if we embed the > > > > > > > full structure in the security blob, we already have a mutex available > > > > > > > to use, and we don't need to take the inode lock (?). > > > > > > > > > > > > That would be excellent, getting rid of a layer of locking would be significant. > > > > > > > > > > > > > I'm fully convinced that we can improve the implementation > > > > > > > significantly. I just was really hoping to go step by step and not > > > > > > > accumulating improvements as dependency for moving IMA and EVM to the > > > > > > > LSM infrastructure. > > > > > > > > > > > > I understand, and I agree that an iterative approach is a good idea, I > > > > > > just want to make sure we keep things tidy from a user perspective, > > > > > > i.e. not exposing the "integrity" LSM when it isn't required. > > > > > > > > > > Ok, I went back to it again. > > > > > > > > > > I think trying to separate integrity metadata is premature now, too > > > > > many things at the same time. > > > > > > > > I'm not bothered by the size of the patchset, it is more important > > > > that we do The Right Thing. 
I would like to hear in more detail why > > > > you don't think this will work, I'm not interested in hearing about > > > > how difficult it may be, I'm interested in hearing about what challenges > > > > we need to solve to do this properly. > > > > > > The right thing in my opinion is to achieve the goal with the minimal > > > set of changes, in the most intuitive way. > > > > Once again, I want to stress that I don't care about the size of the > > change, the number of patches in a patchset, etc. While it's always > > nice to be able to minimize the number of changes in a patch/patchset, > > that is secondary to making sure we are doing the right thing over the > > long term. This is especially important when we are talking about > > things that are user visible. > > > > > Until now, there was no solution that could achieve the primary goal of > > > this patch set (moving IMA and EVM to the LSM infrastructure) and, at > > > the same time, achieve the additional goal you set of removing the > > > 'integrity' LSM. > > > > We need to stop thinking about the "integrity" code as a LSM, it isn't > > a LSM. It's a vestigial implementation detail that was necessary back > > when there could only be one LSM active at a time and there was a > > desire to have IMA/EVM active in conjunction with one of the LSMs, > > i.e. Smack, SELinux, etc. > > > > IMA and EVM are (or will be) LSMs, "integrity" is not. I recognize > > that eliminating the need for the "integrity" code is a relatively new > > addition to this effort, but that is only because I didn't properly > > understand the relationship between IMA, EVM, and the "integrity" code > > until recently. The elimination of the shared "integrity" code is > > consistent with promoting IMA and EVM as full LSMs, if there is core > > functionality that cannot be split up into the IMA and/or EVM LSMs > > then we need to look at how to support that without exposing that > > implementation detail/hack to userspace. Maybe that means direct > > calls between IMA and EVM, maybe that means preserving some of the > > common integrity code hidden from userspace, maybe that means adding > > functionality to the LSM layer, maybe that means something else? > > Let's think on this to come up with something that we can all accept > > as a long term solution instead of just doing the quick and easy > > option. > > If the result of this patch set should be that IMA and EVM become > proper LSMs without the shared integrity layer, instead of collapsing > all changes in this patch set, I think we should first verify if IMA > and EVM can be really independent. Once we guarantee that, we can > proceed making the proper LSMs. > > These are the changes I have in mind: > > 1) Fix evm_verifyxattr(), and make it work without integrity_iint_cache > 2) Remove the integrity_iint_cache parameter from evm_verifyxattr(), > since the other callers are not going to use it

Ehm, I checked better.

integrity_inode_get() is public too (although it is not exported). So, a caller (not IMA) could do:

    iint = integrity_inode_get(inode);
    status = evm_verifyxattr(..., iint);

However, it should not call integrity_inode_free(), which is also in include/linux/integrity.h, since this is going to be called by security_inode_free() (currently).

> 3) Create an internal function with the original parameters to be used by IMA
> 4) Introduce evm_post_path_mknod(), which similarly to ima_post_path_mknod(), sets IMA_NEW_FILE for new files

I just realized that this also changes the current behavior.
IMA would clear IMA_NEW_FILE in ima_check_last_writer(), while EVM wouldn't (unless we implement the file_release hook in EVM too). > 5) Add hardcoded call to evm_post_path_mknod() after > ima_post_path_mknod() in security.c > > If we think that this is good enough, we proceed with the move of IMA > and EVM functions to the LSM infrastructure (patches v7 19-21). > > The next patches are going to be similar to patches v6 22-23, but > unlike those, their goal would be simply to split metadata, not to make > IMA and EVM independent, which at this point has been addressed > separately in the prerequisite patches. > > The final patch is to remove the 'integrity' LSM and the integrity > metadata management code, which now is not used anymore. > > Would that work? We are not making much progress, I'm going to follow any recommendation that would move this forward. Thanks Roberto > Thanks > > Roberto > > > > If you see the diff, the changes compared to v5 that was already > > > accepted by Mimi are very straightforward. If the assumption I made that > > > in the end the 'ima' LSM could take over the role of the 'integrity' > > > LSM, that for me is the preferable option. > > > > I looked at it quickly, but my workflow isn't well suited for patches > > as attachments; inline patches (the kernel standard) is preferable. > > > > > Given that the patch set is not doing any design change, but merely > > > moving calls and storing pointers elsewhere, that leaves us with the > > > option of thinking better what to do next, including like you suggested > > > to make IMA and EVM use disjoint metadata. > > > > > > > > I started to think, does EVM really need integrity metadata or it can > > > > > work without? > > > > > > > > > > The fact is that CONFIG_IMA=n and CONFIG_EVM=y is allowed, so we have > > > > > the same problem now. What if we make IMA the one that manages > > > > > integrity metadata, so that we can remove the 'integrity' LSM? > > > > > > > > I guess we should probably revisit the basic idea of if it even makes > > > > sense to enable EVM without IMA? Should we update the Kconfig to > > > > require IMA when EVM is enabled? > > > > > > That would be up to Mimi. Also this does not seem the main focus of the > > > patch set. > > > > Yes, it is not part of the original main focus, but it is definitely > > relevant to the discussion we are having now. Once again, the most > > important thing to me is that we do The Right Thing for the long term > > maintenance of the code base; if that means scope creep, I've got no > > problem with that. > > > > > > > Regarding the LSM order, I would take Casey's suggestion of introducing > > > > > LSM_ORDER_REALLY_LAST, for EVM. > > > > > > > > Please understand that I really dislike that we have imposed ordering > > > > constraints at the LSM layer, but I do understand the necessity (the > > > > BPF LSM ordering upsets me the most). I really don't want to see us > > > > make things worse by adding yet another ordering bucket, I would > > > > rather that we document it well and leave it alone ... basically treat > > > > it like the BPF LSM (grrrrrr). > > > > > > Uhm, that would not be possible right away (the BPF LSM is mutable), > > > remember that we defined LSM_ORDER_LAST so that an LSM can be always > > > enable and placed as last (requested by Mimi)? > > > > To be clear, I can both dislike the bpf-always-last and LSM_ORDER_LAST > > concepts while accepting them as necessary evils. 
I'm willing to > > tolerate LSM_ORDER_LAST, but I'm not currently willing to tolerate > > LSM_ORDER_REALLY_LAST; that is one step too far right now. I brought > > up the BPF LSM simply as an example of ordering that is not enforced > > by code, but rather by documentation and convention. > >
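For steps 1-3 of Roberto's list, the split under discussion would
roughly keep today's exported signature as an internal helper for IMA
and give other callers a variant that knows nothing about the iint.
A sketch under stated assumptions: the __evm_verifyxattr() name is
invented here, and passing a NULL iint presumes step 1 has made the
metadata optional inside EVM.

    /* Internal variant, original parameters, used by IMA (step 3). */
    enum integrity_status __evm_verifyxattr(struct dentry *dentry,
                                            const char *xattr_name,
                                            void *xattr_value,
                                            size_t xattr_value_len,
                                            struct integrity_iint_cache *iint);

    /* Public variant without the iint parameter (steps 1 and 2):
     * external callers no longer need integrity_inode_get() first. */
    enum integrity_status evm_verifyxattr(struct dentry *dentry,
                                          const char *xattr_name,
                                          void *xattr_value,
                                          size_t xattr_value_len)
    {
            return __evm_verifyxattr(dentry, xattr_name, xattr_value,
                                     xattr_value_len, NULL);
    }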
On Wed, 2023-12-06 at 14:10 +0100, Roberto Sassu wrote: > On Mon, 2023-12-04 at 14:26 +0100, Roberto Sassu wrote: ... > > If the result of this patch set should be that IMA and EVM become > > proper LSMs without the shared integrity layer, instead of collapsing > > all changes in this patch set, I think we should first verify if IMA > > and EVM can be really independent. Once we guarantee that, we can > > proceed making the proper LSMs. > > > > These are the changes I have in mind: > > > > 1) Fix evm_verifyxattr(), and make it work without integrity_iint_cache > > 2) Remove the integrity_iint_cache parameter from evm_verifyxattr(), > > since the other callers are not going to use it > > Ehm, I checked better. > > integrity_inode_get() is public too (although it is not exported). So, > a caller (not IMA) could do: > > iint = integrity_inode_get(inode); > status = evm_verifyxattr(..., iint); > > However, it should not call integrity_inode_free(), which is also in > include/linux/integrity.h, since this is going to be called by > security_inode_free() (currently). Calling integrity_inode_free() directly would release the iint early. As a result, IMA would then need to re-allocate it on next access. Other than impacting IMA's performance, is this a problem? > > 3) Create an internal function with the original parameters to be used > > by IMA > > 4) Introduce evm_post_path_mknod(), which similarly to > > ima_post_path_mknod(), sets IMA_NEW_FILE for new files > > I just realized that also this is changing the current behavior. > > IMA would clear IMA_NEW_FILE in ima_check_last_writer(), while EVM > wouldn't (unless we implement the file_release hook in EVM too). True Mimi > > 5) Add hardcoded call to evm_post_path_mknod() after > > ima_post_path_mknod() in security.c > > > > If we think that this is good enough, we proceed with the move of IMA > > and EVM functions to the LSM infrastructure (patches v7 19-21). > > > > The next patches are going to be similar to patches v6 22-23, but > > unlike those, their goal would be simply to split metadata, not to make > > IMA and EVM independent, which at this point has been addressed > > separately in the prerequisite patches. > > > > The final patch is to remove the 'integrity' LSM and the integrity > > metadata management code, which now is not used anymore. > > > > Would that work? > > We are not making much progress, I'm going to follow any recommendation > that would move this forward.
On Wed, 2023-12-06 at 11:11 -0500, Mimi Zohar wrote: > On Wed, 2023-12-06 at 14:10 +0100, Roberto Sassu wrote: > > On Mon, 2023-12-04 at 14:26 +0100, Roberto Sassu wrote: > > ... > > > If the result of this patch set should be that IMA and EVM become > > > proper LSMs without the shared integrity layer, instead of collapsing > > > all changes in this patch set, I think we should first verify if IMA > > > and EVM can be really independent. Once we guarantee that, we can > > > proceed making the proper LSMs. > > > > > > These are the changes I have in mind: > > > > > > 1) Fix evm_verifyxattr(), and make it work without integrity_iint_cache > > > 2) Remove the integrity_iint_cache parameter from evm_verifyxattr(), > > > since the other callers are not going to use it > > > > Ehm, I checked better. > > > > integrity_inode_get() is public too (although it is not exported). So, > > a caller (not IMA) could do: > > > > iint = integrity_inode_get(inode); > > status = evm_verifyxattr(..., iint); > > > > However, it should not call integrity_inode_free(), which is also in > > include/linux/integrity.h, since this is going to be called by > > security_inode_free() (currently).

Oh, I needed to add a check for the iint here:

void integrity_inode_free(struct inode *inode)
{
        struct integrity_iint_cache *iint;

        if (!IS_IMA(inode))
                return;

        iint = integrity_iint_find(inode);
        if (!iint)                      <=== this
                return;

        integrity_inode_set_iint(inode, NULL);

        iint_free(iint);
}

> Calling integrity_inode_free() directly would release the iint early. > As a result, IMA would then need to re-allocate it on next access. > Other than impacting IMA's performance, is this a problem?

Uhm, I think the iint could be freed while IMA is using it, for example in process_measurement().

Roberto

> > > 3) Create an internal function with the original parameters to be used > > > by IMA > > > 4) Introduce evm_post_path_mknod(), which similarly to > > > ima_post_path_mknod(), sets IMA_NEW_FILE for new files > > > > I just realized that also this is changing the current behavior. > > True > > Mimi > > > > 5) Add hardcoded call to evm_post_path_mknod() after > > > ima_post_path_mknod() in security.c > > > > > > If we think that this is good enough, we proceed with the move of IMA > > > and EVM functions to the LSM infrastructure (patches v7 19-21). > > > > > > The next patches are going to be similar to patches v6 22-23, but > > > unlike those, their goal would be simply to split metadata, not to make > > > IMA and EVM independent, which at this point has been addressed > > > separately in the prerequisite patches. > > > > > > The final patch is to remove the 'integrity' LSM and the integrity > > > metadata management code, which now is not used anymore. > > > > > > Would that work? > > > > We are not making much progress, I'm going to follow any recommendation > > that would move this forward. >
On 17.11.23 21:57, Paul Moore wrote: > On Nov 7, 2023 Roberto Sassu <roberto.sassu@huaweicloud.com> wrote: >> >> Before the security field of kernel objects could be shared among LSMs with >> the LSM stacking feature, IMA and EVM had to rely on an alternative storage >> of inode metadata. The association between inode metadata and inode is >> maintained through an rbtree. >> >> Because of this alternative storage mechanism, there was no need to use >> disjoint inode metadata, so IMA and EVM today still share them. >> >> With the reservation mechanism offered by the LSM infrastructure, the >> rbtree is no longer necessary, as each LSM could reserve a space in the >> security blob for each inode. However, since IMA and EVM share the >> inode metadata, they cannot directly reserve the space for them. >> >> Instead, request from the 'integrity' LSM a space in the security blob for >> the pointer of inode metadata (integrity_iint_cache structure). The other >> reason for keeping the 'integrity' LSM is to preserve the original ordering >> of IMA and EVM functions as when they were hardcoded. >> >> Prefer reserving space for a pointer to allocating the integrity_iint_cache >> structure directly, as IMA would require it only for a subset of inodes. >> Always allocating it would cause a waste of memory. >> >> Introduce two primitives for getting and setting the pointer of >> integrity_iint_cache in the security blob, respectively >> integrity_inode_get_iint() and integrity_inode_set_iint(). This would make >> the code more understandable, as they directly replace rbtree operations. >> >> Locking is not needed, as access to inode metadata is not shared, it is per >> inode. >> >> Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com> >> Reviewed-by: Casey Schaufler <casey@schaufler-ca.com> >> Reviewed-by: Mimi Zohar <zohar@linux.ibm.com> >> --- >> security/integrity/iint.c | 71 +++++----------------------------- >> security/integrity/integrity.h | 20 +++++++++- >> 2 files changed, 29 insertions(+), 62 deletions(-) >> >> diff --git a/security/integrity/iint.c b/security/integrity/iint.c >> index 882fde2a2607..a5edd3c70784 100644 >> --- a/security/integrity/iint.c >> +++ b/security/integrity/iint.c >> @@ -231,6 +175,10 @@ static int __init integrity_lsm_init(void) >> return 0; >> } >> >> +struct lsm_blob_sizes integrity_blob_sizes __ro_after_init = { >> + .lbs_inode = sizeof(struct integrity_iint_cache *), >> +}; > > I'll admit that I'm likely missing an important detail, but is there > a reason why you couldn't stash the integrity_iint_cache struct > directly in the inode's security blob instead of the pointer? For > example: > > struct lsm_blob_sizes ... = { > .lbs_inode = sizeof(struct integrity_iint_cache), > }; > > struct integrity_iint_cache *integrity_inode_get(inode) > { > if (unlikely(!inode->i_security)) > return NULL;

Ok, this caught my attention...

I see that selinux_inode() has it, but smack_inode() doesn't.

Some Smack code assumes that the inode security blob is always non-NULL:

static void init_inode_smack(struct inode *inode, struct smack_known *skp)
{
        struct inode_smack *isp = smack_inode(inode);

        isp->smk_inode = skp;
        isp->smk_flags = 0;
}

Is that intended? Should I add the check?

Thanks

Roberto

> return inode->i_security + integrity_blob_sizes.lbs_inode; > } > > -- > paul-moore.com
On 12/13/2023 2:45 AM, Roberto Sassu wrote: > On 17.11.23 21:57, Paul Moore wrote: >> On Nov 7, 2023 Roberto Sassu <roberto.sassu@huaweicloud.com> wrote: >>> >>> ... >>> >>> diff --git a/security/integrity/iint.c b/security/integrity/iint.c >>> index 882fde2a2607..a5edd3c70784 100644 >>> --- a/security/integrity/iint.c >>> +++ b/security/integrity/iint.c >>> @@ -231,6 +175,10 @@ static int __init integrity_lsm_init(void) >>> return 0; >>> } >>> +struct lsm_blob_sizes integrity_blob_sizes __ro_after_init = { >>> + .lbs_inode = sizeof(struct integrity_iint_cache *), >>> +}; >> >> I'll admit that I'm likely missing an important detail, but is there >> a reason why you couldn't stash the integrity_iint_cache struct >> directly in the inode's security blob instead of the pointer? For >> example: >> >> struct lsm_blob_sizes ... = { >> .lbs_inode = sizeof(struct integrity_iint_cache), >> }; >> >> struct integrity_iint_cache *integrity_inode_get(inode) >> { >> if (unlikely(!inode->i_security)) >> return NULL; > > Ok, this caught my attention... > > I see that selinux_inode() has it, but smack_inode() doesn't. > > Some Smack code assumes that the inode security blob is always non-NULL: > > static void init_inode_smack(struct inode *inode, struct smack_known > *skp) > { > struct inode_smack *isp = smack_inode(inode); > > isp->smk_inode = skp; > isp->smk_flags = 0; > } > > > Is that intended? Should I add the check?

Unless there's a case where inodes are created without calling security_inode_alloc() there should never be an inode without a security blob by the time you get to the Smack hook. That said, people seem inclined to take all sorts of shortcuts and create various "inodes" that aren't really inodes. I also see that SELinux doesn't check the blob for cred or file structures. And that I wrote the code in both cases. Based on lack of bug reports for Smack on inodes and SELinux on creds or files, it appears that the check is unnecessary. On the other hand, it sure looks like good error detection hygiene. I would be inclined to include the check in new code, but not get in a panic about existing code.

> > Thanks > > Roberto > >> return inode->i_security + integrity_blob_sizes.lbs_inode; >> } >> >> -- >> paul-moore.com > >
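For reference, the defensive variant being weighed here would look like
this for Smack. A sketch only: the in-tree smack_inode() currently has
no such check, and adding it would mean callers like init_inode_smack()
must start tolerating a NULL return.

    static inline struct inode_smack *smack_inode(const struct inode *inode)
    {
            /* Guard against pseudo-inodes created without a call to
             * security_inode_alloc(), which leave i_security NULL. */
            if (unlikely(!inode->i_security))
                    return NULL;
            return inode->i_security + smack_blob_sizes.lbs_inode;
    }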
diff --git a/security/integrity/iint.c b/security/integrity/iint.c
index 882fde2a2607..a5edd3c70784 100644
--- a/security/integrity/iint.c
+++ b/security/integrity/iint.c
@@ -14,56 +14,25 @@
 #include <linux/slab.h>
 #include <linux/init.h>
 #include <linux/spinlock.h>
-#include <linux/rbtree.h>
 #include <linux/file.h>
 #include <linux/uaccess.h>
 #include <linux/security.h>
 #include <linux/lsm_hooks.h>
 #include "integrity.h"
 
-static struct rb_root integrity_iint_tree = RB_ROOT;
-static DEFINE_RWLOCK(integrity_iint_lock);
 static struct kmem_cache *iint_cache __ro_after_init;
 
 struct dentry *integrity_dir;
 
-/*
- * __integrity_iint_find - return the iint associated with an inode
- */
-static struct integrity_iint_cache *__integrity_iint_find(struct inode *inode)
-{
-	struct integrity_iint_cache *iint;
-	struct rb_node *n = integrity_iint_tree.rb_node;
-
-	while (n) {
-		iint = rb_entry(n, struct integrity_iint_cache, rb_node);
-
-		if (inode < iint->inode)
-			n = n->rb_left;
-		else if (inode > iint->inode)
-			n = n->rb_right;
-		else
-			return iint;
-	}
-
-	return NULL;
-}
-
 /*
  * integrity_iint_find - return the iint associated with an inode
  */
 struct integrity_iint_cache *integrity_iint_find(struct inode *inode)
 {
-	struct integrity_iint_cache *iint;
-
 	if (!IS_IMA(inode))
 		return NULL;
 
-	read_lock(&integrity_iint_lock);
-	iint = __integrity_iint_find(inode);
-	read_unlock(&integrity_iint_lock);
-
-	return iint;
+	return integrity_inode_get_iint(inode);
 }
 
 #define IMA_MAX_NESTING (FILESYSTEM_MAX_STACK_DEPTH+1)
@@ -123,9 +92,7 @@ static void iint_free(struct integrity_iint_cache *iint)
  */
 struct integrity_iint_cache *integrity_inode_get(struct inode *inode)
 {
-	struct rb_node **p;
-	struct rb_node *node, *parent = NULL;
-	struct integrity_iint_cache *iint, *test_iint;
+	struct integrity_iint_cache *iint;
 
 	iint = integrity_iint_find(inode);
 	if (iint)
@@ -137,31 +104,10 @@ struct integrity_iint_cache *integrity_inode_get(struct inode *inode)
 
 	iint_init_always(iint, inode);
 
-	write_lock(&integrity_iint_lock);
-
-	p = &integrity_iint_tree.rb_node;
-	while (*p) {
-		parent = *p;
-		test_iint = rb_entry(parent, struct integrity_iint_cache,
-				     rb_node);
-		if (inode < test_iint->inode) {
-			p = &(*p)->rb_left;
-		} else if (inode > test_iint->inode) {
-			p = &(*p)->rb_right;
-		} else {
-			write_unlock(&integrity_iint_lock);
-			kmem_cache_free(iint_cache, iint);
-			return test_iint;
-		}
-	}
-
 	iint->inode = inode;
-	node = &iint->rb_node;
 	inode->i_flags |= S_IMA;
-	rb_link_node(node, parent, p);
-	rb_insert_color(node, &integrity_iint_tree);
+	integrity_inode_set_iint(inode, iint);
 
-	write_unlock(&integrity_iint_lock);
 	return iint;
 }
 
@@ -178,10 +124,8 @@ static void integrity_inode_free(struct inode *inode)
 	if (!IS_IMA(inode))
 		return;
 
-	write_lock(&integrity_iint_lock);
-	iint = __integrity_iint_find(inode);
-	rb_erase(&iint->rb_node, &integrity_iint_tree);
-	write_unlock(&integrity_iint_lock);
+	iint = integrity_iint_find(inode);
+	integrity_inode_set_iint(inode, NULL);
 
 	iint_free(iint);
 }
@@ -231,6 +175,10 @@ static int __init integrity_lsm_init(void)
 	return 0;
 }
 
+struct lsm_blob_sizes integrity_blob_sizes __ro_after_init = {
+	.lbs_inode = sizeof(struct integrity_iint_cache *),
+};
+
 /*
  * Keep it until IMA and EVM can use disjoint integrity metadata, and their
  * initialization order can be swapped without change in their behavior.
@@ -239,6 +187,7 @@ DEFINE_LSM(integrity) = {
 	.name = "integrity",
 	.init = integrity_lsm_init,
 	.order = LSM_ORDER_LAST,
+	.blobs = &integrity_blob_sizes,
 };
 
 /*
diff --git a/security/integrity/integrity.h b/security/integrity/integrity.h
index e4df82d6f6e7..ef2689b5264d 100644
--- a/security/integrity/integrity.h
+++ b/security/integrity/integrity.h
@@ -158,7 +158,6 @@ struct ima_file_id {
 
 /* integrity data associated with an inode */
 struct integrity_iint_cache {
-	struct rb_node rb_node;	/* rooted in integrity_iint_tree */
 	struct mutex mutex;	/* protects: version, flags, digest */
 	struct inode *inode;	/* back pointer to inode in question */
 	u64 version;		/* track inode changes */
@@ -192,6 +191,25 @@ int integrity_kernel_read(struct file *file, loff_t offset,
 #define INTEGRITY_KEYRING_MAX 4
 
 extern struct dentry *integrity_dir;
+extern struct lsm_blob_sizes integrity_blob_sizes;
+
+static inline struct integrity_iint_cache *
+integrity_inode_get_iint(const struct inode *inode)
+{
+	struct integrity_iint_cache **iint_sec;
+
+	iint_sec = inode->i_security + integrity_blob_sizes.lbs_inode;
+	return *iint_sec;
+}
+
+static inline void integrity_inode_set_iint(const struct inode *inode,
+					    struct integrity_iint_cache *iint)
+{
+	struct integrity_iint_cache **iint_sec;
+
+	iint_sec = inode->i_security + integrity_blob_sizes.lbs_inode;
+	*iint_sec = iint;
+}
 
 struct modsig;
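A note on the accessors above, for readers puzzled that a field named
for a size (lbs_inode) is used as an offset: at init time the LSM
framework walks every registered LSM's blob-size requests and rewrites
each lbs_* field from "bytes requested" to "this LSM's offset within
the shared blob" (see lsm_set_blob_size() in security/security.c),
which is what makes the pointer arithmetic work. A simplified
paraphrase, not the exact code (total_inode_blob_size stands in for
the framework's running total):

    /* For each LSM, in registration order: */
    offset = ALIGN(total_inode_blob_size, sizeof(void *));
    total_inode_blob_size = offset + integrity_blob_sizes.lbs_inode;
    integrity_blob_sizes.lbs_inode = offset;    /* size becomes offset */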