From: Kees Cook
Date: Thu, 16 Nov 2017 16:16:58 -0800
Subject: Re: [PATCH 04/16] sched: convert user_struct.__count to refcount_t
In-Reply-To: <1510754620-27088-5-git-send-email-elena.reshetova@intel.com>
To: Andrew Morton, Elena Reshetova
Cc: Ingo Molnar, LKML, linux-fsdevel@vger.kernel.org, Peter Zijlstra,
    Greg KH, Al Viro, Tejun Heo, Johannes Weiner, Li Zefan,
    Arnaldo Carvalho de Melo, Alexander Shishkin, Eric Paris,
    Arnd Bergmann, Andy Lutomirski, Thomas Gleixner, Darren Hart,
    "Eric W. Biederman", Linux-MM, Jens Axboe

On Wed, Nov 15, 2017 at 6:03 AM, Elena Reshetova wrote:
> atomic_t variables are currently used to implement reference
> counters with the following properties:
>  - the counter is initialized to 1 using atomic_set()
>  - a resource is freed upon the counter reaching zero
>  - once the counter reaches zero, further
>    increments aren't allowed
>  - the counter schema uses basic atomic operations
>    (set, inc, inc_not_zero, dec_and_test, etc.)
>
> Such atomic variables should be converted to the newly provided
> refcount_t type and API that prevents accidental counter overflows
> and underflows. This is important since overflows and underflows
> can lead to use-after-free situations and be exploitable.
>
> The variable user_struct.__count is used as a pure reference counter.
> Convert it to refcount_t and fix up the operations.
>
> **Important note for maintainers:
>
> Some functions from the refcount_t API defined in lib/refcount.c
> have different memory ordering guarantees than their atomic
> counterparts.
> The full comparison can be seen in
> https://lkml.org/lkml/2017/11/15/57 and it will hopefully soon
> be in a state to be merged into the documentation tree.
> Normally the differences should not matter since refcount_t provides
> enough guarantees to satisfy the refcounting use cases, but in
> some rare cases they might matter.
> Please double check that you don't have any undocumented
> memory ordering guarantees for this variable's usage.
>
> For user_struct.__count it might make a difference
> in the following places:
>  - free_uid(): the decrement in refcount_dec_and_lock() only
>    provides RELEASE ordering and a control dependency on success,
>    and holds a spin lock on success, vs. the fully ordered
>    atomic counterpart. Note there are no changes to the spin lock
>    locking here.

As with the others, this looks correct to me. Andrew, this looks like
one for you again. :)
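For readers less familiar with the pattern, here is a minimal sketch of the
refcounting scheme the commit message above describes, expressed with the
refcount_t API. The "struct foo" object and its helpers are made-up
illustrations for this note, not code from the patch:

    /* Illustrative only: a hypothetical refcounted object. */
    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct foo {
            refcount_t refs;
            /* ... payload ... */
    };

    static struct foo *foo_alloc(void)
    {
            struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

            if (f)
                    refcount_set(&f->refs, 1);   /* counter starts at 1 */
            return f;
    }

    static struct foo *foo_get(struct foo *f)
    {
            refcount_inc(&f->refs);              /* saturates and WARNs instead of overflowing */
            return f;
    }

    static void foo_put(struct foo *f)
    {
            if (refcount_dec_and_test(&f->refs)) /* free on the 1 -> 0 transition */
                    kfree(f);
    }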
Acked-by: Kees Cook

-Kees

>
> Suggested-by: Kees Cook
> Reviewed-by: David Windsor
> Reviewed-by: Hans Liljestrand
> Signed-off-by: Elena Reshetova
> ---
>  include/linux/sched/user.h | 5 +++--
>  kernel/user.c              | 8 ++++----
>  2 files changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/sched/user.h b/include/linux/sched/user.h
> index 0dcf4e4..2ca7cf4 100644
> --- a/include/linux/sched/user.h
> +++ b/include/linux/sched/user.h
> @@ -4,6 +4,7 @@
>
>  #include <linux/uidgid.h>
>  #include <linux/atomic.h>
> +#include <linux/refcount.h>
>
>  struct key;
>
> @@ -11,7 +12,7 @@ struct key;
>   * Some day this will be a full-fledged user tracking system..
>   */
>  struct user_struct {
> -        atomic_t __count;       /* reference count */
> +        refcount_t __count;     /* reference count */
>          atomic_t processes;     /* How many processes does this user have? */
>          atomic_t sigpending;    /* How many pending signals does this user have? */
>  #ifdef CONFIG_FANOTIFY
> @@ -55,7 +56,7 @@ extern struct user_struct root_user;
>  extern struct user_struct * alloc_uid(kuid_t);
>  static inline struct user_struct *get_uid(struct user_struct *u)
>  {
> -        atomic_inc(&u->__count);
> +        refcount_inc(&u->__count);
>          return u;
>  }
>  extern void free_uid(struct user_struct *);
> diff --git a/kernel/user.c b/kernel/user.c
> index 9a20acc..f104474 100644
> --- a/kernel/user.c
> +++ b/kernel/user.c
> @@ -96,7 +96,7 @@ static DEFINE_SPINLOCK(uidhash_lock);
>
>  /* root_user.__count is 1, for init task cred */
>  struct user_struct root_user = {
> -        .__count        = ATOMIC_INIT(1),
> +        .__count        = REFCOUNT_INIT(1),
>          .processes      = ATOMIC_INIT(1),
>          .sigpending     = ATOMIC_INIT(0),
>          .locked_shm     = 0,
> @@ -122,7 +122,7 @@ static struct user_struct *uid_hash_find(kuid_t uid, struct hlist_head *hashent)
>
>          hlist_for_each_entry(user, hashent, uidhash_node) {
>                  if (uid_eq(user->uid, uid)) {
> -                        atomic_inc(&user->__count);
> +                        refcount_inc(&user->__count);
>                          return user;
>                  }
>          }
> @@ -169,7 +169,7 @@ void free_uid(struct user_struct *up)
>                  return;
>
>          local_irq_save(flags);
> -        if (atomic_dec_and_lock(&up->__count, &uidhash_lock))
> +        if (refcount_dec_and_lock(&up->__count, &uidhash_lock))
>                  free_user(up, flags);
>          else
>                  local_irq_restore(flags);
> @@ -190,7 +190,7 @@ struct user_struct *alloc_uid(kuid_t uid)
>                  goto out_unlock;
>
>          new->uid = uid;
> -        atomic_set(&new->__count, 1);
> +        refcount_set(&new->__count, 1);
>
>          /*
>           * Before adding this, check whether we raced
> --
> 2.7.4
>

-- 
Kees Cook
Pixel Security
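As an aside on the ordering note quoted above, here is a minimal sketch of
the teardown pattern refcount_dec_and_lock() is meant for; the "struct obj"
and "obj_lock" names are hypothetical, not from this patch. The final
decrement returns true with the lock already held, so the object can be
unhashed and freed under the lock, and only RELEASE ordering is needed on
the decrement itself:

    #include <linux/refcount.h>
    #include <linux/spinlock.h>
    #include <linux/list.h>
    #include <linux/slab.h>

    static DEFINE_SPINLOCK(obj_lock);

    struct obj {
            refcount_t refs;
            struct list_head node;  /* list membership, protected by obj_lock */
    };

    static void obj_put(struct obj *o)
    {
            /* Returns true, with obj_lock held, only for the final reference. */
            if (refcount_dec_and_lock(&o->refs, &obj_lock)) {
                    list_del(&o->node);
                    spin_unlock(&obj_lock);
                    kfree(o);
            }
    }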