Date: Wed, 21 Mar 2018 15:09:35 -0500 (CDT)
From: Christopher Lameter
Subject: Re: [PATCH] slab: introduce the flag SLAB_MINIMIZE_WASTE
References: <20180320173512.GA19669@bombadil.infradead.org>
To: Mikulas Patocka
Cc: Matthew Wilcox, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, linux-mm@kvack.org, dm-devel@redhat.com, Mike Snitzer

On Wed, 21 Mar 2018, Mikulas Patocka wrote:

> For example, if someone creates a slab cache with the flag SLAB_CACHE_DMA
> and allocates an object from this cache, and the allocation races with a
> user writing to /sys/kernel/slab/cache/order, then the allocator can see
> "s->allocflags == 0" for a short period of time and allocate a non-DMA
> page.

That is a bug. True, we need to fix that:


Subject: Avoid making s->allocflags visible before all flags are set

During slab size recalculation, s->allocflags is temporarily set to 0 and
the individual flags are then ORed back in one by one. A concurrent
allocation can observe this intermediate state and pass the wrong flags to
the page allocator. Compute the flags in a local variable and assign them
to s->allocflags with a single store.

Slab size calculation happens in two cases:

1. When a slab cache is created (which is safe, since there cannot be any
   concurrent allocations yet).

2. When the slab order is changed via sysfs.

Signed-off-by: Christoph Lameter

Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c
+++ linux/mm/slub.c
@@ -3457,6 +3457,7 @@ static void set_cpu_partial(struct kmem_
 static int calculate_sizes(struct kmem_cache *s, int forced_order)
 {
 	slab_flags_t flags = s->flags;
+	gfp_t allocflags;
 	size_t size = s->object_size;
 	int order;
 
@@ -3551,16 +3552,17 @@ static int calculate_sizes(struct kmem_c
 	if (order < 0)
 		return 0;
 
-	s->allocflags = 0;
+	allocflags = 0;
 	if (order)
-		s->allocflags |= __GFP_COMP;
+		allocflags |= __GFP_COMP;
 
 	if (s->flags & SLAB_CACHE_DMA)
-		s->allocflags |= GFP_DMA;
+		allocflags |= GFP_DMA;
 
 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
-		s->allocflags |= __GFP_RECLAIMABLE;
+		allocflags |= __GFP_RECLAIMABLE;
+	s->allocflags = allocflags;
 
 	/*
 	 * Determine the number of objects per slab
 	 */
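
Illustration only, not part of the patch above: a minimal, self-contained
userspace C sketch of the pattern the patch applies, namely assembling the
flag word in a local variable and publishing it with a single store so a
concurrent reader never observes the transient zero. Every identifier in
it (struct fake_cache, the FAKE_GFP_* and FAKE_* constants, recalc_racy,
recalc_fixed) is hypothetical and does not exist in slub.c.

/*
 * Illustration only; not kernel code.  The two functions mirror the
 * shape of the update in calculate_sizes(), nothing more.
 */
#include <stdio.h>

#define FAKE_GFP_COMP        0x1u
#define FAKE_GFP_DMA         0x2u
#define FAKE_GFP_RECLAIMABLE 0x4u

#define FAKE_CACHE_DMA       0x1u
#define FAKE_RECLAIM_ACCOUNT 0x2u

struct fake_cache {
	unsigned int flags;      /* cache creation flags */
	unsigned int allocflags; /* read concurrently by allocators */
};

/* Racy shape: a concurrent reader can see allocflags == 0 mid-update. */
static void recalc_racy(struct fake_cache *s, int order)
{
	s->allocflags = 0;               /* transient window with no flags set */
	if (order)
		s->allocflags |= FAKE_GFP_COMP;
	if (s->flags & FAKE_CACHE_DMA)
		s->allocflags |= FAKE_GFP_DMA;
	if (s->flags & FAKE_RECLAIM_ACCOUNT)
		s->allocflags |= FAKE_GFP_RECLAIMABLE;
}

/* Fixed shape: assemble locally, publish with a single store. */
static void recalc_fixed(struct fake_cache *s, int order)
{
	unsigned int allocflags = 0;

	if (order)
		allocflags |= FAKE_GFP_COMP;
	if (s->flags & FAKE_CACHE_DMA)
		allocflags |= FAKE_GFP_DMA;
	if (s->flags & FAKE_RECLAIM_ACCOUNT)
		allocflags |= FAKE_GFP_RECLAIMABLE;

	s->allocflags = allocflags;      /* readers see old or new, never 0 */
}

int main(void)
{
	struct fake_cache s = { .flags = FAKE_CACHE_DMA, .allocflags = 0 };

	recalc_racy(&s, 1);
	printf("racy  result: %#x\n", s.allocflags);

	recalc_fixed(&s, 1);
	printf("fixed result: %#x\n", s.allocflags);
	return 0;
}

The sketch only demonstrates the ordering of the stores; it does not spawn
a racing reader. The point, as in the patch, is that a reader of allocflags
sees either the old value or the complete new value, never an intermediate
one.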