Date: Wed, 16 Oct 2019 13:06:04 +0200
From: Michal Hocko
To: "Uladzislau Rezki (Sony)"
Cc: Andrew Morton, Daniel Wagner, Sebastian Andrzej Siewior,
	Thomas Gleixner, linux-mm@kvack.org, LKML, Peter Zijlstra,
	Hillf Danton, Matthew Wilcox, Oleksiy Avramchenko, Steven Rostedt
Subject: Re: [PATCH v3 2/3] mm/vmalloc: respect passed gfp_mask when do preloading
Message-ID: <20191016110604.GT317@dhcp22.suse.cz>
References: <20191016095438.12391-1-urezki@gmail.com> <20191016095438.12391-2-urezki@gmail.com>
In-Reply-To: <20191016095438.12391-2-urezki@gmail.com>

On Wed 16-10-19 11:54:37, Uladzislau Rezki (Sony) wrote:
> alloc_vmap_area() is given a gfp_mask for the page allocator.
> Let's respect that mask and consider it even in the case when
> doing regular CPU preloading, i.e. where a context can sleep.

This explains what the patch does but it doesn't say why. I would go
with:

"
Allocation functions should comply with the given gfp_mask as much as
possible. The preallocation code in alloc_vmap_area doesn't follow that
pattern and it is using a hardcoded GFP_KERNEL. Although this doesn't
really make much difference because vmalloc is not GFP_NOWAIT compliant
in general (e.g. page table allocations are GFP_KERNEL), there is no
reason to spread that bad habit and it is good to fix the antipattern.
"
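
To illustrate why the hardcoded flag matters, here is a minimal sketch
of the mismatch (the caller is hypothetical and the argument order of
alloc_vmap_area is simplified; GFP_NOWAIT, GFP_KERNEL and
__GFP_DIRECT_RECLAIM are the real flags from include/linux/gfp.h):

	/*
	 * A caller in atomic context passes GFP_NOWAIT, which does not
	 * contain __GFP_DIRECT_RECLAIM, i.e. nothing done on its behalf
	 * is allowed to sleep.
	 */
	va = alloc_vmap_area(size, align, vstart, vend, node, GFP_NOWAIT);

	/*
	 * Before this patch the preload path ignored that and did
	 *
	 *	pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
	 *
	 * GFP_KERNEL includes __GFP_DIRECT_RECLAIM, so this allocation
	 * may sleep regardless of what the caller asked for. Reducing
	 * the caller's mask once with GFP_RECLAIM_MASK and reusing it
	 * for every internal allocation keeps the preload consistent
	 * with the caller's constraints.
	 */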

> Signed-off-by: Uladzislau Rezki (Sony)

Acked-by: Michal Hocko

> ---
>  mm/vmalloc.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index b7b443bfdd92..593bf554518d 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1064,9 +1064,9 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>  		return ERR_PTR(-EBUSY);
>
>  	might_sleep();
> +	gfp_mask = gfp_mask & GFP_RECLAIM_MASK;
>
> -	va = kmem_cache_alloc_node(vmap_area_cachep,
> -			gfp_mask & GFP_RECLAIM_MASK, node);
> +	va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
>  	if (unlikely(!va))
>  		return ERR_PTR(-ENOMEM);
>
> @@ -1074,7 +1074,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>  	 * Only scan the relevant parts containing pointers to other objects
>  	 * to avoid false negatives.
>  	 */
> -	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask & GFP_RECLAIM_MASK);
> +	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
>
>  retry:
>  	/*
> @@ -1100,7 +1100,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>  	 * Just proceed as it is. If needed "overflow" path
>  	 * will refill the cache we allocate from.
>  	 */
> -	pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
> +	pva = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
>
>  	spin_lock(&vmap_area_lock);
>
> --
> 2.20.1
>

-- 
Michal Hocko
SUSE Labs