Subject: Re: [PATCH 1/2] mm/page-alloc: Rename gfp_mask to gfp
To: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
References: <20210124120357.701077-1-willy@infradead.org> <20210124120357.701077-2-willy@infradead.org>
From: Vlastimil Babka
Date: Tue, 26 Jan 2021 14:43:40 +0100
In-Reply-To: <20210124120357.701077-2-willy@infradead.org>

On 1/24/21 1:03 PM, Matthew Wilcox (Oracle) wrote:
> Shorten some overly-long lines by renaming this identifier.
>
> Signed-off-by: Matthew Wilcox (Oracle)

Acked-by: Vlastimil Babka

> ---
>  mm/page_alloc.c | 19 ++++++++++---------
>  1 file changed, 10 insertions(+), 9 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index b031a5ae0bd5..d72ef706f6e6 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4963,7 +4963,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
>   * This is the 'heart' of the zoned buddy allocator.
>   */
>  struct page *
> -__alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
> +__alloc_pages_nodemask(gfp_t gfp, unsigned int order, int preferred_nid,
>  							nodemask_t *nodemask)
>  {
>  	struct page *page;
> @@ -4976,20 +4976,21 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
>  	 * so bail out early if the request is out of bound.
>  	 */
>  	if (unlikely(order >= MAX_ORDER)) {
> -		WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
> +		WARN_ON_ONCE(!(gfp & __GFP_NOWARN));
>  		return NULL;
>  	}
>
> -	gfp_mask &= gfp_allowed_mask;
> -	alloc_mask = gfp_mask;
> -	if (!prepare_alloc_pages(gfp_mask, order, preferred_nid, nodemask, &ac, &alloc_mask, &alloc_flags))
> +	gfp &= gfp_allowed_mask;
> +	alloc_mask = gfp;
> +	if (!prepare_alloc_pages(gfp, order, preferred_nid, nodemask, &ac,
> +			&alloc_mask, &alloc_flags))
>  		return NULL;
>
>  	/*
>  	 * Forbid the first pass from falling back to types that fragment
>  	 * memory until all local zones are considered.
>  	 */
> -	alloc_flags |= alloc_flags_nofragment(ac.preferred_zoneref->zone, gfp_mask);
> +	alloc_flags |= alloc_flags_nofragment(ac.preferred_zoneref->zone, gfp);
>
>  	/* First allocation attempt */
>  	page = get_page_from_freelist(alloc_mask, order, alloc_flags, &ac);
> @@ -5002,7 +5003,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
>  	 * from a particular context which has been marked by
>  	 * memalloc_no{fs,io}_{save,restore}.
>  	 */
> -	alloc_mask = current_gfp_context(gfp_mask);
> +	alloc_mask = current_gfp_context(gfp);
>  	ac.spread_dirty_pages = false;
>
>  	/*
> @@ -5014,8 +5015,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
>  	page = __alloc_pages_slowpath(alloc_mask, order, &ac);
>
>  out:
> -	if (memcg_kmem_enabled() && (gfp_mask & __GFP_ACCOUNT) && page &&
> -	    unlikely(__memcg_kmem_charge_page(page, gfp_mask, order) != 0)) {
> +	if (memcg_kmem_enabled() && (gfp & __GFP_ACCOUNT) && page &&
> +	    unlikely(__memcg_kmem_charge_page(page, gfp, order) != 0)) {
>  		__free_pages(page, order);
>  		page = NULL;
>  	}
>
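Not part of the patch, just for context: a minimal sketch (my own illustration, with made-up helper names) of how a caller's gfp flags typically reach this entry point through the alloc_pages() wrapper, i.e. what ends up in the renamed "gfp" parameter above.

    #include <linux/gfp.h>

    /*
     * Illustration only: allocate an order-1 block (two contiguous pages).
     * GFP_KERNEL may sleep; __GFP_NOWARN suppresses allocation-failure
     * warnings (the same flag tested in the WARN_ON_ONCE hunk above).
     */
    static struct page *example_grab_pages(void)
    {
            return alloc_pages(GFP_KERNEL | __GFP_NOWARN, 1);
    }

    static void example_drop_pages(struct page *page)
    {
            if (page)
                    __free_pages(page, 1); /* order must match the allocation */
    }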