Date: Wed, 14 Apr 2021 15:25:34 +0200
From: Michal Hocko
To: Feng Tang
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
	Andrea Arcangeli, David Rientjes, Mel Gorman, Mike Kravetz,
	Randy Dunlap, Vlastimil Babka, Dave Hansen, Ben Widawsky,
	Andi Kleen, Dan Williams
Subject: Re: [PATCH v4 11/13] mm/mempolicy: huge-page allocation for many preferred
References: <1615952410-36895-1-git-send-email-feng.tang@intel.com>
	<1615952410-36895-12-git-send-email-feng.tang@intel.com>
In-Reply-To: <1615952410-36895-12-git-send-email-feng.tang@intel.com>
Please use a hugetlb prefix to make it explicit that this is hugetlb
related.

On Wed 17-03-21 11:40:08, Feng Tang wrote:
> From: Ben Widawsky
> 
> Implement the missing huge page allocation functionality while obeying
> the preferred node semantics.
> 
> This uses a fallback mechanism to try multiple preferred nodes first,
> and then all other nodes. It cannot use the helper function that was
> introduced, because huge page allocation already has its own helpers
> and it would have taken more LOC and effort to consolidate them.
> 
> The weirdness is that MPOL_PREFERRED_MANY can't be referenced yet
> because it is part of the UAPI we haven't yet exposed. Instead of
> making that define global, it's simply changed with the UAPI patch.
> 
> [ feng: add NOWARN flag, and skip the direct reclaim to speed up
>   allocation in some cases ]
> 
> Link: https://lore.kernel.org/r/20200630212517.308045-12-ben.widawsky@intel.com
> Signed-off-by: Ben Widawsky
> Signed-off-by: Feng Tang
> ---
>  mm/hugetlb.c   | 26 +++++++++++++++++++++++---
>  mm/mempolicy.c |  3 ++-
>  2 files changed, 25 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 8fb42c6..9dfbfa3 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1105,7 +1105,7 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
>  		unsigned long address, int avoid_reserve,
>  		long chg)
>  {
> -	struct page *page;
> +	struct page *page = NULL;
>  	struct mempolicy *mpol;
>  	gfp_t gfp_mask;
>  	nodemask_t *nodemask;
> @@ -1126,7 +1126,17 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
>  
>  	gfp_mask = htlb_alloc_mask(h);
>  	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
> -	page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
> +	if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */

Please use MPOL_PREFERRED_MANY explicitly here.

> +		gfp_t gfp_mask1 = gfp_mask | __GFP_NOWARN;
> +
> +		gfp_mask1 &= ~__GFP_DIRECT_RECLAIM;
> +		page = dequeue_huge_page_nodemask(h,
> +				gfp_mask1, nid, nodemask);
> +		if (!page)
> +			page = dequeue_huge_page_nodemask(h, gfp_mask, nid, NULL);
> +	} else {
> +		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
> +	}
>  	if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
>  		SetHPageRestoreReserve(page);
>  		h->resv_huge_pages--;

The __GFP_DIRECT_RECLAIM handling is not needed here.
dequeue_huge_page_nodemask only uses the gfp mask to derive zone and
cpuset constraints; it never allocates. So the above should have simply
been:

	if (mpol->mode == MPOL_PREFERRED_MANY) {
		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
		if (page)
			goto got_page;
		/* fallback to all nodes */
		nodemask = NULL;
	}
	page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
got_page:
	if (page ...)
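Putting that together with the quoted hunk, the dequeue path would read
roughly as follows. This is only a sketch based on the above; it assumes
MPOL_PREFERRED_MANY is already visible at this point, which the series
only exposes in the later UAPI patch:

	gfp_mask = htlb_alloc_mask(h);
	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
	if (mpol->mode == MPOL_PREFERRED_MANY) {
		/* First pass: dequeue only from the preferred nodes. */
		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
		if (page)
			goto got_page;
		/* Second pass: fall back to all allowed nodes. */
		nodemask = NULL;
	}
	page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
got_page:
	if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
		SetHPageRestoreReserve(page);
		h->resv_huge_pages--;
	}

No gfp tweaking is needed in this sketch because the dequeue path never
allocates; the mask is only used for zone and cpuset filtering.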
> @@ -1883,7 +1893,17 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
>  	nodemask_t *nodemask;
>  
>  	nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
> -	page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask);
> +	if (mpol->mode != MPOL_BIND && nodemask) { /* AKA MPOL_PREFERRED_MANY */
> +		gfp_t gfp_mask1 = gfp_mask | __GFP_NOWARN;
> +
> +		gfp_mask1 &= ~__GFP_DIRECT_RECLAIM;
> +		page = alloc_surplus_huge_page(h,
> +				gfp_mask1, nid, nodemask);
> +		if (!page)
> +			page = alloc_surplus_huge_page(h, gfp_mask, nid, NULL);
> +	} else {
> +		page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask);
> +	}

And here similarly:

	if (mpol->mode == MPOL_PREFERRED_MANY) {
		page = alloc_surplus_huge_page(h,
				(gfp_mask | __GFP_NOWARN) & ~__GFP_DIRECT_RECLAIM,
				nid, nodemask);
		if (page)
			goto got_page;
		/* fallback to all nodes */
		nodemask = NULL;
	}
	page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask);
got_page:

>  	mpol_cond_put(mpol);

You can have a dedicated gfp mask here if you prefer, of course, but
calling out MPOL_PREFERRED_MANY explicitly will make the code easier to
read.

>  	return page;

-- 
Michal Hocko
SUSE Labs
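
A footnote on the gfp tweak that stays in this path: unlike the dequeue
path, alloc_surplus_huge_page really allocates, so making the first,
node-restricted attempt cheap is worthwhile. A minimal sketch of the
adjustment (the variable name is illustrative only, not from the patch):

	/*
	 * Optimistic first pass: stay quiet on failure and skip direct
	 * reclaim while still restricted to the preferred nodes; the
	 * unmodified gfp_mask is kept for the fallback pass.
	 */
	gfp_t optimistic_gfp = (gfp_mask | __GFP_NOWARN) & ~__GFP_DIRECT_RECLAIM;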