From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH V3] mm/page_alloc: Add alloc_contig_pages()
To: Anshuman Khandual
CC: Mike Kravetz, Andrew Morton, Vlastimil Babka, Michal Hocko,
	David Rientjes, Andrea Arcangeli, Oscar Salvador, Mel Gorman,
	Mike Rapoport, Dan Williams, Pavel Tatashin, Matthew Wilcox,
	David Hildenbrand
References: <1571300646-32240-1-git-send-email-anshuman.khandual@arm.com>
From: John Hubbard
Date: Thu, 17 Oct 2019 14:14:20 -0700
In-Reply-To: <1571300646-32240-1-git-send-email-anshuman.khandual@arm.com>

On 10/17/19 1:24 AM, Anshuman Khandual wrote:
> The HugeTLB helper alloc_gigantic_page() implements a fairly generic
> allocation method: it scans over various zones looking for a large
> contiguous pfn range before trying to allocate it with
> alloc_contig_range(). Other than deriving the requested order from
> 'struct hstate', there is nothing HugeTLB specific in there. This can be
> made available for general use, to allocate contiguous memory that could
> not be allocated through the buddy allocator.
>
> alloc_gigantic_page() has been split up, carving out the actual
> allocation method, which is now made available via a new
> alloc_contig_pages() helper wrapped under CONFIG_CONTIG_ALLOC. All
> references to 'gigantic' have been replaced with the more generic term
> 'contig'. Pages allocated here should be freed with free_contig_range(),
> or by calling __free_page() on each allocated page.
>
> Cc: Mike Kravetz
> Cc: Andrew Morton
> Cc: Vlastimil Babka
> Cc: Michal Hocko
> Cc: David Rientjes
> Cc: Andrea Arcangeli
> Cc: Oscar Salvador
> Cc: Mel Gorman
> Cc: Mike Rapoport
> Cc: Dan Williams
> Cc: Pavel Tatashin
> Cc: Matthew Wilcox
> Cc: David Hildenbrand
> Cc: linux-kernel@vger.kernel.org
> Acked-by: David Hildenbrand
> Acked-by: Michal Hocko
> Signed-off-by: Anshuman Khandual
> ---
> This is based on https://patchwork.kernel.org/patch/11190213/

Hi Anshuman,

I'm having trouble finding a tree that this applies cleanly to. Which one
did you use? (Latest linux-next or linux.git would be nice.)
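
For anyone trying this out in the meantime, here's roughly how I'd expect
a caller to use the new helper. This is only a minimal sketch against this
patch, not compiled or tested; demo_alloc_64_pages() is an illustrative
name (not anything in the tree), and the gfp mask choice is my assumption:

    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <linux/topology.h>

    /* Try to grab 64 physically contiguous pages near the local node. */
    static struct page *demo_alloc_64_pages(void)
    {
            /*
             * __GFP_MOVABLE steers the search toward movable memory,
             * which alloc_contig_range() can migrate out of the way.
             * NULL nodemask: no restriction beyond the preferred node.
             */
            return alloc_contig_pages(64, GFP_KERNEL | __GFP_MOVABLE,
                                      numa_node_id(), NULL);
    }

A NULL return means no suitable pfn range could be found (or isolated and
migrated), so callers need a fallback path.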

thanks,
John Hubbard
NVIDIA

>
> Changes in V3:
>
> - Added an in-code comment per Michal and David
>
> Changes in V2:
>
> - Rephrased patch subject per David
> - Fixed all typos per David
> - s/order/contiguous
>
> Changes from [V5,1/2] mm/hugetlb: Make alloc_gigantic_page()...
>
> - alloc_contig_pages() takes nr_pages instead of order per Michal
> - s/gigantic/contig on all related functions
>
>  include/linux/gfp.h |   2 +
>  mm/hugetlb.c        |  77 +--------------------------------
>  mm/page_alloc.c     | 101 ++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 105 insertions(+), 75 deletions(-)
>
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index fb07b503dc45..1a11d4857027 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -589,6 +589,8 @@ static inline bool pm_suspended_storage(void)
>  /* The below functions must be run on a range from a single zone. */
>  extern int alloc_contig_range(unsigned long start, unsigned long end,
>  			      unsigned migratetype, gfp_t gfp_mask);
> +extern struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
> +				       int nid, nodemask_t *nodemask);
>  #endif
>  void free_contig_range(unsigned long pfn, unsigned int nr_pages);
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 985ee15eb04b..a5c2c880af27 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1023,85 +1023,12 @@ static void free_gigantic_page(struct page *page, unsigned int order)
>  }
>
>  #ifdef CONFIG_CONTIG_ALLOC
> -static int __alloc_gigantic_page(unsigned long start_pfn,
> -				 unsigned long nr_pages, gfp_t gfp_mask)
> -{
> -	unsigned long end_pfn = start_pfn + nr_pages;
> -	return alloc_contig_range(start_pfn, end_pfn, MIGRATE_MOVABLE,
> -				  gfp_mask);
> -}
> -
> -static bool pfn_range_valid_gigantic(struct zone *z,
> -			unsigned long start_pfn, unsigned long nr_pages)
> -{
> -	unsigned long i, end_pfn = start_pfn + nr_pages;
> -	struct page *page;
> -
> -	for (i = start_pfn; i < end_pfn; i++) {
> -		page = pfn_to_online_page(i);
> -		if (!page)
> -			return false;
> -
> -		if (page_zone(page) != z)
> -			return false;
> -
> -		if (PageReserved(page))
> -			return false;
> -
> -		if (page_count(page) > 0)
> -			return false;
> -
> -		if (PageHuge(page))
> -			return false;
> -	}
> -
> -	return true;
> -}
> -
> -static bool zone_spans_last_pfn(const struct zone *zone,
> -			unsigned long start_pfn, unsigned long nr_pages)
> -{
> -	unsigned long last_pfn = start_pfn + nr_pages - 1;
> -	return zone_spans_pfn(zone, last_pfn);
> -}
> -
>  static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>  		int nid, nodemask_t *nodemask)
>  {
> -	unsigned int order = huge_page_order(h);
> -	unsigned long nr_pages = 1 << order;
> -	unsigned long ret, pfn, flags;
> -	struct zonelist *zonelist;
> -	struct zone *zone;
> -	struct zoneref *z;
> -
> -	zonelist = node_zonelist(nid, gfp_mask);
> -	for_each_zone_zonelist_nodemask(zone, z, zonelist, gfp_zone(gfp_mask), nodemask) {
> -		spin_lock_irqsave(&zone->lock, flags);
> +	unsigned long nr_pages = 1UL << huge_page_order(h);
>
> -		pfn = ALIGN(zone->zone_start_pfn, nr_pages);
> -		while (zone_spans_last_pfn(zone, pfn, nr_pages)) {
> -			if (pfn_range_valid_gigantic(zone, pfn, nr_pages)) {
> -				/*
> -				 * We release the zone lock here because
> -				 * alloc_contig_range() will also lock the zone
> -				 * at some point. If there's an allocation
> -				 * spinning on this lock, it may win the race
> -				 * and cause alloc_contig_range() to fail...
> -				 */
> -				spin_unlock_irqrestore(&zone->lock, flags);
> -				ret = __alloc_gigantic_page(pfn, nr_pages, gfp_mask);
> -				if (!ret)
> -					return pfn_to_page(pfn);
> -				spin_lock_irqsave(&zone->lock, flags);
> -			}
> -			pfn += nr_pages;
> -		}
> -
> -		spin_unlock_irqrestore(&zone->lock, flags);
> -	}
> -
> -	return NULL;
> +	return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
>  }
>
>  static void prep_new_huge_page(struct hstate *h, struct page *page, int nid);
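
The hugetlb side looks nice and small now. Just to make it concrete: for
the 1GB hstate on x86_64 with 4K pages, huge_page_order() is 18, so the
wrapper above reduces to a single call (example values are mine, not from
the patch):

    /* 1GB gigantic page: 1UL << 18 == 262144 contiguous 4K pages */
    page = alloc_contig_pages(1UL << 18, gfp_mask, nid, nodemask);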
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index cd1dd0712624..fe76be55c9d5 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8499,6 +8499,107 @@ int alloc_contig_range(unsigned long start, unsigned long end,
>  				 pfn_max_align_up(end), migratetype);
>  	return ret;
>  }
> +
> +static int __alloc_contig_pages(unsigned long start_pfn,
> +				unsigned long nr_pages, gfp_t gfp_mask)
> +{
> +	unsigned long end_pfn = start_pfn + nr_pages;
> +
> +	return alloc_contig_range(start_pfn, end_pfn, MIGRATE_MOVABLE,
> +				  gfp_mask);
> +}
> +
> +static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
> +				   unsigned long nr_pages)
> +{
> +	unsigned long i, end_pfn = start_pfn + nr_pages;
> +	struct page *page;
> +
> +	for (i = start_pfn; i < end_pfn; i++) {
> +		page = pfn_to_online_page(i);
> +		if (!page)
> +			return false;
> +
> +		if (page_zone(page) != z)
> +			return false;
> +
> +		if (PageReserved(page))
> +			return false;
> +
> +		if (page_count(page) > 0)
> +			return false;
> +
> +		if (PageHuge(page))
> +			return false;
> +	}
> +	return true;
> +}
> +
> +static bool zone_spans_last_pfn(const struct zone *zone,
> +				unsigned long start_pfn, unsigned long nr_pages)
> +{
> +	unsigned long last_pfn = start_pfn + nr_pages - 1;
> +
> +	return zone_spans_pfn(zone, last_pfn);
> +}
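
A side note on the natural-alignment guarantee documented below: it falls
directly out of the arithmetic in the scan loop, since candidate pfns are
generated with ALIGN() and then advanced in nr_pages steps. A rough worked
example, assuming 4K pages (my numbers, not from the patch):

    /*
     * With nr_pages = 512 (2MB worth of 4K pages), the scan only probes
     * pfns that ALIGN() has rounded up to a multiple of 512, stepping in
     * 512-pfn increments:
     *
     *     pfn = ALIGN(zone->zone_start_pfn, 512);
     *     ...
     *     pfn += 512;
     *
     * Every candidate start address is therefore a multiple of
     * 512 << PAGE_SHIFT == 2MB. A non-power-of-two nr_pages is stepped
     * the same way, but a multiple of nr_pages pages is not a
     * power-of-two address boundary.
     */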
> +
> +/**
> + * alloc_contig_pages() -- tries to find and allocate contiguous range of pages
> + * @nr_pages:	Number of contiguous pages to allocate
> + * @gfp_mask:	GFP mask to limit search and used during compaction
> + * @nid:	Target node
> + * @nodemask:	Mask for other possible nodes
> + *
> + * This routine is a wrapper around alloc_contig_range(). It scans over zones
> + * on an applicable zonelist to find a contiguous pfn range which can then be
> + * tried for allocation with alloc_contig_range(). This routine is intended
> + * for allocation requests which can not be fulfilled with the buddy allocator.
> + *
> + * The allocated memory is always aligned to a page boundary. If nr_pages is a
> + * power of two then the alignment is guaranteed to be to the given nr_pages
> + * (e.g. 1GB request would be aligned to 1GB).
> + *
> + * Allocated pages can be freed with free_contig_range() or by manually calling
> + * __free_page() on each allocated page.
> + *
> + * Return: pointer to contiguous pages on success, or NULL if not successful.
> + */
> +struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
> +				int nid, nodemask_t *nodemask)
> +{
> +	unsigned long ret, pfn, flags;
> +	struct zonelist *zonelist;
> +	struct zone *zone;
> +	struct zoneref *z;
> +
> +	zonelist = node_zonelist(nid, gfp_mask);
> +	for_each_zone_zonelist_nodemask(zone, z, zonelist,
> +					gfp_zone(gfp_mask), nodemask) {
> +		spin_lock_irqsave(&zone->lock, flags);
> +
> +		pfn = ALIGN(zone->zone_start_pfn, nr_pages);
> +		while (zone_spans_last_pfn(zone, pfn, nr_pages)) {
> +			if (pfn_range_valid_contig(zone, pfn, nr_pages)) {
> +				/*
> +				 * We release the zone lock here because
> +				 * alloc_contig_range() will also lock the zone
> +				 * at some point. If there's an allocation
> +				 * spinning on this lock, it may win the race
> +				 * and cause alloc_contig_range() to fail...
> +				 */
> +				spin_unlock_irqrestore(&zone->lock, flags);
> +				ret = __alloc_contig_pages(pfn, nr_pages,
> +							   gfp_mask);
> +				if (!ret)
> +					return pfn_to_page(pfn);
> +				spin_lock_irqsave(&zone->lock, flags);
> +			}
> +			pfn += nr_pages;
> +		}
> +		spin_unlock_irqrestore(&zone->lock, flags);
> +	}
> +	return NULL;
> +}
>  #endif /* CONFIG_CONTIG_ALLOC */
>
>  void free_contig_range(unsigned long pfn, unsigned int nr_pages)
>
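
And for completeness, since the kernel-doc above mentions both freeing
options: given the page pointer returned by alloc_contig_pages(), the two
paths would look something like this (sketch only, using just the
signatures from this patch; 'page' and 'nr_pages' are the values passed
to/returned from the allocation):

    /* Option 1: return the whole range in one call. */
    free_contig_range(page_to_pfn(page), nr_pages);

    /* Option 2: release each page individually. */
    unsigned long i;

    for (i = 0; i < nr_pages; i++)
            __free_page(page + i);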