Date: Sun, 24 Jan 2021 16:05:49 -0800 (PST)
From: David Rientjes
To: Muchun Song
cc: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
    dave.hansen@linux.intel.com, luto@kernel.org, Peter Zijlstra,
    viro@zeniv.linux.org.uk, Andrew Morton, paulmck@kernel.org,
    mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
    rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
    jroedel@suse.de, almasrymina@google.com, Matthew Wilcox,
    osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com,
    david@redhat.com, naoya.horiguchi@nec.com, duanxiongchun@bytedance.com,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH v13 05/12] mm: hugetlb: allocate the vmemmap pages
 associated with each HugeTLB page
In-Reply-To: <20210117151053.24600-6-songmuchun@bytedance.com>
Message-ID: <6a68fde-583d-b8bb-a2c8-fbe32e03b@google.com>
References: <20210117151053.24600-1-songmuchun@bytedance.com>
 <20210117151053.24600-6-songmuchun@bytedance.com>

On Sun, 17 Jan 2021, Muchun Song wrote:

> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index ce4be1fa93c2..3b146d5949f3 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -29,6 +29,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #include 
>  #include 
> @@ -40,7 +41,8 @@
>   * @remap_pte: called for each non-empty PTE (lowest-level) entry.
>   * @reuse_page: the page which is reused for the tail vmemmap pages.
>   * @reuse_addr: the virtual address of the @reuse_page page.
> - * @vmemmap_pages: the list head of the vmemmap pages that can be freed.
> + * @vmemmap_pages: the list head of the vmemmap pages that can be freed
> + *                 or is mapped from.
>   */
>  struct vmemmap_remap_walk {
>  	void (*remap_pte)(pte_t *pte, unsigned long addr,
> @@ -50,6 +52,10 @@ struct vmemmap_remap_walk {
>  	struct list_head *vmemmap_pages;
>  };
>  
> +/* The gfp mask of allocating vmemmap page */
> +#define GFP_VMEMMAP_PAGE	\
> +	(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN | __GFP_THISNODE)
> +

This is unnecessary; just use the gfp mask directly in the allocator.

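Something like this at the alloc_pages_node() call later in the patch
(untested sketch, only to illustrate the suggestion):

		page = alloc_pages_node(nid,
					GFP_KERNEL | __GFP_RETRY_MAYFAIL |
					__GFP_NOWARN | __GFP_THISNODE, 0);
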
>  static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
>  			       unsigned long end,
>  			       struct vmemmap_remap_walk *walk)
> @@ -228,6 +234,75 @@ void vmemmap_remap_free(unsigned long start, unsigned long end,
>  	free_vmemmap_page_list(&vmemmap_pages);
>  }
>  
> +static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
> +				struct vmemmap_remap_walk *walk)
> +{
> +	pgprot_t pgprot = PAGE_KERNEL;
> +	struct page *page;
> +	void *to;
> +
> +	BUG_ON(pte_page(*pte) != walk->reuse_page);
> +
> +	page = list_first_entry(walk->vmemmap_pages, struct page, lru);
> +	list_del(&page->lru);
> +	to = page_to_virt(page);
> +	copy_page(to, (void *)walk->reuse_addr);
> +
> +	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
> +}
> +
> +static void alloc_vmemmap_page_list(struct list_head *list,
> +				    unsigned long start, unsigned long end)
> +{
> +	unsigned long addr;
> +
> +	for (addr = start; addr < end; addr += PAGE_SIZE) {
> +		struct page *page;
> +		int nid = page_to_nid((const void *)addr);
> +
> +retry:
> +		page = alloc_pages_node(nid, GFP_VMEMMAP_PAGE, 0);
> +		if (unlikely(!page)) {
> +			msleep(100);
> +			/*
> +			 * We should retry infinitely, because we cannot
> +			 * handle allocation failures. Once we allocate
> +			 * vmemmap pages successfully, then we can free
> +			 * a HugeTLB page.
> +			 */
> +			goto retry;

Ugh, I don't think this will work: there's no guarantee that we'll ever
succeed, and now we can't free a 2MB hugepage because we cannot allocate a
4KB page.  We absolutely have to ensure we make forward progress here.

We're going to be freeing the hugetlb page after this succeeds; can we not
use part of the hugetlb page that we're freeing for this memory instead?

> +		}
> +		list_add_tail(&page->lru, list);
> +	}
> +}
> +
> +/**
> + * vmemmap_remap_alloc - remap the vmemmap virtual address range [@start, end)
> + *			 to the page which is from the @vmemmap_pages
> + *			 respectively.
> + * @start:	start address of the vmemmap virtual address range.
> + * @end:	end address of the vmemmap virtual address range.
> + * @reuse:	reuse address.
> + */
> +void vmemmap_remap_alloc(unsigned long start, unsigned long end,
> +			 unsigned long reuse)
> +{
> +	LIST_HEAD(vmemmap_pages);
> +	struct vmemmap_remap_walk walk = {
> +		.remap_pte	= vmemmap_restore_pte,
> +		.reuse_addr	= reuse,
> +		.vmemmap_pages	= &vmemmap_pages,
> +	};
> +
> +	might_sleep();
> +
> +	/* See the comment in the vmemmap_remap_free(). */
> +	BUG_ON(start - reuse != PAGE_SIZE);
> +
> +	alloc_vmemmap_page_list(&vmemmap_pages, start, end);
> +	vmemmap_remap_range(reuse, end, &walk);
> +}
> +
>  /*
>   * Allocate a block of memory to be used to back the virtual memory map
>   * or to back the page tables that are used to create the mapping.
> -- 
> 2.11.0
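
Regarding the forward-progress concern above: one possible direction,
sketched here purely for illustration (this is not the patch's code, and
the caller-side error handling it implies is omitted), is to let the
allocation fail and return an error so the caller can keep the page as a
HugeTLB page instead of looping forever:

/*
 * Untested sketch: make the allocation fallible so the caller can back
 * out when memory is this tight, rather than retrying indefinitely in
 * alloc_vmemmap_page_list().
 */
static int alloc_vmemmap_page_list(struct list_head *list,
				   unsigned long start, unsigned long end)
{
	unsigned long addr;
	struct page *page, *next;

	for (addr = start; addr < end; addr += PAGE_SIZE) {
		int nid = page_to_nid((const void *)addr);

		page = alloc_pages_node(nid, GFP_KERNEL |
					__GFP_RETRY_MAYFAIL | __GFP_NOWARN |
					__GFP_THISNODE, 0);
		if (!page)
			goto out_free;
		list_add_tail(&page->lru, list);
	}
	return 0;

out_free:
	/* Give back anything that was already allocated. */
	list_for_each_entry_safe(page, next, list, lru) {
		list_del(&page->lru);
		__free_page(page);
	}
	return -ENOMEM;
}

Reusing part of the hugetlb page itself, as suggested above, would of
course be better still since it cannot fail at all; the sketch only shows
that the unbounded retry is avoidable.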