References: <20210308102807.59745-1-songmuchun@bytedance.com>
 <20210308102807.59745-5-songmuchun@bytedance.com>
 <20210310142057.GA12777@linux>
In-Reply-To: <20210310142057.GA12777@linux>
From: Muchun Song
Date: Thu, 11 Mar 2021 12:13:58 +0800
Subject: Re: [External] Re: [PATCH v18 4/9] mm:
 hugetlb: alloc the vmemmap pages associated with each HugeTLB page
To: Oscar Salvador
Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar,
 bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com,
 luto@kernel.org, Peter Zijlstra, Alexander Viro, Andrew Morton,
 paulmck@kernel.org, mchehab+huawei@kernel.org,
 pawan.kumar.gupta@linux.intel.com, Randy Dunlap, oneukum@suse.com,
 anshuman.khandual@arm.com, jroedel@suse.de, Mina Almasry,
 David Rientjes, Matthew Wilcox, Michal Hocko,
 "Song Bao Hua (Barry Song)", David Hildenbrand,
 HORIGUCHI NAOYA(堀口 直也), Joao Martins, Xiongchun duan,
 linux-doc@vger.kernel.org, LKML, Linux Memory Management List,
 linux-fsdevel, Chen Huang, Bodeddula Balasubramaniam

On Wed, Mar 10, 2021 at 10:21 PM Oscar Salvador wrote:
>
> On Mon, Mar 08, 2021 at 06:28:02PM +0800, Muchun Song wrote:
> > When we free a HugeTLB page to the buddy allocator, we need to allocate
> > the vmemmap pages associated with it. However, we may not be able to
> > allocate the vmemmap pages when the system is under memory pressure. In
> > this case, we just refuse to free the HugeTLB page. This changes behavior
> > in some corner cases as listed below:
> >
> > 1) Failing to free a huge page triggered by the user (decrease nr_pages).
> >
> >    User needs to try again later.
> >
> > 2) Failing to free a surplus huge page when freed by the application.
> >
> >    Try again later when freeing a huge page next time.
> >
> > 3) Failing to dissolve a free huge page on ZONE_MOVABLE via
> >    offline_pages().
> >
> >    This can happen when we have plenty of ZONE_MOVABLE memory, but
> >    not enough kernel memory to allocate vmemmap pages. We may even
> >    be able to migrate huge page contents, but will not be able to
> >    dissolve the source huge page. This will prevent an offline
> >    operation and is unfortunate as memory offlining is expected to
> >    succeed on movable zones. Users that depend on memory hotplug
> >    to succeed for movable zones should carefully consider whether the
> >    memory savings gained from this feature are worth the risk of
> >    possibly not being able to offline memory in certain situations.
>
> This is nice to have here, but a normal user won't dig into the kernel to
> figure this out, so my question is: do we have this documented somewhere
> under Documentation/? If not, could we document it there? It is nice to
> warn about these things where sysadmins can find them.

Makes sense. I will do this.

> > 4) Failing to dissolve a huge page on CMA/ZONE_MOVABLE via
> >    alloc_contig_range() - once we have that handling in place. Mainly
> >    affects CMA and virtio-mem.
> >
> >    Similar to 3). virtio-mem will handle migration errors gracefully.
> >    CMA might be able to fall back on other free areas within the CMA
> >    region.
> >
> > Vmemmap pages are allocated from the page freeing context. In order for
> > those allocations to be non-disruptive (e.g. not trigger the OOM killer),
> > __GFP_NORETRY is used. hugetlb_lock is dropped for the allocation
> > because a non-sleeping allocation would be too fragile and could fail
> > too easily under memory pressure. GFP_ATOMIC or other modes of accessing
> > memory reserves are not used because we want to prevent consuming
> > reserves under heavy hugetlb freeing.
> >
> > Signed-off-by: Muchun Song
> > Tested-by: Chen Huang
> > Tested-by: Bodeddula Balasubramaniam
>
> Sorry for jumping in late.
> It looks good to me:
>
> Reviewed-by: Oscar Salvador

Thanks.

> Minor request above and below:
>
> > ---
> >  Documentation/admin-guide/mm/hugetlbpage.rst |  8 +++
> >  include/linux/mm.h                           |  2 +
> >  mm/hugetlb.c                                 | 92 +++++++++++++++++++++-------
> >  mm/hugetlb_vmemmap.c                         | 32 ++++++----
> >  mm/hugetlb_vmemmap.h                         | 23 +++++++
> >  mm/sparse-vmemmap.c                          | 75 ++++++++++++++++++++++-
> >  6 files changed, 197 insertions(+), 35 deletions(-)
>
> [...]
>
> Could we place a brief comment about what we expect to return here?

OK. Will do.

> > -static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
> > +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
> >  {
> > -	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
> > +	unsigned long vmemmap_addr = (unsigned long)head;
> > +	unsigned long vmemmap_end, vmemmap_reuse;
> > +
> > +	if (!free_vmemmap_pages_per_hpage(h))
> > +		return 0;
> > +
> > +	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
> > +	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
> > +	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
> > +	/*
> > +	 * The pages which the vmemmap virtual address range [@vmemmap_addr,
> > +	 * @vmemmap_end) are mapped to are freed to the buddy allocator, and
> > +	 * the range is mapped to the page which @vmemmap_reuse is mapped to.
> > +	 * When a HugeTLB page is freed to the buddy allocator, previously
> > +	 * discarded vmemmap pages must be allocated and remapped.
> > +	 */
> > +	return vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
> > +				   GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
> >  }

--
Oscar Salvador
SUSE L3