From: Muchun Song
Date: Mon, 15 Feb 2021 23:36:49 +0800
Subject: Re: [External] Re: [PATCH v15 4/8] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
To: Michal Hocko
Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, Peter Zijlstra, viro@zeniv.linux.org.uk, Andrew Morton, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, Randy Dunlap, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, Mina Almasry, David Rientjes, Matthew Wilcox, Oscar Salvador, "Song Bao Hua (Barry Song)", David Hildenbrand, HORIGUCHI NAOYA(堀口 直也), Joao Martins, Xiongchun duan, linux-doc@vger.kernel.org, LKML, Linux Memory Management List, linux-fsdevel
References: <20210208085013.89436-1-songmuchun@bytedance.com> <20210208085013.89436-5-songmuchun@bytedance.com>

On Mon, Feb 15, 2021 at 9:19 PM Michal Hocko wrote:
>
> On Mon 15-02-21 20:44:57, Muchun Song wrote:
> > On Mon, Feb 15, 2021 at 8:18 PM Michal Hocko wrote:
> > >
> > > On Mon 15-02-21 20:00:07, Muchun Song wrote:
> > > > On Mon, Feb 15, 2021 at 7:51 PM Muchun Song wrote:
> > > > >
> > > > > On Mon, Feb 15, 2021 at 6:33 PM Michal Hocko wrote:
> > > > > >
> > > > > > On Mon 15-02-21 18:05:06, Muchun Song wrote:
> > > > > > > On Fri, Feb 12, 2021 at 11:32 PM Michal Hocko wrote:
> > > > > > [...]
> > > > > > > > > +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
> > > > > > > > > +{
> > > > > > > > > +	int ret;
> > > > > > > > > +	unsigned long vmemmap_addr = (unsigned long)head;
> > > > > > > > > +	unsigned long vmemmap_end, vmemmap_reuse;
> > > > > > > > > +
> > > > > > > > > +	if (!free_vmemmap_pages_per_hpage(h))
> > > > > > > > > +		return 0;
> > > > > > > > > +
> > > > > > > > > +	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
> > > > > > > > > +	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
> > > > > > > > > +	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
> > > > > > > > > +
> > > > > > > > > +	/*
> > > > > > > > > +	 * The pages which the vmemmap virtual address range [@vmemmap_addr,
> > > > > > > > > +	 * @vmemmap_end) are mapped to are freed to the buddy allocator, and
> > > > > > > > > +	 * the range is mapped to the page which @vmemmap_reuse is mapped to.
> > > > > > > > > +	 * When a HugeTLB page is freed to the buddy allocator, previously
> > > > > > > > > +	 * discarded vmemmap pages must be allocated and remapped.
> > > > > > > > > +	 */
> > > > > > > > > +	ret = vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
> > > > > > > > > +				  GFP_ATOMIC | __GFP_NOWARN | __GFP_THISNODE);
> > > > > > > >
> > > > > > > > I do not think that this is a good allocation mode. GFP_ATOMIC is a
> > > > > > > > non-sleeping allocation, and a medium memory pressure might cause it
> > > > > > > > to fail prematurely. I do not think this is really an atomic context
> > > > > > > > which couldn't afford memory reclaim. I also do not think we want to
> > > > > > > > grant
> > > > > > >
> > > > > > > Because alloc_huge_page_vmemmap is called under hugetlb_lock
> > > > > > > now, using GFP_ATOMIC indeed makes the code simpler.
> > > > > >
> > > > > > You can have a preallocated list of pages prior to taking the lock.
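(If I understand this suggestion correctly, it would look roughly like
the sketch below. prealloc_vmemmap_pages() is a hypothetical helper I
made up to illustrate the idea; it is not code from this series.)

	/*
	 * Sketch: allocate the vmemmap pages with a sleepable allocation
	 * before hugetlb_lock is taken, and hand the list to the remap
	 * code under the lock.
	 */
	static int prealloc_vmemmap_pages(struct hstate *h,
					  struct list_head *list)
	{
		unsigned long i, nr_pages = free_vmemmap_pages_per_hpage(h);

		for (i = 0; i < nr_pages; i++) {
			/* Sleepable context, so GFP_KERNEL is allowed. */
			struct page *page = alloc_page(GFP_KERNEL | __GFP_NOWARN);

			if (!page)
				return -ENOMEM;	/* caller frees the partial list */
			list_add(&page->lru, list);
		}

		return 0;
	}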
> > > > >
> > > > > A discussion about this can be found here:
> > > > >
> > > > > https://patchwork.kernel.org/project/linux-mm/patch/20210117151053.24600-5-songmuchun@bytedance.com/
> > > > >
> > > > > > Moreover, do we want to manipulate vmemmaps from under a spinlock
> > > > > > in general? I have to say I missed that detail when reviewing. Need
> > > > > > to think more.
> > > > > >
> > > > > > > From the kernel documentation, I learned that __GFP_NOMEMALLOC
> > > > > > > can be used to explicitly forbid access to emergency reserves. So
> > > > > > > if we do not want to use the reserve memory, how about replacing
> > > > > > > it with
> > > > > > >
> > > > > > > GFP_ATOMIC | __GFP_NOMEMALLOC | __GFP_NOWARN | __GFP_THISNODE
> > > > > >
> > > > > > The whole point of GFP_ATOMIC is to grant access to memory reserves,
> > > > > > so the above is quite dubious. If you do not want access to memory
> > > > > > reserves
> > > > >
> > > > > Look at the code of gfp_to_alloc_flags():
> > > > >
> > > > > static inline unsigned int gfp_to_alloc_flags(gfp_t gfp_mask)
> > > > > {
> > > > >         [...]
> > > > >         if (gfp_mask & __GFP_ATOMIC) {
> > > > >                 /*
> > > > >                  * Not worth trying to allocate harder for __GFP_NOMEMALLOC even
> > > > >                  * if it can't schedule.
> > > > >                  */
> > > > >                 if (!(gfp_mask & __GFP_NOMEMALLOC))
> > > > >                         alloc_flags |= ALLOC_HARDER;
> > > > >         [...]
> > > > > }
> > > > >
> > > > > It seems to allow this combination (GFP_ATOMIC | __GFP_NOMEMALLOC).
> > >
> > > Please read my response again more carefully. I am not claiming that
> > > the combination is not allowed. I have said it doesn't make any sense
> > > in this context.
> >
> > I see you are worried that using GFP_ATOMIC would draw on memory
> > reserves without limit, so I think that __GFP_NOMEMALLOC may be
> > suitable for us. Sorry, I may not have understood your point. What did
> > I miss?
>
> OK, let me try to explain again. GFP_ATOMIC is not only a non-sleeping
> allocation request. It also grants access to memory reserves. The
> latter is a bit more involved because there are more layers of memory
> reserves to access, but that is not really important. A non-sleeping
> semantic can be achieved with GFP_NOWAIT, which will not grant access
> to reserves unless explicitly stated - e.g. by __GFP_HIGH or
> __GFP_ATOMIC.
> Is that more clear?
>
> Now again, why I do not think access to memory reserves is suitable:
> hugetlb pages can be released in large batches, and that might cause a
> peak depletion of memory reserves which are normally used by other
> consumers as well. Other GFP_ATOMIC users might see allocation
> failures. Those shouldn't be really fatal, as nobody should be relying
> on those, and a failure usually means a handover to a different, less
> constrained, context. So this concern is more about well-behaved
> hugetlb behavior than about correctness.
> Is that more clear?

OK. It is very clear now. Thank you very much for your patient
explanations.
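(To make sure I have the flag semantics right, in code form - this is
just my reading of include/linux/gfp.h, not something from this
series:)

	/* Non-sleeping allocation with no access to memory reserves. */
	gfp_t nowait_mask = GFP_NOWAIT | __GFP_NOWARN;

	/*
	 * Non-sleeping allocation that is explicitly granted access to
	 * reserves; with __GFP_HIGH and __GFP_ATOMIC added, GFP_NOWAIT
	 * becomes equivalent to GFP_ATOMIC.
	 */
	gfp_t atomic_mask = GFP_NOWAIT | __GFP_HIGH | __GFP_ATOMIC;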
>
> There shouldn't be any real reason why the memory allocation for
> vmemmaps, or handling vmemmap in general, has to be done from within
> the hugetlb lock and therefore require a non-sleeping semantic. All
> that can be deferred to a more relaxed context. If you want to make a

Yeah, you are right. We can move the freeing of the HugeTLB page to a
workqueue, just as I did in a previous version (before v13) of this
series. I will pick those patches up again.

> GFP_NOWAIT optimistic attempt in the direct free path then no problem,
> but you have to expect failures under memory pressure. If you want a
> more robust allocation request then you have to go outside of the spin
> lock and use GFP_KERNEL | __GFP_NORETRY or GFP_KERNEL |
> __GFP_RETRY_MAYFAIL, depending on how hard you want to try.
> __GFP_THISNODE makes a slight difference here, but it is something I
> would recommend not depending on.
> Is that more clear?

OK. I will use GFP_KERNEL instead of GFP_ATOMIC. Thanks for your
suggestions.

> --
> Michal Hocko
> SUSE Labs
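P.S. For reference, the workqueue approach mentioned above would look
roughly like the sketch below. This is only an illustration of the
idea; names such as __free_huge_page() and the exact llist bookkeeping
may differ from the actual pre-v13 patches.

	/* Sketch: defer the real freeing so it runs outside hugetlb_lock. */
	static LLIST_HEAD(hpage_freelist);

	static void free_hpage_workfn(struct work_struct *work)
	{
		struct llist_node *node = llist_del_all(&hpage_freelist);

		while (node) {
			struct page *page;

			/* The llist node is stashed in page->mapping. */
			page = container_of((struct address_space **)node,
					    struct page, mapping);
			node = node->next;
			page->mapping = NULL;

			/* Sleepable context: GFP_KERNEL allocations are fine here. */
			__free_huge_page(page);
		}
	}
	static DECLARE_WORK(free_hpage_work, free_hpage_workfn);

	static void defer_free_huge_page(struct page *page)
	{
		/* Reuse page->mapping as the llist_node linkage. */
		if (llist_add((struct llist_node *)&page->mapping,
			      &hpage_freelist))
			schedule_work(&free_hpage_work);
	}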