From: Muchun Song
Date: Thu, 11 Mar 2021 12:26:32 +0800
Subject: Re: [External] Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
To: Michal Hocko
Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, Peter Zijlstra, Alexander Viro, Andrew Morton, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, Randy Dunlap, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, Mina Almasry, David Rientjes, Matthew Wilcox, Oscar Salvador, "Song Bao Hua (Barry Song)", David Hildenbrand, HORIGUCHI NAOYA(堀口 直也), Joao Martins, Xiongchun duan, linux-doc@vger.kernel.org, LKML, Linux Memory Management List, linux-fsdevel, Chen Huang, Bodeddula Balasubramaniam
References: <20210308102807.59745-1-songmuchun@bytedance.com> <20210308102807.59745-5-songmuchun@bytedance.com>

On Wed, Mar 10, 2021 at 11:19 PM Michal Hocko wrote:
>
> On Mon 08-03-21 18:28:02, Muchun Song wrote:
> [...]
> > -static void update_and_free_page(struct hstate *h, struct page *page)
> > +static int update_and_free_page(struct hstate *h, struct page *page)
> > +	__releases(&hugetlb_lock) __acquires(&hugetlb_lock)
> >  {
> >  	int i;
> >  	struct page *subpage = page;
> > +	int nid = page_to_nid(page);
> >
> >  	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> > -		return;
> > +		return 0;
> >
> >  	h->nr_huge_pages--;
> > -	h->nr_huge_pages_node[page_to_nid(page)]--;
> > +	h->nr_huge_pages_node[nid]--;
> > +	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
> > +	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
> >
> > +	set_page_refcounted(page);
> > +	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> > +
> > +	/*
> > +	 * If the vmemmap pages associated with the HugeTLB page can be
> > +	 * optimized or the page is gigantic, we might block in
> > +	 * alloc_huge_page_vmemmap() or free_gigantic_page(). In both
> > +	 * cases, drop the hugetlb_lock.
> > +	 */
> > +	if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
> > +		spin_unlock(&hugetlb_lock);
> > +
> > +	if (alloc_huge_page_vmemmap(h, page)) {
> > +		spin_lock(&hugetlb_lock);
> > +		INIT_LIST_HEAD(&page->lru);
> > +		set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> > +		h->nr_huge_pages++;
> > +		h->nr_huge_pages_node[nid]++;
> > +
> > +		/*
> > +		 * If we cannot allocate vmemmap pages, just refuse to free the
> > +		 * page and put the page back on the hugetlb free list and treat
> > +		 * as a surplus page.
> > +		 */
> > +		h->surplus_huge_pages++;
> > +		h->surplus_huge_pages_node[nid]++;
> > +
> > +		/*
> > +		 * The refcount can possibly be increased by memory-failure or
> > +		 * soft_offline handlers.
>
> This comment could be more helpful. I believe you want to say this:
>
> /*
>  * HWpoisoning code can increment the reference
>  * count here. If there is a race then bail out;
>  * the holder of the additional reference count
>  * will free up the page with put_page.
>  */

Right. I will reuse this. Thanks.

> > +		 */
> > +		if (likely(put_page_testzero(page))) {
> > +			arch_clear_hugepage_flags(page);
> > +			enqueue_huge_page(h, page);
> > +		}
> > +
> > +		return -ENOMEM;
> > +	}
> > +
> >  	for (i = 0; i < pages_per_huge_page(h);
> >  	     i++, subpage = mem_map_next(subpage, page, i)) {
> >  		subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
> [...]
> > @@ -1447,7 +1486,7 @@ void free_huge_page(struct page *page)
> >  	/*
> >  	 * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
> >  	 */
> > -	if (!in_task()) {
> > +	if (in_atomic()) {
>
> As I've said elsewhere, in_atomic doesn't work for CONFIG_PREEMPT_COUNT=n.
> We need this change for other reasons and so it would be better to pull
> it out into a separate patch which also makes HUGETLB depend on
> PREEMPT_COUNT.
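For reference, the CONFIG_PREEMPT_COUNT=n point can be made concrete
with a tiny userspace model. This only mimics the behavior of the
<linux/preempt.h> macros and of spinlock preempt accounting; it is a
sketch for illustration, not kernel code:

#include <stdio.h>

/* Model of the kernel's per-task preempt count. */
static int preempt_count_model;

/* CONFIG_PREEMPT_COUNT=y: spin_lock() ends up in preempt_disable(),
 * which increments the preempt count. */
static void spin_lock_counted(void)   { preempt_count_model++; }
static void spin_unlock_counted(void) { preempt_count_model--; }

/* CONFIG_PREEMPT_COUNT=n: preempt_disable() is only a compiler
 * barrier, so taking a spinlock leaves the count untouched. */
static void spin_lock_uncounted(void)   { }
static void spin_unlock_uncounted(void) { }

/* in_atomic() is essentially (preempt_count() != 0). */
static int in_atomic_model(void)
{
	return preempt_count_model != 0;
}

int main(void)
{
	spin_lock_counted();
	/* Prints 1: the atomic context is visible. */
	printf("PREEMPT_COUNT=y, lock held: in_atomic() == %d\n",
	       in_atomic_model());
	spin_unlock_counted();

	spin_lock_uncounted();
	/* Prints 0: free_huge_page() would not defer the free and
	 * could deadlock on hugetlb_lock. */
	printf("PREEMPT_COUNT=n, lock held: in_atomic() == %d\n",
	       in_atomic_model());
	spin_unlock_uncounted();
	return 0;
}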
>
> [...]
> > @@ -1771,8 +1813,12 @@ int dissolve_free_huge_page(struct page *page)
> >  		h->free_huge_pages--;
> >  		h->free_huge_pages_node[nid]--;
> >  		h->max_huge_pages--;
> > -		update_and_free_page(h, head);
> > -		rc = 0;
> > +		rc = update_and_free_page(h, head);
> > +		if (rc) {
> > +			h->surplus_huge_pages--;
> > +			h->surplus_huge_pages_node[nid]--;
> > +			h->max_huge_pages++;
>
> This is quite ugly and confusing. update_and_free_page is careful to do
> the proper counters accounting and now you just override it partially.
> Why cannot we rely on update_and_free_page to do the right thing?

The dissolving path is special here. When update_and_free_page fails, it
increments surplus_huge_pages, and surplus pages are by definition the
pages in excess of max_huge_pages. Because dissolve_free_huge_page
re-increments max_huge_pages on failure, the kept page is no longer in
excess of the pool target, so we must also decrement (undo) the
additions to surplus_huge_pages and surplus_huge_pages_node[nid]. A
standalone sketch of this counter arithmetic is appended below.

>
> --
> Michal Hocko
> SUSE Labs
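To make the accounting above concrete, here is a small userspace model
of the counters involved. The field names mirror struct hstate, but the
simplified functions are this sketch's own and deliberately ignore the
per-node and free-list counters; it illustrates the arithmetic only,
not the kernel implementation:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Minimal model of the hstate counters involved here. */
struct hstate_model {
	long nr_huge_pages;
	long max_huge_pages;
	long surplus_huge_pages;
};

static int update_and_free_page_model(struct hstate_model *h,
				      bool vmemmap_fails)
{
	h->nr_huge_pages--;		/* page leaves the pool */
	if (vmemmap_fails) {
		h->nr_huge_pages++;	/* keep the page after all... */
		h->surplus_huge_pages++;/* ...counted above max_huge_pages */
		return -ENOMEM;
	}
	return 0;
}

static void dissolve_free_huge_page_model(struct hstate_model *h,
					  bool vmemmap_fails)
{
	h->max_huge_pages--;		/* dissolving shrinks the pool target */
	if (update_and_free_page_model(h, vmemmap_fails)) {
		h->max_huge_pages++;	/* restore the target, so the kept */
		h->surplus_huge_pages--;/* page is no longer in excess of it */
	}
}

int main(void)
{
	struct hstate_model h = { .nr_huge_pages = 4, .max_huge_pages = 4 };

	dissolve_free_huge_page_model(&h, true);
	/* Invariant restored: nr=4 max=4 surplus=0. */
	printf("nr=%ld max=%ld surplus=%ld\n",
	       h.nr_huge_pages, h.max_huge_pages, h.surplus_huge_pages);
	return 0;
}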