From: Muchun Song <songmuchun@bytedance.com>
Date: Tue, 5 Jul 2022 16:57:35 +0800
Subject: Re: [PATCH] mm,hwpoison,hugetlb: defer dissolve hwpoison hugepage when allocating vmemmap failed
To: luofei <luofei@unicloud.com>
Cc: Mike Kravetz, Andrew Morton, Linux Memory Management List <linux-mm@kvack.org>, LKML
In-Reply-To: <20220705062953.914256-1-luofei@unicloud.com>
References: <20220705062953.914256-1-luofei@unicloud.com>
On Tue, Jul 5, 2022 at 2:32 PM luofei <luofei@unicloud.com> wrote:
>
> When dissolving a hwpoison hugepage, if the allocation of vmemmap pages
> fails, the faulty page should not be put back on the hugepage free
> list, since that would allow the faulty page to be reused. It's better to

Hi luofei,

How did this happen? If a hugepage is poisoned, the PageHWPoison flag
is set on its head page, and dequeue_huge_page_node_exact() filters
such pages out, so a hwpoisoned hugepage cannot be reused from the
free list. Hopefully I am not missing something important.

> postpone re-executing the dissolve operation.
>
> Meanwhile, when the page fault handler calls
> dissolve_free_huge_page() to dissolve the faulty page, the basic page
> fault processing (such as page migration and unmapping) has actually
> completed. There is no need to return -ENOMEM to the upper layer for a
> temporary vmemmap page allocation failure, since that would cause the
> caller to make a wrong judgment. So just defer the dissolve and return
> success.
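For reference, here is roughly the check I mean, abridged from
dequeue_huge_page_node_exact() in mm/hugetlb.c (the memalloc-pin
handling is elided):

static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
{
        struct page *page;

        lockdep_assert_held(&hugetlb_lock);
        list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
                /* Never hand out a poisoned hugepage again. */
                if (PageHWPoison(page))
                        continue;

                list_move(&page->lru, &h->hugepage_activelist);
                set_page_refcounted(page);
                ClearHPageFreed(page);
                h->free_huge_pages--;
                h->free_huge_pages_node[nid]--;
                return page;
        }

        return NULL;
}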
>
> Signed-off-by: luofei <luofei@unicloud.com>
> ---
>  mm/hugetlb.c | 34 +++++++++++++++++++++-------------
>  1 file changed, 21 insertions(+), 13 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index ca081078e814..db25458eb0a5 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -90,6 +90,9 @@ struct mutex *hugetlb_fault_mutex_table ____cacheline_aligned_in_smp;
>
>  /* Forward declaration */
>  static int hugetlb_acct_memory(struct hstate *h, long delta);
> +static LLIST_HEAD(hpage_freelist);
> +static void free_hpage_workfn(struct work_struct *work);
> +static DECLARE_DELAYED_WORK(free_hpage_work, free_hpage_workfn);
>
>  static inline bool subpool_is_free(struct hugepage_subpool *spool)
>  {
> @@ -1535,15 +1538,21 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
>         if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
>                 return;
>
> -       if (hugetlb_vmemmap_restore(h, page))
> +       if (hugetlb_vmemmap_restore(h, page)) {
> +               if (unlikely(PageHWPoison(page))) {
> +                       llist_add((struct llist_node *)&page->mapping, &hpage_freelist);
> +                       schedule_delayed_work(&free_hpage_work, HZ);
> +                       goto out;
> +               }
>                 goto fail;
> +       }
>
>         /*
>          * Move PageHWPoison flag from head page to the raw error pages,
>          * which makes any healthy subpages reusable.
>          */
>         if (unlikely(PageHWPoison(page) && hugetlb_clear_page_hwpoison(page)))
> -               goto fail;
> +               goto out;
>
>         for (i = 0; i < pages_per_huge_page(h);
>              i++, subpage = mem_map_next(subpage, page, i)) {
> @@ -1574,6 +1583,8 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
>          */
>         add_hugetlb_page(h, page, true);
>         spin_unlock_irq(&hugetlb_lock);
> +out:
> +       return;
>  }
>
>  /*
> @@ -1587,8 +1598,6 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
>   * to be cleared in free_hpage_workfn() anyway, it is reused as the llist_node
>   * structure of a lockless linked list of huge pages to be freed.
>   */
> -static LLIST_HEAD(hpage_freelist);
> -
>  static void free_hpage_workfn(struct work_struct *work)
>  {
>         struct llist_node *node;
> @@ -1616,12 +1625,11 @@ static void free_hpage_workfn(struct work_struct *work)
>                 cond_resched();
>         }
>  }
> -static DECLARE_WORK(free_hpage_work, free_hpage_workfn);
>
>  static inline void flush_free_hpage_work(struct hstate *h)
>  {
>         if (hugetlb_vmemmap_optimizable(h))
> -               flush_work(&free_hpage_work);
> +               flush_delayed_work(&free_hpage_work);
>  }
>
>  static void update_and_free_page(struct hstate *h, struct page *page,
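A side note on the DECLARE_WORK -> DECLARE_DELAYED_WORK conversion
above, in case it helps other readers: a delayed work item is a
work_struct plus a timer, so one handler can serve both the ordinary
free path (delay 0) and a retry path (delay HZ). A minimal sketch of
that pattern, with hypothetical names, not the patch itself:

        #include <linux/llist.h>
        #include <linux/workqueue.h>

        static LLIST_HEAD(deferred_list);  /* lockless multi-producer list */

        static void deferred_workfn(struct work_struct *work)
        {
                /* Take the whole list atomically; producers may keep adding. */
                struct llist_node *node = llist_del_all(&deferred_list);

                while (node) {
                        struct llist_node *next = node->next;
                        /* ... process one deferred item here ... */
                        node = next;
                }
        }
        static DECLARE_DELAYED_WORK(deferred_work, deferred_workfn);

        static void defer_item(struct llist_node *n, bool retry_later)
        {
                llist_add(n, &deferred_list);
                /* 0 jiffies = run ASAP; HZ jiffies = back off ~1 second. */
                schedule_delayed_work(&deferred_work, retry_later ? HZ : 0);
        }

One caveat when mixing delays: schedule_delayed_work() is a no-op while
the work is still pending, so a 0-delay request will not pull in an
already-queued HZ-delay one.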
> @@ -1634,13 +1642,9 @@ static void update_and_free_page(struct hstate *h, struct page *page,
>
>         /*
>          * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap pages.
> -        *
> -        * Only call schedule_work() if hpage_freelist is previously
> -        * empty. Otherwise, schedule_work() had been called but the workfn
> -        * hasn't retrieved the list yet.
>          */
> -       if (llist_add((struct llist_node *)&page->mapping, &hpage_freelist))
> -               schedule_work(&free_hpage_work);
> +       llist_add((struct llist_node *)&page->mapping, &hpage_freelist);
> +       schedule_delayed_work(&free_hpage_work, 0);
>  }
>
>  static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
> @@ -2118,11 +2122,15 @@ int dissolve_free_huge_page(struct page *page)
>         rc = hugetlb_vmemmap_restore(h, head);
>         if (!rc) {
>                 update_and_free_page(h, head, false);
> -       } else {
> +       } else if (!PageHWPoison(head)) {
>                 spin_lock_irq(&hugetlb_lock);
>                 add_hugetlb_page(h, head, false);
>                 h->max_huge_pages++;
>                 spin_unlock_irq(&hugetlb_lock);
> +       } else {
> +               llist_add((struct llist_node *)&head->mapping, &hpage_freelist);
> +               schedule_delayed_work(&free_hpage_work, HZ);
> +               rc = 0;
>         }
>
>         return rc;
> --
> 2.27.0
>
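For completeness, the (struct llist_node *)&page->mapping casts above
lean on ->mapping being an otherwise-unused, pointer-sized field while
a hugepage sits on the lockless list; the workfn recovers the page
again roughly like this (abridged from free_hpage_workfn() in
mm/hugetlb.c):

static void free_hpage_workfn(struct work_struct *work)
{
        struct llist_node *node = llist_del_all(&hpage_freelist);

        while (node) {
                struct page *page;
                struct hstate *h;

                /* ->mapping doubled as the llist_node link; undo that. */
                page = container_of((struct address_space **)node,
                                    struct page, mapping);
                node = node->next;
                page->mapping = NULL;

                h = size_to_hstate(page_size(page));

                __update_and_free_page(h, page);

                cond_resched();
        }
}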