From: James Houghton <jthoughton@google.com>
Date: Tue, 18 Jul 2023 09:14:13 -0700
Subject: Re: [PATCH v2 1/2] hugetlb: Do not clear hugetlb dtor until allocating vmemmap
To: Mike Kravetz <mike.kravetz@oracle.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Jiaqi Yan,
	Naoya Horiguchi, Muchun Song, Miaohe Lin, Axel Rasmussen,
	Michal Hocko, Andrew Morton, stable@vger.kernel.org
In-Reply-To: <20230718004942.113174-2-mike.kravetz@oracle.com>
References: <20230718004942.113174-1-mike.kravetz@oracle.com>
	<20230718004942.113174-2-mike.kravetz@oracle.com>

On Mon, Jul 17, 2023 at 5:50 PM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> Freeing a hugetlb page and releasing base pages back to the underlying
> allocator such as buddy or cma is performed in two steps:
> - remove_hugetlb_folio() is called to remove the folio from hugetlb
>   lists, get a ref on the page and remove the hugetlb destructor.  This
>   all must be done under the hugetlb lock.  After this call, the page
>   can be treated as a normal compound page or a collection of base
>   size pages.
> - update_and_free_hugetlb_folio() is called to allocate vmemmap if
>   needed, and the free routine of the underlying allocator is called
>   on the resulting page.  We can not hold the hugetlb lock here.
>
> One issue with this scheme is that a memory error could occur between
> these two steps.  In this case, the memory error handling code treats
> the old hugetlb page as a normal compound page or collection of base
> pages.  It will then try to SetPageHWPoison(page) on the page with an
> error.  If the page with error is a tail page without vmemmap, a write
> error will occur when trying to set the flag.
>
> Address this issue by modifying remove_hugetlb_folio() and
> update_and_free_hugetlb_folio() such that the hugetlb destructor is not
> cleared until after allocating vmemmap.  Since clearing the destructor
> requires holding the hugetlb lock, the clearing is done in
> remove_hugetlb_folio() if the vmemmap is present.  This saves a
> lock/unlock cycle.  Otherwise, the destructor is cleared in
> update_and_free_hugetlb_folio() after allocating vmemmap.
>
> Note that this will leave hugetlb pages in a state where they are marked
> free (by a hugetlb-specific page flag) and have a ref count.  This is not
> a normal state.  The only code that would notice is the memory error
> code, and it is set up to retry in such a case.
>
> A subsequent patch will create a routine to do bulk processing of
> vmemmap allocation.  This will eliminate a lock/unlock cycle for each
> hugetlb page in the case where we are freeing a large number of pages.
>
> Fixes: ad2fa3717b74 ("mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page")
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>  mm/hugetlb.c | 90 ++++++++++++++++++++++++++++++++++++++--------------
>  1 file changed, 66 insertions(+), 24 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 64a3239b6407..4a910121a647 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1579,9 +1579,37 @@ static inline void destroy_compound_gigantic_folio(struct folio *folio,
>                                                 unsigned int order) { }
>  #endif
>
> +static inline void __clear_hugetlb_destructor(struct hstate *h,
> +                                               struct folio *folio)
> +{
> +       lockdep_assert_held(&hugetlb_lock);
> +
> +       /*
> +        * Very subtle
> +        *
> +        * For non-gigantic pages set the destructor to the normal compound
> +        * page dtor.  This is needed in case someone takes an additional
> +        * temporary ref to the page, and freeing is delayed until they drop
> +        * their reference.
> +        *
> +        * For gigantic pages set the destructor to the null dtor.  This
> +        * destructor will never be called.  Before freeing the gigantic
> +        * page destroy_compound_gigantic_folio will turn the folio into a
> +        * simple group of pages.  After this the destructor does not
> +        * apply.
> +        *
> +        */

Is it correct and useful to add a
WARN_ON_ONCE(folio_test_hugetlb_vmemmap_optimized(folio)) here?
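To make the question concrete, this is roughly what I have in mind
(untested sketch on top of this patch; the comment wording is mine):

static inline void __clear_hugetlb_destructor(struct hstate *h,
					      struct folio *folio)
{
	lockdep_assert_held(&hugetlb_lock);

	/*
	 * With this patch, the destructor is only cleared once the
	 * vmemmap is fully present again, so reaching here with an
	 * optimized folio would reintroduce the bug being fixed.
	 */
	WARN_ON_ONCE(folio_test_hugetlb_vmemmap_optimized(folio));

	if (hstate_is_gigantic(h))
		folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
	else
		folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
}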
Feel free to add:

Acked-by: James Houghton <jthoughton@google.com>

> +       if (hstate_is_gigantic(h))
> +               folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
> +       else
> +               folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
> +}
> +
>  /*
> - * Remove hugetlb folio from lists, and update dtor so that the folio appears
> - * as just a compound page.
> + * Remove hugetlb folio from lists.
> + * If vmemmap exists for the folio, update dtor so that the folio appears
> + * as just a compound page.  Otherwise, wait until after allocating vmemmap
> + * to update dtor.
>   *
>   * A reference is held on the folio, except in the case of demote.
>   *
> @@ -1612,31 +1640,19 @@ static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
>         }
>
>         /*
> -        * Very subtle
> -        *
> -        * For non-gigantic pages set the destructor to the normal compound
> -        * page dtor.  This is needed in case someone takes an additional
> -        * temporary ref to the page, and freeing is delayed until they drop
> -        * their reference.
> -        *
> -        * For gigantic pages set the destructor to the null dtor.  This
> -        * destructor will never be called.  Before freeing the gigantic
> -        * page destroy_compound_gigantic_folio will turn the folio into a
> -        * simple group of pages.  After this the destructor does not
> -        * apply.
> -        *
> -        * This handles the case where more than one ref is held when and
> -        * after update_and_free_hugetlb_folio is called.
> -        *
> -        * In the case of demote we do not ref count the page as it will soon
> -        * be turned into a page of smaller size.
> +        * We can only clear the hugetlb destructor after allocating vmemmap
> +        * pages.  Otherwise, someone (memory error handling) may try to write
> +        * to tail struct pages.
> +        */
> +       if (!folio_test_hugetlb_vmemmap_optimized(folio))
> +               __clear_hugetlb_destructor(h, folio);
> +
> +       /*
> +        * In the case of demote we do not ref count the page as it will soon
> +        * be turned into a page of smaller size.
>          */
>         if (!demote)
>                 folio_ref_unfreeze(folio, 1);
> -       if (hstate_is_gigantic(h))
> -               folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
> -       else
> -               folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
>
>         h->nr_huge_pages--;
>         h->nr_huge_pages_node[nid]--;
> @@ -1728,6 +1744,19 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
>                 return;
>         }
>
> +       /*
> +        * If needed, clear hugetlb destructor under the hugetlb lock.
> +        * This must be done AFTER allocating vmemmap pages in case there is an
> +        * attempt to write to tail struct pages as in memory poison.
> +        * It must be done BEFORE PageHWPoison handling so that any subsequent
> +        * memory errors poison individual pages instead of head.
> +        */
> +       if (folio_test_hugetlb(folio)) {
> +               spin_lock_irq(&hugetlb_lock);
> +               __clear_hugetlb_destructor(h, folio);
> +               spin_unlock_irq(&hugetlb_lock);
> +       }
> +
>         /*
>          * Move PageHWPoison flag from head page to the raw error pages,
>          * which makes any healthy subpages reusable.
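To double-check my reading of the result: freeing one vmemmap-optimized
folio now goes roughly like this (a flow sketch, not literal code;
error handling and the surplus path omitted, parameter names mine):

	/* Step 1: under the lock; the hugetlb dtor is deliberately kept. */
	spin_lock_irq(&hugetlb_lock);
	remove_hugetlb_folio(h, folio, adjust_surplus);
	spin_unlock_irq(&hugetlb_lock);

	/* Step 2: may sleep and allocate, so the lock cannot be held. */
	hugetlb_vmemmap_restore(h, &folio->page);

	/*
	 * Only now is it safe to clear the dtor, since all tail struct
	 * pages exist again.  This is the per-page lock/unlock cycle
	 * that patch 2 batches away.
	 */
	spin_lock_irq(&hugetlb_lock);
	__clear_hugetlb_destructor(h, folio);
	spin_unlock_irq(&hugetlb_lock);

	/* The folio is now an ordinary compound page; free to buddy/cma. */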
> @@ -3604,6 +3633,19 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
>                 return rc;
>         }
>
> +       /*
> +        * The hugetlb destructor could still be set for this folio if vmemmap
> +        * was actually allocated above.  The ref count on all pages is 0.
> +        * Therefore, nobody should attempt access.  However, before destroying
> +        * compound page below, clear the destructor.  Unfortunately, this
> +        * requires a lock/unlock cycle.
> +        */
> +       if (folio_test_hugetlb(folio)) {
> +               spin_lock_irq(&hugetlb_lock);
> +               __clear_hugetlb_destructor(h, folio);
> +               spin_unlock_irq(&hugetlb_lock);
> +       }
> +
>         /*
>          * Use destroy_compound_hugetlb_folio_for_demote for all huge page
>          * sizes as it will not ref count folios.
> --
> 2.41.0
>