From: Muchun Song <muchun.song@linux.dev>
To: Yu Zhao <yuzhao@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>, linux-mm@kvack.org,
 David Hildenbrand <david@redhat.com>, Frank van der Linden <fvdl@google.com>,
 "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: Re: [RFC PATCH] mm/hugetlb_vmemmap: fix race with speculative PFN walkers
Date: Wed, 26 Jun 2024 10:37:59 +0800
Message-ID: <7380dad0-829a-48b6-a69e-e3a02ded30de@linux.dev>
In-Reply-To: <20240621213717.1099079-1-yuzhao@google.com>
References: <20240621213717.1099079-1-yuzhao@google.com>

On 2024/6/22 05:37, Yu Zhao wrote:
> While investigating HVO for THPs [1], it turns out that speculative
> PFN walkers like compaction can race with vmemmap modifications,
> e.g.,
>
>   CPU 1 (vmemmap modifier)         CPU 2 (speculative PFN walker)
>   -------------------------------  ------------------------------
>   Allocates an LRU folio page1
>                                    Sees page1
>   Frees page1
>
>   Allocates a hugeTLB folio page2
>   (page1 being a tail of page2)
>
>   Updates vmemmap mapping page1
>                                    get_page_unless_zero(page1)
>
> Even though page1 has a zero refcnt after HVO, get_page_unless_zero()
> can still try to modify its read-only struct page resulting in a
> crash.
>
> An independent report [2] confirmed this race.

Right. Thanks for your continuous focus on this race.
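To restate the walker side of the race in code (a simplified sketch, not
the actual compaction path; walk_pfn() is a made-up name used only for
illustration):

#include <linux/mm.h>   /* pfn_to_page(), get_page_unless_zero() */

/*
 * A speculative PFN walker derives a struct page from a bare PFN,
 * holding no reference, and only then tries to pin it.
 */
static bool walk_pfn(unsigned long pfn)
{
        struct page *page = pfn_to_page(pfn);

        /*
         * If HVO has meanwhile turned this page into a hugeTLB tail and
         * remapped the vmemmap backing its struct page read-only, the
         * atomic update of page->_refcount attempted here can hit that
         * read-only mapping and crash, even though the refcount it would
         * observe is zero.
         */
        return get_page_unless_zero(page);
}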
>
> There are two discussed approaches to fix this race:
> 1. Make RO vmemmap RW so that get_page_unless_zero() can fail without
>    triggering a PF.
> 2. Use RCU to make sure get_page_unless_zero() either sees zero
>    refcnts through the old vmemmap or non-zero refcnts through the new
>    one.
>
> The second approach is preferred here because:
> 1. It can prevent illegal modifications to struct page[] that is HVO'ed;
> 2. It can be generalized, in a way similar to ZERO_PAGE(), to fix
>    similar races in other places, e.g., arch_remove_memory() on x86
>    [3], which frees vmemmap mapping offlined struct page[].
>
> While adding synchronize_rcu(), the goal is to be surgical, rather
> than optimized. Specifically, calls to synchronize_rcu() on the error
> handling paths can be coalesced, but it is not done for the sake of
> simplicity: noticeably, this fix removes ~50% more lines than it adds.
>
> [1] https://lore.kernel.org/20240229183436.4110845-4-yuzhao@google.com/
> [2] https://lore.kernel.org/917FFC7F-0615-44DD-90EE-9F85F8EA9974@linux.dev/
> [3] https://lore.kernel.org/be130a96-a27e-4240-ad78-776802f57cad@redhat.com/
>
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
>  include/linux/page_ref.h |  8 ++++++-
>  mm/hugetlb.c             | 50 +++++-----------------------------
>  mm/hugetlb_vmemmap.c     | 16 +++++++++++++
>  3 files changed, 29 insertions(+), 45 deletions(-)
>
> diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
> index 1acf5bac7f50..add92e8f31b2 100644
> --- a/include/linux/page_ref.h
> +++ b/include/linux/page_ref.h
> @@ -230,7 +230,13 @@ static inline int folio_ref_dec_return(struct folio *folio)
>
>  static inline bool page_ref_add_unless(struct page *page, int nr, int u)
>  {
> -        bool ret = atomic_add_unless(&page->_refcount, nr, u);
> +        bool ret = false;
> +
> +        rcu_read_lock();
> +        /* avoid writing to the vmemmap area being remapped */
> +        if (!page_is_fake_head(page) && page_ref_count(page) != u)
> +                ret = atomic_add_unless(&page->_refcount, nr, u);
> +        rcu_read_unlock();
>
>          if (page_ref_tracepoint_active(page_ref_mod_unless))
>                  __page_ref_mod_unless(page, nr, ret);
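If I read the intended ordering correctly, this read side pairs with the
synchronize_rcu() calls added to the vmemmap fold/unfold paths further
down. Roughly (a condensed side-by-side sketch, not new code; the folio's
refcount is already frozen at zero at that point, which the new
VM_WARN_ON_ONCE_FOLIO checks assert):

        /* HVO (writer), folio refcount already frozen at zero */
        synchronize_rcu();      /* waits for walkers already inside
                                 * their rcu_read_lock() section */
        /* ...now remap the vmemmap of the tail struct pages... */

        /* speculative walker (reader) */
        rcu_read_lock();
        if (!page_is_fake_head(page) && page_ref_count(page) != u)
                ret = atomic_add_unless(&page->_refcount, nr, u);
        rcu_read_unlock();

So a walker either finishes before the remap and sees the zero refcount
through the old vmemmap, or starts after it and backs off without writing
to the read-only struct page.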
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index f35abff8be60..271d83a7cde0 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1629,9 +1629,8 @@ static inline void destroy_compound_gigantic_folio(struct folio *folio,
>   *
>   * Must be called with hugetlb lock held.
>   */
> -static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
> -                                   bool adjust_surplus,
> -                                   bool demote)
> +static void remove_hugetlb_folio(struct hstate *h, struct folio *folio,
> +                                 bool adjust_surplus)
>  {
>          int nid = folio_nid(folio);
>
> @@ -1661,33 +1660,13 @@ static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
>          if (!folio_test_hugetlb_vmemmap_optimized(folio))
>                  __folio_clear_hugetlb(folio);
>
> -        /*
> -         * In the case of demote we do not ref count the page as it will soon
> -         * be turned into a page of smaller size.
> -         */
> -        if (!demote)
> -                folio_ref_unfreeze(folio, 1);
> -
>          h->nr_huge_pages--;
>          h->nr_huge_pages_node[nid]--;
>  }
>
> -static void remove_hugetlb_folio(struct hstate *h, struct folio *folio,
> -                                 bool adjust_surplus)
> -{
> -        __remove_hugetlb_folio(h, folio, adjust_surplus, false);
> -}
> -
> -static void remove_hugetlb_folio_for_demote(struct hstate *h, struct folio *folio,
> -                                            bool adjust_surplus)
> -{
> -        __remove_hugetlb_folio(h, folio, adjust_surplus, true);
> -}
> -
>  static void add_hugetlb_folio(struct hstate *h, struct folio *folio,
>                                bool adjust_surplus)
>  {
> -        int zeroed;
>          int nid = folio_nid(folio);
>
>          VM_BUG_ON_FOLIO(!folio_test_hugetlb_vmemmap_optimized(folio), folio);
> @@ -1711,21 +1690,6 @@ static void add_hugetlb_folio(struct hstate *h, struct folio *folio,
>           */
>          folio_set_hugetlb_vmemmap_optimized(folio);
>
> -        /*
> -         * This folio is about to be managed by the hugetlb allocator and
> -         * should have no users. Drop our reference, and check for others
> -         * just in case.
> -         */
> -        zeroed = folio_put_testzero(folio);
> -        if (unlikely(!zeroed))
> -                /*
> -                 * It is VERY unlikely soneone else has taken a ref
> -                 * on the folio. In this case, we simply return as
> -                 * free_huge_folio() will be called when this other ref
> -                 * is dropped.
> -                 */
> -                return;
> -
>          arch_clear_hugetlb_flags(folio);
>          enqueue_hugetlb_folio(h, folio);
>  }
> @@ -1779,6 +1743,8 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
>                  spin_unlock_irq(&hugetlb_lock);
>          }
>
> +        folio_ref_unfreeze(folio, 1);
> +
>          /*
>           * Non-gigantic pages demoted from CMA allocated gigantic pages
>           * need to be given back to CMA in free_gigantic_folio.
> @@ -3079,11 +3045,8 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
>
>  free_new:
>          spin_unlock_irq(&hugetlb_lock);
> -        if (new_folio) {
> -                /* Folio has a zero ref count, but needs a ref to be freed */
> -                folio_ref_unfreeze(new_folio, 1);
> +        if (new_folio)
>                  update_and_free_hugetlb_folio(h, new_folio, false);
> -        }

Looking at dissolve_free_huge_page(), we have:

dissolve_free_huge_page()
retry:
        if (!folio_test_hugetlb(folio))
                return;
        if (!folio_ref_count(folio))
                if (unlikely(!folio_test_hugetlb_freed(folio)))
                        goto retry;
        remove_hugetlb_folio(h, folio, false);

Since you no longer raise the refcount in remove_hugetlb_folio(), we will
dissolve this page again if there is a concurrent dissolve_free_huge_page()
running. Then the statistics (like ->nr_huge_pages) will be wrong. A
solution seems easy: we should call folio_clear_hugetlb_freed() in
remove_hugetlb_folio(), as sketched below.
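Something like this (untested; the list_del() context line is quoted from
memory, just to show where the clearing would go):

@@ static void remove_hugetlb_folio(struct hstate *h, struct folio *folio,
         list_del(&folio->lru);
+        /*
+         * The folio leaves the free list with its refcount still frozen
+         * at zero, so clear the "freed" flag to keep a concurrent
+         * dissolve_free_huge_page() from dissolving it a second time.
+         */
+        folio_clear_hugetlb_freed(folio);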
Thanks.

Muchun

>
>          return ret;
>  }
> @@ -3938,7 +3901,7 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
>
>          target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);
>
> -        remove_hugetlb_folio_for_demote(h, folio, false);
> +        remove_hugetlb_folio(h, folio, false);
>          spin_unlock_irq(&hugetlb_lock);
>
>          /*
> @@ -3952,7 +3915,6 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
>          if (rc) {
>                  /* Allocation of vmemmmap failed, we can not demote folio */
>                  spin_lock_irq(&hugetlb_lock);
> -                folio_ref_unfreeze(folio, 1);
>                  add_hugetlb_folio(h, folio, false);
>                  return rc;
>          }
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index b9a55322e52c..8193906515c6 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -446,6 +446,8 @@ static int __hugetlb_vmemmap_restore_folio(const struct hstate *h,
>          unsigned long vmemmap_reuse;
>
>          VM_WARN_ON_ONCE_FOLIO(!folio_test_hugetlb(folio), folio);
> +        VM_WARN_ON_ONCE_FOLIO(folio_ref_count(folio), folio);
> +
>          if (!folio_test_hugetlb_vmemmap_optimized(folio))
>                  return 0;
>
> @@ -481,6 +483,9 @@ static int __hugetlb_vmemmap_restore_folio(const struct hstate *h,
>   */
>  int hugetlb_vmemmap_restore_folio(const struct hstate *h, struct folio *folio)
>  {
> +        /* avoid writes from page_ref_add_unless() while unfolding vmemmap */
> +        synchronize_rcu();
> +
>          return __hugetlb_vmemmap_restore_folio(h, folio, 0);
>  }
>
> @@ -505,6 +510,9 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h,
>          long restored = 0;
>          long ret = 0;
>
> +        /* avoid writes from page_ref_add_unless() while unfolding vmemmap */
> +        synchronize_rcu();
> +
>          list_for_each_entry_safe(folio, t_folio, folio_list, lru) {
>                  if (folio_test_hugetlb_vmemmap_optimized(folio)) {
>                          ret = __hugetlb_vmemmap_restore_folio(h, folio,
> @@ -550,6 +558,8 @@ static int __hugetlb_vmemmap_optimize_folio(const struct hstate *h,
>          unsigned long vmemmap_reuse;
>
>          VM_WARN_ON_ONCE_FOLIO(!folio_test_hugetlb(folio), folio);
> +        VM_WARN_ON_ONCE_FOLIO(folio_ref_count(folio), folio);
> +
>          if (!vmemmap_should_optimize_folio(h, folio))
>                  return ret;
>
> @@ -601,6 +611,9 @@ void hugetlb_vmemmap_optimize_folio(const struct hstate *h, struct folio *folio)
>  {
>          LIST_HEAD(vmemmap_pages);
>
> +        /* avoid writes from page_ref_add_unless() while folding vmemmap */
> +        synchronize_rcu();
> +
>          __hugetlb_vmemmap_optimize_folio(h, folio, &vmemmap_pages, 0);
>          free_vmemmap_page_list(&vmemmap_pages);
>  }
> @@ -644,6 +657,9 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
>
>          flush_tlb_all();
>
> +        /* avoid writes from page_ref_add_unless() while folding vmemmap */
> +        synchronize_rcu();
> +
>          list_for_each_entry(folio, folio_list, lru) {
>                  int ret;
>
>
> base-commit: 264efe488fd82cf3145a3dc625f394c61db99934
> prerequisite-patch-id: 5029fb66d9bf40b84903a5b4f066e85101169e84
> prerequisite-patch-id: 7889e5ee16b8e91cccde12468f1d2c3f65500336
> prerequisite-patch-id: 0d4c19afc7b92f16bee9e9cf2b6832406389742a
> prerequisite-patch-id: c56f06d4bb3e738aea489ec30313ed0c1dbac325