Message-ID: <379a225a-3e26-4adc-9add-b4d931c55a9a@linux.dev>
Date: Thu, 27 Jun 2024 15:25:34 +0800
Subject: Re: [PATCH mm-unstable v1] mm/hugetlb_vmemmap: fix race with speculative PFN walkers
To: Yu Zhao <yuzhao@google.com>, Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@redhat.com>,
 Frank van der Linden <fvdl@google.com>,
 "Matthew Wilcox (Oracle)" <willy@infradead.org>,
 Peter Xu <peterx@redhat.com>,
 Yang Shi <yang@os.amperecomputing.com>,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240627044335.2698578-1-yuzhao@google.com>
From: Muchun Song <muchun.song@linux.dev>
In-Reply-To: <20240627044335.2698578-1-yuzhao@google.com>

On 2024/6/27 12:43, Yu Zhao wrote:
> While investigating HVO for THPs [1], it turns out that speculative
> PFN walkers like compaction can race with vmemmap modifications, e.g.,
>
>    CPU 1 (vmemmap modifier)         CPU 2 (speculative PFN walker)
>    -------------------------------  ------------------------------
>    Allocates an LRU folio page1
>                                     Sees page1
>    Frees page1
>
>    Allocates a hugeTLB folio page2
>    (page1 being a tail of page2)
>
>    Updates vmemmap mapping page1
>                                     get_page_unless_zero(page1)
>
> Even though page1->_refcount is zero after HVO, get_page_unless_zero()
> can still try to modify this read-only field, resulting in a crash.
>
> An independent report [2] confirmed this race.
>
> There are two discussed approaches to fix this race:
> 1. Make RO vmemmap RW so that get_page_unless_zero() can fail without
>    triggering a PF.
> 2. Use RCU to make sure get_page_unless_zero() either sees zero
>    page->_refcount through the old vmemmap or non-zero page->_refcount
>    through the new one.
>
> The second approach is preferred here because:
> 1. It can prevent illegal modifications to struct page[] that has been
>    HVO'ed;
> 2. It can be generalized, in a way similar to ZERO_PAGE(), to fix
>    similar races in other places, e.g., arch_remove_memory() on x86
>    [3], which frees vmemmap mapping offlined struct page[].
>
> While adding synchronize_rcu(), the goal is to be surgical, rather
> than optimized. Specifically, calls to synchronize_rcu() on the error
> handling paths can be coalesced, but it is not done for the sake of
> Simplicity: noticeably, this fix removes ~50% more lines than it adds.

I suggest describing the user-visible effect here as well; for example,
for the nr_overcommit_hugepages use case, the added synchronize_rcu()
will make that use case worse.

>
> [1] https://lore.kernel.org/20240229183436.4110845-4-yuzhao@google.com/
> [2] https://lore.kernel.org/917FFC7F-0615-44DD-90EE-9F85F8EA9974@linux.dev/
> [3] https://lore.kernel.org/be130a96-a27e-4240-ad78-776802f57cad@redhat.com/
>
> Signed-off-by: Yu Zhao <yuzhao@google.com>

Acked-by: Muchun Song <muchun.song@linux.dev>

A nit below.

> ---
>   include/linux/page_ref.h |  8 +++++-
>   mm/hugetlb.c             | 53 ++++++----------------------------------
>   mm/hugetlb_vmemmap.c     | 16 ++++++++++++
>   3 files changed, 30 insertions(+), 47 deletions(-)
>
> diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
> index 490d0ad6e56d..8c236c651d1d 100644
> --- a/include/linux/page_ref.h
> +++ b/include/linux/page_ref.h
> @@ -230,7 +230,13 @@ static inline int folio_ref_dec_return(struct folio *folio)
>   
>   static inline bool page_ref_add_unless(struct page *page, int nr, int u)
>   {
> -	bool ret = atomic_add_unless(&page->_refcount, nr, u);
> +	bool ret = false;
> +
> +	rcu_read_lock();
> +	/* avoid writing to the vmemmap area being remapped */
> +	if (!page_is_fake_head(page) && page_ref_count(page) != u)
> +		ret = atomic_add_unless(&page->_refcount, nr, u);
> +	rcu_read_unlock();
>   
>   	if (page_ref_tracepoint_active(page_ref_mod_unless))
>   		__page_ref_mod_unless(page, nr, ret);
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 9691624fcb79..1ddaf25737da 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1629,13 +1629,10 @@ static inline void destroy_compound_gigantic_folio(struct folio *folio,
>    * folio appears as just a compound page. Otherwise, wait until after
>    * allocating vmemmap to clear the flag.
>    *
> - * A reference is held on the folio, except in the case of demote.
> - *
>    * Must be called with hugetlb lock held.
>    */
> -static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
> -							bool adjust_surplus,
> -							bool demote)
> +static void remove_hugetlb_folio(struct hstate *h, struct folio *folio,
> +							bool adjust_surplus)
>   {
>   	int nid = folio_nid(folio);
>   
> @@ -1657,6 +1654,7 @@ static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
>   		h->surplus_huge_pages_node[nid]--;
>   	}
>   
> +	folio_clear_hugetlb_freed(folio);

I'd suggest putting this into the above "if" statement
(if (folio_test_hugetlb_freed(folio))). Then, if the flag is already
cleared, there is nothing to do.
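
Roughly something like this (untested sketch on top of this patch; the
freed branch sits a few lines above this hunk in remove_hugetlb_folio(),
where the free_huge_pages counters are dropped):

	if (folio_test_hugetlb_freed(folio)) {
		/* only a freed folio has the flag set, so clear it here */
		folio_clear_hugetlb_freed(folio);
		h->free_huge_pages--;
		h->free_huge_pages_node[nid]--;
	}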

Thanks.

>   	/*
>   	 * We can only clear the hugetlb flag after allocating vmemmap
>   	 * pages. Otherwise, someone (memory error handling) may try to write
> @@ -1665,33 +1663,13 @@ static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
>   	if (!folio_test_hugetlb_vmemmap_optimized(folio))
>   		__folio_clear_hugetlb(folio);
>   
> -	/*
> -	 * In the case of demote we do not ref count the page as it will soon
> -	 * be turned into a page of smaller size.
> -	 */
> -	if (!demote)
> -		folio_ref_unfreeze(folio, 1);
> -
>   	h->nr_huge_pages--;
>   	h->nr_huge_pages_node[nid]--;
>   }
>   
> -static void remove_hugetlb_folio(struct hstate *h, struct folio *folio,
> -							bool adjust_surplus)
> -{
> -	__remove_hugetlb_folio(h, folio, adjust_surplus, false);
> -}
> -
> -static void remove_hugetlb_folio_for_demote(struct hstate *h, struct folio *folio,
> -							bool adjust_surplus)
> -{
> -	__remove_hugetlb_folio(h, folio, adjust_surplus, true);
> -}
> -
>   static void add_hugetlb_folio(struct hstate *h, struct folio *folio,
>   			      bool adjust_surplus)
>   {
> -	int zeroed;
>   	int nid = folio_nid(folio);
>   
>   	VM_BUG_ON_FOLIO(!folio_test_hugetlb_vmemmap_optimized(folio), folio);
> @@ -1715,21 +1693,6 @@ static void add_hugetlb_folio(struct hstate *h, struct folio *folio,
>   	 */
>   	folio_set_hugetlb_vmemmap_optimized(folio);
>   
> -	/*
> -	 * This folio is about to be managed by the hugetlb allocator and
> -	 * should have no users. Drop our reference, and check for others
> -	 * just in case.
> -	 */
> -	zeroed = folio_put_testzero(folio);
> -	if (unlikely(!zeroed))
> -		/*
> -		 * It is VERY unlikely soneone else has taken a ref
> -		 * on the folio. In this case, we simply return as
> -		 * free_huge_folio() will be called when this other ref
> -		 * is dropped.
> -		 */
> -		return;
> -
>   	arch_clear_hugetlb_flags(folio);
>   	enqueue_hugetlb_folio(h, folio);
>   }
> @@ -1783,6 +1746,8 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
>   		spin_unlock_irq(&hugetlb_lock);
>   	}
>   
> +	folio_ref_unfreeze(folio, 1);
> +
>   	/*
>   	 * Non-gigantic pages demoted from CMA allocated gigantic pages
>   	 * need to be given back to CMA in free_gigantic_folio.
> @@ -3106,11 +3071,8 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
>   
>   free_new:
>   	spin_unlock_irq(&hugetlb_lock);
> -	if (new_folio) {
> -		/* Folio has a zero ref count, but needs a ref to be freed */
> -		folio_ref_unfreeze(new_folio, 1);
> +	if (new_folio)
>   		update_and_free_hugetlb_folio(h, new_folio, false);
> -	}
>   
>   	return ret;
>   }
> @@ -3965,7 +3927,7 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
>   
>   	target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);
>   
> -	remove_hugetlb_folio_for_demote(h, folio, false);
> +	remove_hugetlb_folio(h, folio, false);
>   	spin_unlock_irq(&hugetlb_lock);
>   
>   	/*
> @@ -3979,7 +3941,6 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
>   	if (rc) {
>   		/* Allocation of vmemmmap failed, we can not demote folio */
>   		spin_lock_irq(&hugetlb_lock);
> -		folio_ref_unfreeze(folio, 1);
>   		add_hugetlb_folio(h, folio, false);
>   		return rc;
>   	}
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index fa00d61b6c5a..829112b0a914 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -455,6 +455,8 @@ static int __hugetlb_vmemmap_restore_folio(const struct hstate *h,
>   	unsigned long vmemmap_reuse;
>   
>   	VM_WARN_ON_ONCE_FOLIO(!folio_test_hugetlb(folio), folio);
> +	VM_WARN_ON_ONCE_FOLIO(folio_ref_count(folio), folio);
> +
>   	if (!folio_test_hugetlb_vmemmap_optimized(folio))
>   		return 0;
>   
> @@ -490,6 +492,9 @@ static int __hugetlb_vmemmap_restore_folio(const struct hstate *h,
>    */
>   int hugetlb_vmemmap_restore_folio(const struct hstate *h, struct folio *folio)
>   {
> +	/* avoid writes from page_ref_add_unless() while unfolding vmemmap */
> +	synchronize_rcu();
> +
>   	return __hugetlb_vmemmap_restore_folio(h, folio, 0);
>   }
>   
> @@ -514,6 +519,9 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h,
>   	long restored = 0;
>   	long ret = 0;
>   
> +	/* avoid writes from page_ref_add_unless() while unfolding vmemmap */
> +	synchronize_rcu();
> +
>   	list_for_each_entry_safe(folio, t_folio, folio_list, lru) {
>   		if (folio_test_hugetlb_vmemmap_optimized(folio)) {
>   			ret = __hugetlb_vmemmap_restore_folio(h, folio,
> @@ -559,6 +567,8 @@ static int __hugetlb_vmemmap_optimize_folio(const struct hstate *h,
>   	unsigned long vmemmap_reuse;
>   
>   	VM_WARN_ON_ONCE_FOLIO(!folio_test_hugetlb(folio), folio);
> +	VM_WARN_ON_ONCE_FOLIO(folio_ref_count(folio), folio);
> +
>   	if (!vmemmap_should_optimize_folio(h, folio))
>   		return ret;
>   
> @@ -610,6 +620,9 @@ void hugetlb_vmemmap_optimize_folio(const struct hstate *h, struct folio *folio)
>   {
>   	LIST_HEAD(vmemmap_pages);
>   
> +	/* avoid writes from page_ref_add_unless() while folding vmemmap */
> +	synchronize_rcu();
> +
>   	__hugetlb_vmemmap_optimize_folio(h, folio, &vmemmap_pages, 0);
>   	free_vmemmap_page_list(&vmemmap_pages);
>   }
> @@ -653,6 +666,9 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list)
>   
>   	flush_tlb_all();
>   
> +	/* avoid writes from page_ref_add_unless() while folding vmemmap */
> +	synchronize_rcu();
> +
>   	list_for_each_entry(folio, folio_list, lru) {
>   		int ret;
>