From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH mm-unstable v1] mm/hugetlb_vmemmap: fix memory loads ordering
From: Muchun Song <muchun.song@linux.dev>
Date: Tue, 7 Jan 2025 16:41:03 +0800
To: Yu Zhao
Cc: Andrew Morton, David Hildenbrand, Mateusz Guzik,
 "Matthew Wilcox (Oracle)", linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Will Deacon
In-Reply-To: <20250107043505.351925-1-yuzhao@google.com>
References: <20250107043505.351925-1-yuzhao@google.com>

> On Jan 7, 2025, at 12:35, Yu Zhao wrote:
> 
> Using x86_64 as an example, for a 32KB struct page[] area describing a
> 2MB hugeTLB, HVO reduces the area to 4KB by the following steps:
> 
> 1. Split the (r/w vmemmap) PMD mapping the area into 512 (r/w) PTEs;
> 2. For the 8 PTEs mapping the area, remap PTE 1-7 to the page mapped
>    by PTE 0, and at the same time change the permission from r/w to
>    r/o;
> 3. Free the pages PTE 1-7 used to map, hence the reduction from 32KB
>    to 4KB.
> 
> However, the following race can happen due to improperly ordered
> memory loads:
> 
>  CPU 1 (HVO)                      CPU 2 (speculative PFN walker)
> 
>  page_ref_freeze()
>  synchronize_rcu()
>                                   rcu_read_lock()
>                                   page_is_fake_head() is false
>  vmemmap_remap_pte()
>  XXX: struct page[] becomes r/o
> 
>  page_ref_unfreeze()
>                                   page_ref_count() is not zero
> 
>                                   atomic_add_unless(&page->_refcount)
>                                   XXX: try to modify r/o struct page[]
> 
> Specifically, page_is_fake_head() must be ordered after
> page_ref_count() on CPU 2 so that it can only return true for this
> case, to avoid the later attempt to modify r/o struct page[].
> 
> This patch adds the missing memory barrier and performs the tests on
> page_is_fake_head() and page_ref_count() in the proper order.
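The ordering described above is the classic message-passing pattern:
CPU 1 publishes the remapped (now read-only) state of struct page[]
and only then lets the refcount become non-zero, so a walker that
observes the unfrozen refcount must also observe the fake-head state.
A minimal userspace C11 sketch of that pattern, using hypothetical
stand-in variables rather than the real kernel code:

	#include <stdatomic.h>
	#include <stdbool.h>

	/* Hypothetical stand-ins for page->flags and page->_refcount. */
	static atomic_uint flags;    /* 1 once page[] has been made r/o */
	static atomic_uint refcount; /* 0 while frozen */

	/* CPU 1 (HVO): mirrors vmemmap_remap_pte() followed by
	 * page_ref_unfreeze(); the release store publishes the flags
	 * write together with the unfrozen refcount. */
	static void remapper(void)
	{
		atomic_store_explicit(&flags, 1, memory_order_relaxed);
		atomic_store_explicit(&refcount, 1, memory_order_release);
	}

	/* CPU 2 (speculative PFN walker): with the acquire on the first
	 * load, observing refcount == 1 guarantees flags is seen as 1,
	 * so "refcount unfrozen but fake head not visible" is forbidden. */
	static bool walker_sees_stale_flags(void)
	{
		return atomic_load_explicit(&refcount, memory_order_acquire) != 0 &&
		       atomic_load_explicit(&flags, memory_order_relaxed) == 0;
	}

Note that in this canonical form the acquire sits on the first load
(the refcount); the patch instead puts the acquire on the flags load
inside page_fixed_fake_head(), which is what my question below is
about.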
> 
> Fixes: bd225530a4c7 ("mm/hugetlb_vmemmap: fix race with speculative PFN walkers")
> Reported-by: Will Deacon
> Closes: https://lore.kernel.org/20241128142028.GA3506@willie-the-truck/
> Signed-off-by: Yu Zhao
> ---
>  include/linux/page-flags.h | 2 +-
>  include/linux/page_ref.h   | 8 ++++++--
>  2 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 691506bdf2c5..6b8ecf86f1b6 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -212,7 +212,7 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
>  	 * cold cacheline in some cases.
>  	 */
>  	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
> -	    test_bit(PG_head, &page->flags)) {
> +	    test_bit_acquire(PG_head, &page->flags)) {
>  		/*
>  		 * We can safely access the field of the @page[1] with PG_head
>  		 * because the @page is a compound page composed with at least
> diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
> index 8c236c651d1d..5becea98bd79 100644
> --- a/include/linux/page_ref.h
> +++ b/include/linux/page_ref.h
> @@ -233,8 +233,12 @@ static inline bool page_ref_add_unless(struct page *page, int nr, int u)
>  	bool ret = false;
> 
>  	rcu_read_lock();
> -	/* avoid writing to the vmemmap area being remapped */
> -	if (!page_is_fake_head(page) && page_ref_count(page) != u)
> +	/*
> +	 * To avoid writing to the vmemmap area remapped into r/o in parallel,
> +	 * the page_ref_count() test must precede the page_is_fake_head() test
> +	 * so that test_bit_acquire() in the latter is ordered after the former.
> +	 */
> +	if (page_ref_count(page) != u && !page_is_fake_head(page))

IIUC, we need a memory barrier between page_ref_count() and
page_is_fake_head(), i.e. between the loads of page->_refcount and
page->flags. So we should insert a read memory barrier here, right?
(See the sketch at the end of this mail.) But you added an acquire
barrier in page_fixed_fake_head() instead, and I don't understand how
an acquire barrier there can stop the CPU from reordering those two
loads. What am I missing here?

Muchun,
Thanks.

> 		ret = atomic_add_unless(&page->_refcount, nr, u);
> 	rcu_read_unlock();
> 
> -- 
> 2.47.1.613.gc27f4b7a9f-goog
> 
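P.S. For concreteness, the read-barrier variant I have in mind would
look roughly like this on top of your patch (an untested sketch, not a
counter-proposal):

	rcu_read_lock();
	/* avoid writing to the vmemmap area being remapped */
	if (page_ref_count(page) != u) {
		/*
		 * Order the load of page->_refcount above against the
		 * load of page->flags in page_is_fake_head() below.
		 */
		smp_rmb();
		if (!page_is_fake_head(page))
			ret = atomic_add_unless(&page->_refcount, nr, u);
	}
	rcu_read_unlock();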