Message-ID: <072268ae-3dea-46f8-8c9e-203d062eab82@linux.dev>
Date: Mon, 7 Jul 2025 17:13:24 +0800
Subject: Re: [PATCH v4 1/1] mm/rmap: fix potential out-of-bounds page table access during batched unmap
From: Lance Yang
To: Harry Yoo
Cc: akpm@linux-foundation.org, david@redhat.com, 21cnbao@gmail.com, baolin.wang@linux.alibaba.com, chrisl@kernel.org, kasong@tencent.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-riscv@lists.infradead.org, lorenzo.stoakes@oracle.com, ryan.roberts@arm.com, v-songbaohua@oppo.com, x86@kernel.org, huang.ying.caritas@gmail.com, zhengtangquan@oppo.com, riel@surriel.com, Liam.Howlett@oracle.com, vbabka@suse.cz, mingzhe.yang@ly.com, stable@vger.kernel.org, Barry Song, Lance Yang
References: <20250701143100.6970-1-lance.yang@linux.dev>

On 2025/7/7 13:40, Harry Yoo wrote:
> On Tue, Jul 01, 2025 at 10:31:00PM +0800, Lance Yang wrote:
>> From: Lance Yang
>>
>> As pointed out by David[1], the batched unmap logic in try_to_unmap_one()
>> may read past the end of a PTE table when a large folio's PTE mappings
>> are not fully contained within a single page table.
>>
>> While this scenario might be rare, an issue triggerable from userspace must
>> be fixed regardless of its likelihood. This patch fixes the out-of-bounds
>> access by refactoring the logic into a new helper, folio_unmap_pte_batch().
>>
>> The new helper correctly calculates the safe batch size by capping the scan
>> at both the VMA and PMD boundaries. To simplify the code, it also supports
>> partial batching (i.e., any number of pages from 1 up to the calculated
>> safe maximum), as there is no strong reason to special-case for fully
>> mapped folios.
>>
>> [1] https://lore.kernel.org/linux-mm/a694398c-9f03-4737-81b9-7e49c857fcbe@redhat.com
>>
>> Cc:
>> Reported-by: David Hildenbrand
>> Closes: https://lore.kernel.org/linux-mm/a694398c-9f03-4737-81b9-7e49c857fcbe@redhat.com
>> Fixes: 354dffd29575 ("mm: support batched unmap for lazyfree large folios during reclamation")
>> Suggested-by: Barry Song
>> Acked-by: Barry Song
>> Reviewed-by: Lorenzo Stoakes
>> Acked-by: David Hildenbrand
>> Signed-off-by: Lance Yang
>> ---
>
> LGTM,
> Reviewed-by: Harry Yoo

Hi Harry,

Thanks for taking the time to review!

>
> With a minor comment below.
>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index fb63d9256f09..1320b88fab74 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -2206,13 +2213,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>  			hugetlb_remove_rmap(folio);
>>  		} else {
>>  			folio_remove_rmap_ptes(folio, subpage, nr_pages, vma);
>> -			folio_ref_sub(folio, nr_pages - 1);
>>  		}
>>  		if (vma->vm_flags & VM_LOCKED)
>>  			mlock_drain_local();
>> -		folio_put(folio);
>> -		/* We have already batched the entire folio */
>> -		if (nr_pages > 1)
>> +		folio_put_refs(folio, nr_pages);
>> +
>> +		/*
>> +		 * If we are sure that we batched the entire folio and cleared
>> +		 * all PTEs, we can just optimize and stop right here.
>> +		 */
>> +		if (nr_pages == folio_nr_pages(folio))
>>  			goto walk_done;
>
> Just a minor comment.
>
> We should probably teach page_vma_mapped_walk() to skip nr_pages pages,
> or just rely on next_pte: do { ... } while (pte_none(ptep_get(pvmw->pte)))
> loop in page_vma_mapped_walk() to skip those ptes?

Good point. We handle partially-mapped folios by relying on the
"next_pte" loop to skip those ptes. The common case we expect to
handle is fully-mapped folios.

>
> Taking different paths depending on (nr_pages == folio_nr_pages(folio))
> doesn't seem sensible.

Adding more logic to page_vma_mapped_walk() for the rare partial-folio
case seems like an over-optimization that would complicate the walker.

So, I'd prefer to keep it as is for now ;)
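
To make the boundary-capping idea from the patch description a bit more
concrete, here is a tiny self-contained user-space sketch of the arithmetic
only (the helper name, the constants, and the 2 MiB PMD span are illustrative
assumptions, not the actual folio_unmap_pte_batch() body): the batch is the
minimum of the pages left in the folio, the pages left before the end of the
VMA, and the pages left before the next PMD boundary.

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PMD_SIZE	(1UL << 21)	/* assuming a 2 MiB PTE-table span, as on x86-64 */

/*
 * Hypothetical illustration of the capping logic only, not the kernel
 * helper. The safe batch is limited by (a) the pages still to scan in the
 * folio, (b) the end of the VMA, and (c) the end of the current PTE table.
 */
static unsigned long safe_batch(unsigned long addr, unsigned long vma_end,
				unsigned long pages_left_in_folio)
{
	unsigned long pmd_end = (addr + PMD_SIZE) & ~(PMD_SIZE - 1);
	unsigned long to_vma_end = (vma_end - addr) >> PAGE_SHIFT;
	unsigned long to_pmd_end = (pmd_end - addr) >> PAGE_SHIFT;
	unsigned long max = pages_left_in_folio;

	if (to_vma_end < max)
		max = to_vma_end;
	if (to_pmd_end < max)
		max = to_pmd_end;
	return max;
}

int main(void)
{
	/* A 64-page folio mapped so that only 16 PTEs remain before the
	 * PMD boundary: the scan must stop after 16 pages, never read 64. */
	unsigned long addr = 0x7f0000200000UL - 16 * PAGE_SIZE;

	printf("safe batch: %lu pages\n",
	       safe_batch(addr, 0x7f0000400000UL, 64));	/* prints 16 */
	return 0;
}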
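
And on the "next_pte" point, a toy user-space model of the skip behaviour
(purely illustrative, not the real page_vma_mapped_walk()): once a partial
batch of PTEs has been cleared, a walker that simply steps over empty slots
needs no extra bookkeeping for the pages already handled.

#include <stdio.h>

#define NPTES 8

int main(void)
{
	/* Toy "PTE table": non-zero means mapped. A 4-entry batch starting
	 * at index 2 has just been unmapped, mimicking a partially batched
	 * large folio. */
	unsigned long pte[NPTES] = { 1, 1, 0, 0, 0, 0, 1, 1 };

	for (int i = 0; i < NPTES; i++) {
		if (!pte[i])
			continue;	/* analogous to the next_pte skip */
		printf("visit pte %d\n", i);
	}
	return 0;
}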