From: Lance Yang <ioworker0@gmail.com>
To: david@redhat.com
Cc: 21cnbao@gmail.com, akpm@linux-foundation.org, baolin.wang@linux.alibaba.com,
    chrisl@kernel.org, ioworker0@gmail.com, kasong@tencent.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-riscv@lists.infradead.org,
    lorenzo.stoakes@oracle.com, ryan.roberts@arm.com, v-songbaohua@oppo.com,
    x86@kernel.org, ying.huang@intel.com, zhengtangquan@oppo.com
Subject: Re: [PATCH v4 3/4] mm: Support batched unmap for lazyfree large folios during reclamation
Date: Wed, 25 Jun 2025 00:25:03 +0800
Message-ID: <20250624162503.78957-1-ioworker0@gmail.com>
In-Reply-To: <2c19a6cf-0b42-477b-a672-ed8c1edd4267@redhat.com>
References: <2c19a6cf-0b42-477b-a672-ed8c1edd4267@redhat.com>

On 2025/6/24 23:34, David Hildenbrand wrote:
> On 24.06.25 17:26, Lance Yang wrote:
>> On 2025/6/24 20:55, David Hildenbrand wrote:
>>> On 14.02.25 10:30, Barry Song wrote:
>>>> From: Barry Song
>> [...]
>>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>>> index 89e51a7a9509..8786704bd466 100644
>>>> --- a/mm/rmap.c
>>>> +++ b/mm/rmap.c
>>>> @@ -1781,6 +1781,25 @@ void folio_remove_rmap_pud(struct folio *folio,
>>>> struct page *page,
>>>>   #endif
>>>>   }
>>>> +/* We support batch unmapping of PTEs for lazyfree large folios */
>>>> +static inline bool can_batch_unmap_folio_ptes(unsigned long addr,
>>>> +			struct folio *folio, pte_t *ptep)
>>>> +{
>>>> +	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>>>> +	int max_nr = folio_nr_pages(folio);
>>>
>>> Let's assume we have the first page of a folio mapped at the last page
>>> table entry in our page table.
>>
>> Good point. I'm curious if it is something we've seen in practice ;)
>
> I challenge you to write a reproducer :P I assume it might be doable
> through simple mremap().
>
>>
>>> What prevents folio_pte_batch() from reading outside the page table?
>>
>> Assuming such a scenario is possible, to prevent any chance of an
>> out-of-bounds read, how about this change:
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index fb63d9256f09..9aeae811a38b 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1852,6 +1852,25 @@ static inline bool
>> can_batch_unmap_folio_ptes(unsigned long addr,
>>   	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>>   	int max_nr = folio_nr_pages(folio);
>>   	pte_t pte = ptep_get(ptep);
>> +	unsigned long end_addr;
>> +
>> +	/*
>> +	 * To batch unmap, the entire folio's PTEs must be contiguous
>> +	 * and mapped within the same PTE page table, which corresponds to
>> +	 * a single PMD entry. Before calling folio_pte_batch(), which does
>> +	 * not perform boundary checks itself, we must verify that the
>> +	 * address range covered by the folio does not cross a PMD boundary.
>> +	 */
>> +	end_addr = addr + (max_nr * PAGE_SIZE) - 1;
>> +
>> +	/*
>> +	 * A fast way to check for a PMD boundary cross is to align both
>> +	 * the start and end addresses to the PMD boundary and see if they
>> +	 * are different. If they are, the range spans across at least two
>> +	 * different PMD-managed regions.
>> +	 */
>> +	if ((addr & PMD_MASK) != (end_addr & PMD_MASK))
>> +		return false;
>
> You should not be messing with max_nr = folio_nr_pages(folio) here at
> all. folio_pte_batch() takes care of that.
>
> Also, way too many comments ;)
>
> You may only batch within a single VMA and within a single page table.
>
> So simply align the addr up to the next PMD, and make sure it does not
> exceed the vma end.
>
> ALIGN and friends can help avoiding excessive comments.

Thanks! How about this updated version based on your suggestion:

diff --git a/mm/rmap.c b/mm/rmap.c
index fb63d9256f09..241d55a92a47 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1847,12 +1847,25 @@ void folio_remove_rmap_pud(struct folio *folio, struct page *page,
 
 /* We support batch unmapping of PTEs for lazyfree large folios */
 static inline bool can_batch_unmap_folio_ptes(unsigned long addr,
-			struct folio *folio, pte_t *ptep)
+			struct folio *folio, pte_t *ptep,
+			struct vm_area_struct *vma)
 {
 	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+	unsigned long next_pmd, vma_end, end_addr;
 	int max_nr = folio_nr_pages(folio);
 	pte_t pte = ptep_get(ptep);
 
+	/*
+	 * Limit the batch scan within a single VMA and within a single
+	 * page table.
+	 */
+	vma_end = vma->vm_end;
+	next_pmd = ALIGN(addr + 1, PMD_SIZE);
+	end_addr = addr + (unsigned long)max_nr * PAGE_SIZE;
+
+	if (end_addr > min(next_pmd, vma_end))
+		return false;
+
 	if (!folio_test_anon(folio) || folio_test_swapbacked(folio))
 		return false;
 	if (pte_unused(pte))
@@ -2025,7 +2038,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			folio_mark_dirty(folio);
 		} else if (likely(pte_present(pteval))) {
 			if (folio_test_large(folio) && !(flags & TTU_HWPOISON) &&
-			    can_batch_unmap_folio_ptes(address, folio, pvmw.pte))
+			    can_batch_unmap_folio_ptes(address, folio, pvmw.pte, pvmw.vma))
 				nr_pages = folio_nr_pages(folio);
 			end_addr = address + nr_pages * PAGE_SIZE;
 			flush_cache_range(vma, address, end_addr);
--

Thanks,
Lance