From: Jann Horn <jannh@google.com>
Date: Thu, 17 Oct 2024 20:43:43 +0200
Subject: Re: [PATCH v1 5/7] mm: pgtable: try to reclaim empty PTE page in madvise(MADV_DONTNEED)
To: Qi Zheng, Catalin Marinas, Will Deacon
Cc: david@redhat.com, hughd@google.com, willy@infradead.org, mgorman@suse.de, muchun.song@linux.dev, vbabka@kernel.org, akpm@linux-foundation.org, zokeefe@google.com, rientjes@google.com, peterx@redhat.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, x86@kernel.org
In-Reply-To: <6c7fe15b0434a08a287c400869f9ba434e1a8fa3.1729157502.git.zhengqi.arch@bytedance.com>

+arm64 maintainers in case they have opinions on the break-before-make aspects

On Thu, Oct 17, 2024 at 11:48 AM Qi Zheng <zhengqi.arch@bytedance.com> wrote:
> Now, in order to pursue high performance, applications mostly use some
> high-performance user-mode memory allocators, such as jemalloc or
> tcmalloc. These memory allocators use madvise(MADV_DONTNEED or MADV_FREE)
> to release physical memory, but neither MADV_DONTNEED nor MADV_FREE will
> release page table memory, which may cause huge page table memory usage.
>
> The following is a memory usage snapshot of one process, which actually
> happened on our server:
>
>         VIRT:  55t
>         RES:   590g
>         VmPTE: 110g
>
> In this case, most of the page table entries are empty. For such a PTE
> page where all entries are empty, we can actually free it back to the
> system for others to use.
>
> As a first step, this commit aims to synchronously free empty PTE pages
> in the madvise(MADV_DONTNEED) case. We detect and free empty PTE pages
> in zap_pte_range(), and add zap_details.reclaim_pt to exclude cases
> other than madvise(MADV_DONTNEED).
>
> Once an empty PTE page is detected, we first try to take the pmd lock
> while still holding the pte lock. If that succeeds, we clear the pmd
> entry directly (fast path). Otherwise, we wait until the pte lock is
> released, then re-take the pmd and pte locks and loop PTRS_PER_PTE
> times checking pte_none() to re-verify that the PTE page is empty
> before freeing it (slow path).
>
> For other cases such as madvise(MADV_FREE), consider scanning and
> freeing empty PTE pages asynchronously in the future.

One thing I find somewhat scary about this is that it makes it
possible to free page tables in anonymous mappings, and to free page
tables of VMAs with an ->anon_vma, which was not possible before.
Have you checked all the current users of pte_offset_map_ro_nolock(),
pte_offset_map_rw_nolock(), and pte_offset_map() to make sure none of
them assume that this can't happen?

For example, pte_offset_map_rw_nolock() is called from move_ptes(),
with a comment basically talking about how this is safe *because only
khugepaged can remove page tables*.

> diff --git a/mm/memory.c b/mm/memory.c
> index cc89ede8ce2ab..77774b34f2cde 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1437,7 +1437,7 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
>  static inline bool should_zap_cows(struct zap_details *details)
>  {
>         /* By default, zap all pages */
> -       if (!details)
> +       if (!details || details->reclaim_pt)
>                 return true;
>
>         /* Or, we zap COWed pages only if the caller wants to */
> @@ -1611,8 +1611,18 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>         pte_t *start_pte;
>         pte_t *pte;
>         swp_entry_t entry;
> +       pmd_t pmdval;
> +       bool can_reclaim_pt = false;
> +       bool direct_reclaim;
> +       unsigned long start = addr;
>         int nr;
>
> +       if (details && details->reclaim_pt)
> +               can_reclaim_pt = true;
> +
> +       if ((ALIGN_DOWN(end, PMD_SIZE)) - (ALIGN(start, PMD_SIZE)) < PMD_SIZE)
> +               can_reclaim_pt = false;

Does this check actually work?
Assuming we're on x86, if you pass in start=0x1000 and end=0x2000, if
I understand correctly, ALIGN_DOWN(end, PMD_SIZE) will be 0, while
ALIGN(start, PMD_SIZE) will be 0x200000, and so we will check:

if (0 - 0x200000 < PMD_SIZE)

which is

if (0xffffffffffe00000 < 0x200000)

which is false?
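To make that concrete, here is a standalone userspace sketch of the
wraparound (not kernel code; ALIGN()/ALIGN_DOWN() are re-defined
locally to mimic the kernel macros, and PMD_SIZE is hardcoded to
x86-64's 2MiB):

#include <stdio.h>

/* Hypothetical stand-ins for the kernel macros, assuming x86-64. */
#define PMD_SIZE         0x200000UL /* 2 MiB */
#define ALIGN(x, a)      (((x) + ((a) - 1)) & ~((a) - 1))
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

int main(void)
{
        unsigned long start = 0x1000, end = 0x2000;

        /*
         * ALIGN_DOWN(end, PMD_SIZE) is 0 and ALIGN(start, PMD_SIZE) is
         * 0x200000, so the unsigned subtraction wraps around to
         * 0xffffffffffe00000, which is not less than PMD_SIZE: the
         * sub-PMD range is *not* excluded from reclaim.
         */
        if (ALIGN_DOWN(end, PMD_SIZE) - ALIGN(start, PMD_SIZE) < PMD_SIZE)
                printf("range excluded from pt reclaim\n");
        else
                printf("range NOT excluded from pt reclaim\n");
        return 0;
}

This prints "range NOT excluded from pt reclaim": the subtraction is
done on unsigned longs, so it wraps instead of going negative.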
>  retry:
>         tlb_change_page_size(tlb, PAGE_SIZE);
>         init_rss_vec(rss);
> @@ -1641,6 +1651,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>                         nr = zap_present_ptes(tlb, vma, pte, ptent, max_nr,
>                                               addr, details, rss, &force_flush,
>                                               &force_break, &is_pt_unreclaimable);
> +                       if (is_pt_unreclaimable)
> +                               set_pt_unreclaimable(&can_reclaim_pt);
>                         if (unlikely(force_break)) {
>                                 addr += nr * PAGE_SIZE;
>                                 break;
> @@ -1653,8 +1665,10 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>                            is_device_exclusive_entry(entry)) {
>                         page = pfn_swap_entry_to_page(entry);
>                         folio = page_folio(page);
> -                       if (unlikely(!should_zap_folio(details, folio)))
> +                       if (unlikely(!should_zap_folio(details, folio))) {
> +                               set_pt_unreclaimable(&can_reclaim_pt);
>                                 continue;
> +                       }
>                         /*
>                          * Both device private/exclusive mappings should only
>                          * work with anonymous page so far, so we don't need to
> @@ -1670,14 +1684,18 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>                         max_nr = (end - addr) / PAGE_SIZE;
>                         nr = swap_pte_batch(pte, max_nr, ptent);
>                         /* Genuine swap entries, hence a private anon pages */
> -                       if (!should_zap_cows(details))
> +                       if (!should_zap_cows(details)) {
> +                               set_pt_unreclaimable(&can_reclaim_pt);
>                                 continue;
> +                       }
>                         rss[MM_SWAPENTS] -= nr;
>                         free_swap_and_cache_nr(entry, nr);
>                 } else if (is_migration_entry(entry)) {
>                         folio = pfn_swap_entry_folio(entry);
> -                       if (!should_zap_folio(details, folio))
> +                       if (!should_zap_folio(details, folio)) {
> +                               set_pt_unreclaimable(&can_reclaim_pt);
>                                 continue;
> +                       }
>                         rss[mm_counter(folio)]--;
>                 } else if (pte_marker_entry_uffd_wp(entry)) {
>                         /*
> @@ -1685,21 +1703,29 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>                          * drop the marker if explicitly requested.
>                          */
>                         if (!vma_is_anonymous(vma) &&
> -                           !zap_drop_file_uffd_wp(details))
> +                           !zap_drop_file_uffd_wp(details)) {
> +                               set_pt_unreclaimable(&can_reclaim_pt);
>                                 continue;
> +                       }
>                 } else if (is_hwpoison_entry(entry) ||
>                            is_poisoned_swp_entry(entry)) {
> -                       if (!should_zap_cows(details))
> +                       if (!should_zap_cows(details)) {
> +                               set_pt_unreclaimable(&can_reclaim_pt);
>                                 continue;
> +                       }
>                 } else {
>                         /* We should have covered all the swap entry types */
>                         pr_alert("unrecognized swap entry 0x%lx\n", entry.val);
>                         WARN_ON_ONCE(1);
>                 }
>                 clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
> -               zap_install_uffd_wp_if_needed(vma, addr, pte, nr, details, ptent);
> +               if (zap_install_uffd_wp_if_needed(vma, addr, pte, nr, details, ptent))
> +                       set_pt_unreclaimable(&can_reclaim_pt);
>         } while (pte += nr, addr += PAGE_SIZE * nr, addr != end);
>
> +       if (addr == end && can_reclaim_pt)
> +               direct_reclaim = try_get_and_clear_pmd(mm, pmd, &pmdval);
> +
>         add_mm_rss_vec(mm, rss);
>         arch_leave_lazy_mmu_mode();
>
> @@ -1724,6 +1750,13 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>                 goto retry;
>         }
>
> +       if (can_reclaim_pt) {
> +               if (direct_reclaim)
> +                       free_pte(mm, start, tlb, pmdval);
> +               else
> +                       try_to_free_pte(mm, pmd, start, tlb);
> +       }
> +
>         return addr;
>  }
>
> diff --git a/mm/pt_reclaim.c b/mm/pt_reclaim.c
> new file mode 100644
> index 0000000000000..fc055da40b615
> --- /dev/null
> +++ b/mm/pt_reclaim.c
> @@ -0,0 +1,68 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include
> +#include
> +#include
> +
> +#include "internal.h"
> +
> +bool try_get_and_clear_pmd(struct mm_struct *mm, pmd_t *pmd, pmd_t *pmdval)
> +{
> +       spinlock_t *pml = pmd_lockptr(mm, pmd);
> +
> +       if (!spin_trylock(pml))
> +               return false;
> +
> +       *pmdval = pmdp_get_lockless(pmd);
> +       pmd_clear(pmd);
> +       spin_unlock(pml);
> +
> +       return true;
> +}
> +
> +void free_pte(struct mm_struct *mm, unsigned long addr, struct mmu_gather *tlb,
> +             pmd_t pmdval)
> +{
> +       pte_free_tlb(tlb, pmd_pgtable(pmdval), addr);
> +       mm_dec_nr_ptes(mm);
> +}
> +
> +void try_to_free_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
> +                    struct mmu_gather *tlb)
> +{
> +       pmd_t pmdval;
> +       spinlock_t *pml, *ptl;
> +       pte_t *start_pte, *pte;
> +       int i;
> +
> +       start_pte = pte_offset_map_rw_nolock(mm, pmd, addr, &pmdval, &ptl);
> +       if (!start_pte)
> +               return;
> +
> +       pml = pmd_lock(mm, pmd);
> +       if (ptl != pml)
> +               spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
> +
> +       if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd))))
> +               goto out_ptl;
> +
> +       /* Check if it is empty PTE page */
> +       for (i = 0, pte = start_pte; i < PTRS_PER_PTE; i++, pte++) {
> +               if (!pte_none(ptep_get(pte)))
> +                       goto out_ptl;
> +       }
> +       pte_unmap(start_pte);
> +
> +       pmd_clear(pmd);
> +
> +       if (ptl != pml)
> +               spin_unlock(ptl);
> +       spin_unlock(pml);

At this point, you have cleared the PMD and dropped the locks
protecting against concurrency, but have not yet done a TLB flush. If
another thread concurrently repopulates the PMD at this point, can we
get incoherent TLB state in a way that violates the arm64
break-before-make rule?

Though I guess we can probably already violate break-before-make if
MADV_DONTNEED races with a pagefault, since zap_present_folio_ptes()
does not seem to set "force_flush" when zapping anon PTEs...

(I realize you're only enabling this for x86 for now, but we should
probably make sure the code is not arch-dependent in subtle
undocumented ways...)
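(For reference, khugepaged avoids this window in the analogous
situation by clearing the PMD and flushing the TLB in a single step
via pmdp_collapse_flush() while the page table locks are still held.
A rough sketch of what that ordering would look like here — purely
illustrative, and assuming a vma pointer were plumbed into
try_to_free_pte(), which it currently is not:

        /*
         * Sketch, not the patch's code: clear the PMD entry and flush
         * the TLB for its range in one step, while both pml and ptl
         * are still held, so no other CPU can repopulate the cleared
         * PMD while stale translations may still be cached.
         */
        pmdval = pmdp_collapse_flush(vma, addr, pmd);

        if (ptl != pml)
                spin_unlock(ptl);
        spin_unlock(pml);

        free_pte(mm, addr, tlb, pmdval);

That keeps the clear-and-flush atomic with respect to the locks, at
the cost of a synchronous flush.)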
> +       free_pte(mm, addr, tlb, pmdval);
> +
> +       return;
> +out_ptl:
> +       pte_unmap_unlock(start_pte, ptl);
> +       if (pml != ptl)
> +               spin_unlock(pml);
> +}
> --
> 2.20.1
>