Message-ID: <24480cd6-0a13-4534-8d64-4517e73f0070@bytedance.com>
Date: Fri, 14 Jun 2024 11:32:44 +0800
Subject: Re: [RFC PATCH 0/3] asynchronously scan and free empty user PTE pages
From: Qi Zheng
To: David Hildenbrand
Cc: hughd@google.com, willy@infradead.org, mgorman@suse.de,
 muchun.song@linux.dev, akpm@linux-foundation.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
References: <02f8cbd0-8b2b-4c2d-ad96-f854d25bf3c2@redhat.com>
 <2cda0af6-8fde-4093-b615-7979744d6898@redhat.com>
Hi David,

How about starting with this:

a. for the MADV_DONTNEED case, try synchronous reclaim as you said

b. for the MADV_FREE case, add a madvise(MADV_PT_RECLAIM) option to mark
   this vma, then add its corresponding mm to a global list, and then
   traverse the list and reclaim it when memory is tight and we enter
   the system reclaim path.
   (If this option is meant for synchronous reclaim as you said, then
   the user-mode program may need to start a thread to make a cyclic
   call. I'm not sure this usage makes sense. If it does, I can also
   implement such an option.)

c. for the s390 case you mentioned, maybe we can add a CONFIG_FREE_PT
   first, and s390 will not select this config until the problem is
   solved.

d. for the lockless scan, we can use pte_offset_map_nolock() instead of
   disabling IRQs to scan, because we hold the RCU read lock at that
   point, which also ensures that the PTE page is not freed.

Thanks,
Qi

On 2024/6/13 19:59, Qi Zheng wrote:
> Hi,
>
> On 2024/6/13 18:25, David Hildenbrand wrote:
>> On 13.06.24 11:32, Qi Zheng wrote:
>>> Hi David,
>>>
>>> Thanks for such a quick reply!
>>
>> I appreciate you working on this :)
>>
>>> On 2024/6/13 17:04, David Hildenbrand wrote:
>>>> On 13.06.24 10:38, Qi Zheng wrote:
>>>>> Hi all,
>>>
>>> [...]
>>>
>>>>> 3. Implementation
>>>>> =================
>>>>>
>>>>> For empty user PTE pages, we don't actually need to free them
>>>>> immediately, nor do we need to free all of them.
>>>>>
>>>>> Therefore, in this patchset, we register a task_work for the user
>>>>> tasks to asynchronously scan and free empty PTE pages when they
>>>>> return to user space. (The scanning time interval and address
>>>>> space size can be adjusted.)
>>>>
>>>> The question is if we really have to scan asynchronously, or if it
>>>> would be reasonable for most use cases to trigger a
>>>> madvise(MADV_PT_RECLAIM) every now and then. For virtio-mem, and
>>>> likely most memory allocators, that might be feasible, and valuable
>>>> independent of system-wide automatic scanning.
>>>
>>> Agree, I also think it is possible to add always && madvise modes
>>> similar to THP.
>>
>> My thinking is, we start with a madvise(MADV_PT_RECLAIM) that will
>> synchronously try to reclaim page tables without any asynchronous
>> work.
>>
>> Similar to MADV_COLLAPSE that only does synchronous work. Of course,
>
> This is feasible, but I worry that some user-mode programs may not be
> able to determine when to call it.
>
> My previous idea was to do something similar to madvise(MADV_HUGEPAGE):
> just mark the vma as being able to reclaim its page tables, and then
> hand it over to the background thread for asynchronous reclaim.
>
>> if we don't need any heavy locking for reclaim, we might also just
>> try reclaiming during MADV_DONTNEED when spanning a complete page
>
> I think the locks held by the current solution are not too heavy and
> should be acceptable.
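
(To make the locking point a bit more concrete: the sequence I have in
mind for freeing one empty PTE page is roughly the following. This is
simplified pseudo-C modeled on retract_page_tables() in khugepaged, not
the code from the RFC patches; the function name is made up, and vma
filtering, mmu notifiers, the s390 PGSTE check etc. are all left out.)

/*
 * Sketch only: free one empty PTE page under the pmd lock + pte lock,
 * then defer the actual freeing via RCU.
 */
static bool reclaim_empty_pte_table(struct vm_area_struct *vma,
                                    pmd_t *pmd, unsigned long addr)
{
        struct mm_struct *mm = vma->vm_mm;
        spinlock_t *pml, *ptl;
        pmd_t pmdval;
        pte_t *pte;
        int i;

        pml = pmd_lock(mm, pmd);
        ptl = pte_lockptr(mm, pmd);
        if (ptl != pml)
                spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);

        pmdval = pmdp_get(pmd);
        if (!pmd_present(pmdval) || pmd_trans_huge(pmdval))
                goto out_unlock;

        /* Reclaimable only if every entry in the PTE page is none. */
        pte = pte_offset_map(pmd, addr & PMD_MASK);
        if (!pte)
                goto out_unlock;
        for (i = 0; i < PTRS_PER_PTE; i++) {
                if (!pte_none(ptep_get(pte + i))) {
                        pte_unmap(pte);
                        goto out_unlock;
                }
        }
        pte_unmap(pte);

        /*
         * Clear the pmd entry and flush the TLB; concurrent walkers
         * will now see pmd_none() or fail pte_offset_map_lock() and
         * retry.
         */
        pmdval = pmdp_collapse_flush(vma, addr & PMD_MASK, pmd);

        if (ptl != pml)
                spin_unlock(ptl);
        spin_unlock(pml);

        /* Free the PTE page via RCU so lockless walkers stay safe. */
        pte_free_defer(mm, pmd_pgtable(pmdval));
        return true;

out_unlock:
        if (ptl != pml)
                spin_unlock(ptl);
        spin_unlock(pml);
        return false;
}

Both spinlocks are only held across one scan of the PTE page plus the
pmd clear and TLB flush, which is why I don't expect this to be too
heavy.
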
>
> But for the MADV_FREE case, it still needs to be handled by
> madvise(MADV_PT_RECLAIM) or asynchronous work.
>
>> table. That won't sort out all cases where reclaim is possible, but
>> with both approaches we could cover quite a lot that were discovered
>> to really result in a lot of empty page tables.
>
> Yes, agree.
>
>> On top, we might implement some asynchronous scanning later. This is,
>> of course, TBD. Maybe we could wire up other page table scanners
>> (khugepaged?) to simply reclaim empty page tables they find as well?
>
> This is also an idea. Another option may be some pgtable scanning
> paths, such as MGLRU.
>
>>>>> When scanning, we can filter out some unsuitable vmas:
>>>>>
>>>>>       - VM_HUGETLB vma
>>>>>       - VM_UFFD_WP vma
>>>>
>>>> Why is UFFD_WP unsuitable? It should be suitable as long as you make
>>>> sure to really only remove page tables that are all pte_none().
>>>
>>> Got it, I mistakenly thought pte_none() covered the pte marker case
>>> until I saw pte_none_mostly().
>>
>> I *think* there is one nasty detail, and we might need an arch
>> callback to test if a pte *really* can be reclaimed: for example,
>> s390x might require us to keep some !pte_none() page tables.
>>
>> While a PTE might be none, the s390x PGSTE (think of it as another
>> 8 bytes per PTE entry stored right next to the actual page table
>> entries) might hold data we might have to preserve for our KVM guest.
>
> Oh, thanks for adding this background information!
>
>> But that should be easy to wire up.
>
> That's good!
>
>>>>>       - etc
>>>>>
>>>>> And we can also skip PTE pages that span multiple vmas.
>>>>>
>>>>> For locking:
>>>>>
>>>>>       - use the mmap read lock to traverse the vma tree and pgtable
>>>>>       - use the pmd lock for clearing the pmd entry
>>>>>       - use the pte lock for checking for an empty PTE page, and
>>>>>         release it after clearing the pmd entry; then we can
>>>>>         capture the changed pmd in pte_offset_map_lock() etc. after
>>>>>         taking this pte lock. Thanks to this, we don't need to hold
>>>>>         the rmap-related locks.
>>>>>       - users of pte_offset_map_lock() etc. all expect the PTE page
>>>>>         to be kept stable by the RCU read lock, so use
>>>>>         pte_free_defer() to free PTE pages.
>>>>
>>>> I once had a prototype that would scan similar to GUP-fast, using
>>>> the mmap lock in read mode and disabling local IRQs and then walking
>>>> the page table locklessly (no PTLs). Only when identifying an empty
>>>> page and ripping out the page table, it would have to do more heavy
>>>> locking (back when we required the mmap lock in write mode and other
>>>> things).
>>>
>>> Maybe the mmap write lock is not necessary, we can protect it using
>>> the pmd lock && pte lock as above.
>>
>> Yes, I'm hoping we can do that, that will solve a lot of possible
>> issues.
>
> Yes, I think the protection provided by the locks above is enough. Of
> course, it would be better if more people could double-check it.
>
>>>> I can try digging up that patch if you're interested.
>>>
>>> Yes, that would be better, maybe it can provide more inspiration!
>>
>> I pushed it to
>>      https://github.com/davidhildenbrand/linux/tree/page_table_reclaim
>>
>> I suspect it's a non-working version (and I assume the locking is
>> broken, there are no VMA checks, etc.), it's an old prototype. Just to
>> give you an idea about the lockless scanning and how I started by
>> triggering reclaim only when kicked off by user space.
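
(This is also what I was getting at with point d. at the top: instead
of disabling IRQs like the prototype, the scan itself could look
roughly like the sketch below. pte_offset_map_nolock() rechecks the pmd
and takes the RCU read lock internally until pte_unmap(), so the PTE
page cannot be freed under us as long as PTE pages are always freed via
pte_free_defer(). The function name is invented for illustration; the
actual pmd clearing would still go through the locked path sketched
earlier.)

/*
 * Sketch only: check whether a PTE page looks empty without disabling
 * IRQs and without taking any page table locks.
 */
static bool pte_table_looks_empty(struct mm_struct *mm, pmd_t *pmd,
                                  unsigned long addr)
{
        spinlock_t *ptl;
        pte_t *pte;
        int i;

        /* Returns NULL if the page table is already gone or is a THP. */
        pte = pte_offset_map_nolock(mm, pmd, addr & PMD_MASK, &ptl);
        if (!pte)
                return false;

        for (i = 0; i < PTRS_PER_PTE; i++) {
                if (!pte_none(ptep_get(pte + i))) {
                        pte_unmap(pte);
                        return false;
                }
        }
        pte_unmap(pte);

        /* Only a hint: recheck under the pmd/pte locks before freeing. */
        return true;
}
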
>
> Many thanks! But I'm worried that on some platforms disabling IRQs
> might be more expensive than holding the lock, such as arm64? Not
> sure.
>
>>>> We'll have to double check whether all anon memory cases can
>>>> *properly* handle pte_offset_map_lock() failing (not just handling
>>>> it, but doing the right thing; most of that anon-only code didn't
>>>> ever run into that issue so far, so these code paths were likely
>>>> never triggered).
>>>
>>> Yeah, I'll keep checking this out too.
>>>
>>>>> For the path that will also free PTE pages in THP, we need to
>>>>> recheck whether the content of the pmd entry is valid after taking
>>>>> the pmd lock or pte lock.
>>>>>
>>>>> 4. TODO
>>>>> =======
>>>>>
>>>>> Some applications may be concerned about the overhead of scanning
>>>>> and rebuilding page tables, so the following features are
>>>>> considered for implementation in the future:
>>>>>
>>>>>       - add a per-process switch (via prctl)
>>>>>       - add a madvise option (like THP)
>>>>>       - add MM_PGTABLE_SCAN_DELAY/MM_PGTABLE_SCAN_SIZE controls
>>>>>         (via procfs files)
>>>>>
>>>>> Perhaps we can also add a refcount to PTE pages in the future,
>>>>> which would help improve the scanning speed.
>>>>
>>>> I didn't like the added complexity last time, and the problem of
>>>> handling situations where we squeeze multiple page tables into a
>>>> single "struct page".
>>>
>>> OK, except for the refcount, do you think the other three todos
>>> above are still worth doing?
>>
>> I think the question is where we start: for example, only synchronous
>> reclaim vs. asynchronous reclaim. Synchronous reclaim won't really
>> affect workloads that do not actively trigger it, so it raises a lot
>> fewer eyebrows. ... and some user space might have a good idea where
>> it makes sense to try to reclaim, and when.
>>
>> So the other things you note here rather affect asynchronous reclaim,
>> and might be reasonable in that context. But I'm not sure if we
>> should start with doing things asynchronously.
>
> I think synchronous and asynchronous reclaim have their own advantages
> and disadvantages, and are complementary. Perhaps they can be
> implemented at the same time?
>
> Thanks,
> Qi
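
P.S. to make the synchronous madvise option concrete from the
user-space side, I imagine usage roughly like the snippet below.
MADV_PT_RECLAIM is of course only a proposed advice value; the constant
and the number used below are placeholders, not ABI.

/* Hypothetical user-space caller of the proposed MADV_PT_RECLAIM. */
#include <stdio.h>
#include <sys/mman.h>

#ifndef MADV_PT_RECLAIM
#define MADV_PT_RECLAIM 26      /* placeholder, not a real ABI value */
#endif

/*
 * Called by an allocator after it has MADV_DONTNEED'd/MADV_FREE'd a
 * large, now mostly-empty range that it expects to stay sparse.
 */
static void hint_pt_reclaim(void *start, size_t len)
{
        if (madvise(start, len, MADV_PT_RECLAIM))
                perror("madvise(MADV_PT_RECLAIM)");
}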