From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <1fd26a83-8e6f-4b96-9d27-dd46de9488cc@arm.com>
Date: Wed, 31 Jan 2024 10:26:13 +0000
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1 0/9] mm/memory: optimize unmap/zap with PTE-mapped THP
To: David Hildenbrand, Yin Fengwei, linux-kernel@vger.kernel.org,
 Linus Torvalds, Michal Hocko
Cc: linux-mm@kvack.org, Andrew Morton, Matthew Wilcox, Catalin Marinas,
 Will Deacon, "Aneesh Kumar K.V", Nick Piggin, Peter Zijlstra,
 Michael Ellerman, Christophe Leroy, "Naveen N.
Rao" , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Arnd Bergmann , linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org, "Huang, Ying" References: <20240129143221.263763-1-david@redhat.com> <4ef64fd1-f605-4ddf-82e6-74b5e2c43892@intel.com> Content-Language: en-GB From: Ryan Roberts In-Reply-To: Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-Rspamd-Queue-Id: 37D8AA0008 X-Rspam-User: X-Rspamd-Server: rspam02 X-Stat-Signature: r7zbbmhihhcyboir7pa35s5s3ghw1aam X-HE-Tag: 1706696784-361645 X-HE-Meta: U2FsdGVkX18YdWtNxDjwmEy23xjV/U1LBCddG14r40tD9uCUiYgFwgOkc9n0NNJqn6uMzf9GExlap9hb/9OyZt62jtxZ95o/LHy1sdv+t37sM7+wW3/v9GrCoyFOQF3lcuo/GbHuqcPeESRdDDWyCYr6Inond0TS80Jc6IrOUan2HBqMoggeTsKiMOuLHh+vMLdu9GSylf8ySHtpn+taQchIr5aWtAV04mjPjVYdGZF9hHqOXwMEuNLRosq3jNZ/ORHHNa8jgNMZJcnIl2Naez6kuYo7tTuLfdTowesnO+3j9oIhJ6QcGafKh0p9z75/5lhbcBLwe3/PARYZTSNoV+i/vLsRquyh0fdbSsVc5Mzt2HKdL78OQpZHs2euuUytjFeBa3usEa+6Pqpo/60GwGqCEEr2J7raFaVnNRJRkFwvQ50XRC30UaPK1ZVOQ7BtVlRmvtp0WIDhw/tfeicW+eoSP4ZLvAMMIYHKKkZ+ybNIBag6IAEmSBMNf7lxctd+ozbfyJIFOF5hgpqUPPuUO2VOKLtFmPTXhz53+P96MMqsJaptWqvuVfST0LTBgSQkYh1Fqi+WNjieoYfm2/pMRhTsyRpOol0MzRPu/XZEtJtxTSaOhY3AJ8BXz7KRqkcHJiqZYSyaqwhzVIwESKDoHfiUPYx7PW97PP/YNguHX77Efi7q0T71leN2l853JZgnZWNnoUbStxeae1nYOBHum3CkNEIKTSvgMuakNcJg3Hjt1TI0Teok/ENY4UHXT6CUIWNHom/QgkrKio4b0/HrjmKUNAz4sQ9DJG4wdUPYpcgQnqhn18165lN05JobKyk5YZPjfJPL6qTOcFah97uh8c8Z4ThSuMt+Z28TXJpfbjau5ftkOasHNwi0p8jMrbvVFqeP+nM0oPn3zyTTBQUYcjWt0IvpBsO1mH9lKGV7FBCoeePPXHC39vDT5KWp58iACqZoXqOVdYrMGF84Lfo +Xw08+wM Ek5+6wudTf8537T22zRVVI6ti5+u9e0YdyiFHCpenzdJ11i7x8teTF7YL3laORBCFZnmd46Xt8jHeWeWqVipGWebdpg+Kp3aXbK0HQjA8VLhVDmFyrfm+AGAi6eqtmkaHpsmSXE5tpTAKuVCSAPMJXgi/alvC/G2FEhv6YCo9D4J4fyXZ3C4tVPqQavRlYo293NlexOw54yNoST1tD5j9SNMFG0KwSftFa3t05YVTVCOzZ0R8+ebYYbwju7UoZuGlV0xz X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On 31/01/2024 10:16, David Hildenbrand wrote: > On 31.01.24 03:20, Yin Fengwei wrote: >> On 1/29/24 22:32, David Hildenbrand wrote: >>> This series is based on [1] and must be applied on top of it. >>> Similar to what we did with fork(), let's implement PTE batching >>> during unmap/zap when processing PTE-mapped THPs. >>> >>> We collect consecutive PTEs that map consecutive pages of the same large >>> folio, making sure that the other PTE bits are compatible, and (a) adjust >>> the refcount only once per batch, (b) call rmap handling functions only >>> once per batch, (c) perform batch PTE setting/updates and (d) perform TLB >>> entry removal once per batch. >>> >>> Ryan was previously working on this in the context of cont-pte for >>> arm64, int latest iteration [2] with a focus on arm6 with cont-pte only. >>> This series implements the optimization for all architectures, independent >>> of such PTE bits, teaches MMU gather/TLB code to be fully aware of such >>> large-folio-pages batches as well, and amkes use of our new rmap batching >>> function when removing the rmap. >>> >>> To achieve that, we have to enlighten MMU gather / page freeing code >>> (i.e., everything that consumes encoded_page) to process unmapping >>> of consecutive pages that all belong to the same large folio. I'm being >>> very careful to not degrade order-0 performance, and it looks like I >>> managed to achieve that. >> > > Let's CC Linus and Michal to make sure I'm not daydreaming. 
>
> Relevant patch:
>   https://lkml.kernel.org/r/20240129143221.263763-8-david@redhat.com
>
> Context: I'm adjusting MMU gather code to support batching of consecutive pages
> that belong to the same large folio, when unmapping/zapping PTEs.
>
> For small folios, there is no (relevant) change.
>
> Imagine we have a PTE-mapped THP (2M folio -> 512 pages) and zap all 512 PTEs:
> Instead of adding 512 individual encoded_page entries, we add a combined entry
> that expresses "page+nr_pages". That allows for "easily" adding various other
> per-folio batching (refcount, rmap, swap freeing).
>
> The implication is that we can now effectively batch more pages with large
> folios, exceeding the old 10000 limit. The number of involved *folios* does not
> increase, though.
>
>> One possible scenario:
>> If every folio is a 2M folio, then one full batch could hold 510M of memory.
>> Is that too much, given that before, one full batch could only hold
>> (2M - 4096 * 2) of memory?
>
> Excellent point, I think there are three parts to it:
>
> (1) Batch pages / folio fragments per batch page
>
> Before this change (and with 4k folios) we have exactly one page (4k) per
> encoded_page entry in the batch. Now we can have (with 2M folios) 512 pages
> for every two encoded_page entries (page+nr_pages) in a batch page. So on
> average ~256 pages per encoded_page entry.
>
> So one batch page can now store in the worst case ~256 times the number of
> pages, but the number of folio fragments ("page+nr_pages") would not increase.
>
> The time it takes to perform the actual page freeing of a batch will not be 256
> times higher -- the time is expected to be much closer to the old time (i.e.,
> not freeing more folios).

IIRC there is an option to zero memory when it is freed back to the buddy? So
that could be a place where time is proportional to size rather than
proportional to folio count? But I think that option is intended for debug
only? So perhaps not a problem in practice?

> (2) Delayed rmap handling
>
> We limit batching early (see tlb_next_batch()) when we have delayed rmap
> pending. The reason being that we don't want to check many entries for whether
> they require delayed rmap handling while still holding the page table lock (see
> tlb_flush_rmaps()), because we have to remove the rmap before dropping the PTL.
>
> Note that we perform the check for whether we need delayed rmap handling per
> page+nr_pages entry, not per page. So we won't perform more such checks.
>
> Once we set tlb->delayed_rmap (because we add one entry that requires it), we
> already force a flush before dropping the PT lock. So once we get a single
> delayed rmap entry in there, we will not batch more than we could have in the
> same page table: so not more than 512 entries (x86-64) in the worst case. So it
> will still be bounded, and not significantly more than what we had before.
>
> So regarding delayed rmap handling I think this should be fine.
>
> (3) Total batched pages
>
> MAX_GATHER_BATCH_COUNT effectively limits the number of pages we allocate (full
> batches), and thereby limits the number of pages we were able to batch.
>
> The old limit was ~10000 pages; now we could batch ~5000 folio fragments
> (page+nr_pages), resulting in the "times 256" increase in the worst case on
> x86-64, as you point out.
>
> This 10000-page limit was introduced in 53a59fc67f97 ("mm: limit mmu_gather
> batching to fix soft lockups on !CONFIG_PREEMPT") where we wanted to handle
> soft-lockups.
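(Editorial note: for reference, the arithmetic behind the 510M and "times 256"
numbers above, assuming x86-64 with 4K base pages and 2M PTE-mapped THPs. The
per-batch-page slot count below is an approximation of the mmu_gather
constants, not a value taken from the patch.)

/* Illustrative arithmetic only. */
#define SLOTS_PER_BATCH_PAGE    510UL   /* ~one page worth of encoded_page slots */
#define PAGES_PER_2M_FOLIO      512UL
#define OLD_PAGE_LIMIT          10000UL /* limit from commit 53a59fc67f97 */

/* Old: one 4K page per slot -> one batch page covers ~2M of memory. */
static const unsigned long old_bytes_per_batch_page =
        SLOTS_PER_BATCH_PAGE * 4096;                            /* ~2M - 2 * 4096 */

/* New: a page+nr_pages pair (two slots) can cover a whole 2M folio. */
static const unsigned long new_bytes_per_batch_page =
        (SLOTS_PER_BATCH_PAGE / 2) * PAGES_PER_2M_FOLIO * 4096; /* ~510M */

/* Across the whole gather: ~10000 pages before vs. ~5000 fragments now. */
static const unsigned long new_worst_case_pages =
        (OLD_PAGE_LIMIT / 2) * PAGES_PER_2M_FOLIO;              /* ~2.56M pages, ~256x */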
>
> As the number of effective folios we are freeing does not increase, I *think*
> this should be fine.
>
>
> If any of that is a problem, we would have to keep track of the total number of
> pages in our batch, and stop as soon as we hit our 10000 limit -- independent of
> page vs. folio fragment. Something I would like to avoid if possible.
>
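(Editorial note: if that fallback ever became necessary, it could look roughly
like the following; the structure field and helper are hypothetical and are not
part of the series.)

/*
 * Hypothetical sketch of the fallback mentioned above: track the total number
 * of pages covered by the queued fragments and stop batching once the old
 * ~10000-page bound is reached, independent of whether entries are single
 * pages or folio fragments. Names are made up for illustration.
 */
struct mmu_gather_sketch {
        unsigned long total_pages;      /* pages covered by all queued fragments */
        /* ... the existing mmu_gather state would live here ... */
};

static inline bool sketch_batch_limit_hit(struct mmu_gather_sketch *tlb,
                                          unsigned int nr_pages)
{
        tlb->total_pages += nr_pages;
        /* Force a flush once the batch covers ~10000 pages, as before. */
        return tlb->total_pages >= 10000;
}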