From: James Houghton
Date: Tue, 7 Feb 2023 08:30:58 -0800
Subject: Re: [PATCH 21/46] hugetlb: use struct hugetlb_pte for walk_hugetlb_range
To: Peter Xu
Cc: Mike Kravetz, David Hildenbrand, Muchun Song, David Rientjes,
 Axel Rasmussen, Mina Almasry, "Zach O'Keefe", Manish Mishra,
 Naoya Horiguchi, "Dr . David Alan Gilbert", "Matthew Wilcox (Oracle)",
 Vlastimil Babka, Baolin Wang, Miaohe Lin, Yang Shi, Andrew Morton,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org

On Wed, Feb 1, 2023 at 4:24 PM James Houghton wrote:
>
> On Wed, Feb 1, 2023 at 1:51 PM Peter Xu wrote:
> >
> > On Wed, Feb 01, 2023 at 01:32:21PM -0800, James Houghton wrote:
> > > On Wed, Feb 1, 2023 at 8:22 AM Peter Xu wrote:
> > > >
> > > > On Wed, Feb 01, 2023 at 07:45:17AM -0800, James Houghton wrote:
> > > > > On Tue, Jan 31, 2023 at 5:24 PM Peter Xu wrote:
> > > > > >
> > > > > > On Tue, Jan 31, 2023 at 04:24:15PM -0800, James Houghton wrote:
> > > > > > > On Mon, Jan 30, 2023 at 1:14 PM Peter Xu wrote:
> > > > > > > >
> > > > > > > > On Mon, Jan 30, 2023 at 10:38:41AM -0800, James Houghton wrote:
> > > > > > > > > On Mon, Jan 30, 2023 at 9:29 AM Peter Xu wrote:
> > > > > > > > > >
> > > > > > > > > > On Fri, Jan 27, 2023 at 01:02:02PM -0800, James Houghton wrote:
> > > > > [snip]
> > > > > > > > > > Another way to not use thp mapcount, nor break smaps and similar calls to
> > > > > > > > > > page_mapcount() on small page, is to only increase the hpage mapcount only
> > > > > > > > > > when hstate pXd (in case of 1G it's PUD) entry being populated (no matter
> > > > > > > > > > as leaf or a non-leaf), and the mapcount can be decreased when the pXd
> > > > > > > > > > entry is removed (for leaf, it's the same as for now; for HGM, it's when
> > > > > > > > > > freeing pgtable of the PUD entry).
> > > > > > > > >
> > > > > > > > > Right, and this is doable. Also it seems like this is pretty close to
> > > > > > > > > the direction Matthew Wilcox wants to go with THPs.
> > > > > > > >
> > > > > > > > I may not be familiar with it, do you mean this one?
> > > > > > > >
> > > > > > > > https://lore.kernel.org/all/Y9Afwds%2FJl39UjEp@casper.infradead.org/
> > > > > > >
> > > > > > > Yep that's it.
> > > > > > > >
> > > > > > > > For hugetlb I think it should be easier to maintain rather than any-sized
> > > > > > > > folios, because there's the pgtable non-leaf entry to track rmap
> > > > > > > > information and the folio size being static to hpage size.
> > > > > > > >
> > > > > > > > It'll be different to folios where it can be random sized pages chunk, so
> > > > > > > > it needs to be managed by batching the ptes when install/zap.
> > > > > > >
> > > > > > > Agreed. It's probably easier for HugeTLB because they're always
> > > > > > > "naturally aligned" and yeah they can't change sizes.
> > > > > > > > >
> > > > > > > > > Something I noticed though, from the implementation of
> > > > > > > > > folio_referenced()/folio_referenced_one(), is that folio_mapcount()
> > > > > > > > > ought to report the total number of PTEs that are pointing on the page
> > > > > > > > > (or the number of times page_vma_mapped_walk returns true). FWIW,
> > > > > > > > > folio_referenced() is never called for hugetlb folios.
> > > > > > > >
> > > > > > > > FWIU folio_mapcount is the thing it needs for now to do the rmap walks -
> > > > > > > > it'll walk every leaf page being mapped, big or small, so IIUC that number
> > > > > > > > should match with what it expects to see later, more or less.
> > > > > > >
> > > > > > > I don't fully understand what you mean here.
> > > > > >
> > > > > > I meant the rmap_walk pairing with folio_referenced_one() will walk all the
> > > > > > leaves for the folio, big or small. I think that will match the number
> > > > > > with what got returned from folio_mapcount().
> > > > > See below.
> > > > > > > >
> > > > > > > > But I agree the mapcount/referenced value itself is debatable to me, just
> > > > > > > > like what you raised in the other thread on page migration. Meanwhile, I
> > > > > > > > am not certain whether the mapcount is accurate either because AFAICT the
> > > > > > > > mapcount can be modified if e.g. new page mapping established as long as
> > > > > > > > before taking the page lock later in folio_referenced().
> > > > > > > >
> > > > > > > > It's just that I don't see any severe issue either due to any of above, as
> > > > > > > > long as that information is only used as a hint for next steps, e.g., to
> > > > > > > > swap which page out.
> > > > > > >
> > > > > > > I also don't see a big problem with folio_referenced() (and you're
> > > > > > > right that folio_mapcount() can be stale by the time it takes the
> > > > > > > folio lock). It still seems like folio_mapcount() should return the
> > > > > > > total number of PTEs that map the page though. Are you saying that
> > > > > > > breaking this would be ok?
> > > > > >
> > > > > > I didn't quite follow - isn't that already doing so?
> > > > > >
> > > > > > folio_mapcount() is total_compound_mapcount() here, IIUC it is an
> > > > > > accumulated value of all possible PTEs or PMDs being mapped as long as it's
> > > > > > all or part of the folio being mapped.
> > > > >
> > > > > We've talked about 3 ways of handling mapcount:
> > > > >
> > > > > 1. The RFC v2 way, which is head-only, and we increment the compound
> > > > > mapcount for each PT mapping we have. So a PTE-mapped 2M page,
> > > > > compound_mapcount=512, subpage->_mapcount=0 (ignoring the -1 bias).
> > > > > 2. The THP-like way. If we are fully mapping the hugetlb page with the
> > > > > hstate-level PTE, we increment the compound mapcount, otherwise we
> > > > > increment subpage->_mapcount.
> > > > > 3. The RFC v1 way (the way you have suggested above), which is
> > > > > head-only, and we increment the compound mapcount if the hstate-level
> > > > > PTE is made present.
> > > > Oh that's where it come from! It took quite some months going through all
> > > > these, I can hardly remember the details.
> > > > >
> > > > > With #1 and #2, there is no concern with folio_mapcount(). But with
> > > > > #3, folio_mapcount() for a PTE-mapped 2M page mapped in a single VMA
> > > > > would yield 1 instead of 512 (right?). That's what I mean.
> > > > >
> > > > > #1 has problems wrt smaps and migration (though there were other
> > > > > problems with those anyway that Mike has fixed), and #2 makes
> > > > > MADV_COLLAPSE slow to the point of being unusable for some
> > > > > applications.
> > > > Ah so you're talking about after HGM being applied.. while I was only
> > > > talking about THPs.
> > > >
> > > > If to apply the logic here with idea 3), the worst case is we'll need to
> > > > have special care of HGM hugetlb in folio_referenced_one(), so the default
> > > > page_vma_mapped_walk() may not apply anymore - the resource is always in
> > > > hstate sized, so counting small ptes do not help too - we can just walk
> > > > until the hstate entry and do referenced++ if it's not none, at the
> > > > entrance of folio_referenced_one().
> > > >
> > > > But I'm not sure whether that'll be necessary at all, as I'm not sure
> > > > whether that path can be triggered at all in any form (where from the top
> > > > it should always be shrink_page_list()). In that sense maybe we can also
> > > > consider adding a WARN_ON_ONCE() in folio_referenced() where it is a
> > > > hugetlb page that got passed in? Meanwhile, adding a TODO comment
> > > > explaining that current walk won't work easily for HGM only, so when it
> > > > will be applicable to hugetlb we need to rework?
> > > >
> > > > I confess that's not pretty, though. But that'll make 3) with no major
> > > > defect from function-wise.
> > > Another potential idea would be to add something like page_vmacount().
> > > For non-HugeTLB pages, page_vmacount() == page_mapcount(). Then for
> > > HugeTLB pages, we could keep a separate count (in one of the tail
> > > pages, I guess). And then in the places that matter (so smaps,
> > > migration, and maybe CoW and hwpoison), potentially change their calls
> > > to page_vmacount() instead of page_mapcount().
> > >
> > > Then to implement page_vmacount(), we do the RFC v1 mapcount approach
> > > (but like.... correctly this time). And then for page_mapcount(), we
> > > do the RFC v2 mapcount approach (head-only, once per PTE).
> > >
> > > Then we fix folio_referenced() without needing to special-case it for
> > > HugeTLB. :) Or we could just special-case it. *shrug*
> > >
> > > Does that sound reasonable? We still have the problem where a series
> > > of partially unmaps could leave page_vmacount() incremented, but I
> > > don't think that's a big problem.
> > I'm afraid someone will stop you from introducing yet another definition of
> > mapcount, where others are trying to remove it. :)
> >
> > Or, can we just drop folio_referenced_arg.mapcount? We need to keep:
> >
> > if (!pra.mapcount)
> >         return 0;
> >
> > By replacing it with folio_mapcount which is definitely something
> > worthwhile, but what about the rest?
> >
> > If it can be dropped, afaict it'll naturally work with HGM again.
> >
> > IIUC that's an optimization where we want to stop the rmap walk as long as
> > we found all the pages, however (1) IIUC it's not required to function, and
> > (2) it's not guaranteed to work as solid anyway.. As we've discussed
> > before: right after it reads mapcount (but before taking the page lock),
> > the mapcount can get decreased by 1, then it'll still need to loop over all
> > the vmas just to find that there's one "misterious" mapcount lost.
> >
> > Personally I really have no idea on how much that optimization can help.
>
> Ok, yeah, I think pra.mapcount can be removed too. (And we replace
> !pra.mapcount with !folio_mapcount().)
>
> I don't see any other existing users of folio_mapcount() and
> total_mapcount() that are problematic. We do need to make sure to keep
> refcount and mapcount in sync though; it can be done.
>
> So I'll compare this "RFC v1" way with the THP-like way and get you a
> performance comparison.

Here is the result: [1] (sorry it took a little while heh). The
implementation of the "RFC v1" way is pretty horrible[2] (and this
implementation probably has bugs anyway; it doesn't account for the
folio_referenced() problem).

Matthew is trying to solve the same problem with THPs right now: [3]. I
haven't figured out how we can apply Matthew's approach to HGM right now,
but there probably is a way. (If we left the mapcount increment bits in
the same place, we couldn't just check the hstate-level PTE; it would
have already been made present.)

We could:
 - use the THP-like way and tolerate ~1 second collapses
 - use the (non-RFC) v1 way and tolerate the migration/smaps differences
 - use the RFC v1 way and tolerate the complicated mapcount accounting
 - flesh out [3] and see if it can be applied to HGM nicely

I'm happy to go with any of these approaches.
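For readers skimming the quoted discussion above, here is a small user-space
sketch (not kernel code; struct model_folio and model_total_mapcount are
made-up names used only for illustration) of how the three accounting schemes
compare for a 2M hugetlb page HGM-mapped by 512 4K PTEs in a single VMA. It
only models the arithmetic described above, namely why scheme #3 reports a
total mapcount of 1 where schemes #1 and #2 report 512.

/*
 * Toy user-space model (NOT kernel code) of the three mapcount schemes
 * discussed above, for a 2M hugetlb page that is HGM-mapped by 512 4K
 * PTEs in one VMA.  "model_folio" and "model_total_mapcount" are
 * made-up names for this illustration only.
 */
#include <stdio.h>

#define PTES_PER_2M 512

struct model_folio {
        int compound_mapcount;                  /* head-page mapcount */
        int subpage_mapcount[PTES_PER_2M];      /* per-4K _mapcount */
};

/* Sum the counters the way a total/folio_mapcount()-style helper would. */
static int model_total_mapcount(const struct model_folio *f)
{
        int i, total = f->compound_mapcount;

        for (i = 0; i < PTES_PER_2M; i++)
                total += f->subpage_mapcount[i];
        return total;
}

int main(void)
{
        struct model_folio rfc_v2 = { 0 }, thp_like = { 0 }, rfc_v1 = { 0 };
        int i;

        /* #1 RFC v2 way: bump the compound mapcount once per PTE mapping. */
        for (i = 0; i < PTES_PER_2M; i++)
                rfc_v2.compound_mapcount++;

        /*
         * #2 THP-like way: the mapping is not a full hstate-level mapping,
         * so bump each subpage _mapcount instead of the compound mapcount.
         */
        for (i = 0; i < PTES_PER_2M; i++)
                thp_like.subpage_mapcount[i]++;

        /*
         * #3 RFC v1 way: bump the compound mapcount only when the
         * hstate-level entry (PMD for 2M) is made present, i.e. once here.
         */
        rfc_v1.compound_mapcount++;

        printf("#1 RFC v2   total mapcount: %d\n", model_total_mapcount(&rfc_v2));
        printf("#2 THP-like total mapcount: %d\n", model_total_mapcount(&thp_like));
        printf("#3 RFC v1   total mapcount: %d\n", model_total_mapcount(&rfc_v1));
        return 0;
}

Built with any C compiler, this prints 512 for #1 and #2 and 1 for #3, which
is the gap that makes per-PTE consumers of folio_mapcount(), such as the
pra.mapcount bookkeeping in folio_referenced(), awkward under scheme #3.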
[1]: https://pastebin.com/raw/hJzFJHiD
[2]: https://github.com/48ca/linux/commit/4495f16a09b660aff44b3edcc125aa3a3df85976
[3]: https://lore.kernel.org/linux-mm/Y+FkV4fBxHlp6FTH@casper.infradead.org/

- James