From: Peter Xu <peterx@redhat.com>
Date: Wed, 1 Feb 2023 16:51:47 -0500
To: James Houghton
Cc: Mike Kravetz, David Hildenbrand, Muchun Song, David Rientjes,
    Axel Rasmussen, Mina Almasry, Zach O'Keefe, Manish Mishra,
    Naoya Horiguchi, "Dr. David Alan Gilbert", "Matthew Wilcox (Oracle)",
    Vlastimil Babka, Baolin Wang, Miaohe Lin, Yang Shi, Andrew Morton,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 21/46] hugetlb: use struct hugetlb_pte for walk_hugetlb_range

David Alan Gilbert" , "Matthew Wilcox (Oracle)" , Vlastimil Babka , Baolin Wang , Miaohe Lin , Yang Shi , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: Re: [PATCH 21/46] hugetlb: use struct hugetlb_pte for walk_hugetlb_range Message-ID: References: MIME-Version: 1.0 In-Reply-To: X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-Type: text/plain; charset=utf-8 Content-Disposition: inline X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: BA2D18000C X-Rspam-User: X-Stat-Signature: jweqxcqc375p3wb7rody553e8s1o8jxp X-HE-Tag: 1675288313-923554 X-HE-Meta: U2FsdGVkX1/zG4nfO/b0S4QEZ0jG5J/EL/xK0J0Bg+1toEv8YCYB3G7SPhZoJzFKfPI/5RpXXeE3hbpcm0nO4tj/gQxPE3Wtuje5F89FKXWVmhbX6hnKe8tCKv0hJDTCE4jBp7GEdVynRTcA3Jew3Ula/3LTLETX8zVeibedIfgs4ZDK7HYD4GzfUpm66K7GKbx54o6lu8Q2PTOnRhZoUdCAeCwsur7ZW3v4iWGzDNFkwxQeNcxqQxN115A7lQASO6rkBkiXs7IWJsuM4efL5VUpn51XdfTtoVU9HfHpGHk0EbqqpnRfBGZeLeMF+FiPDWIHv1sPwCnfzzNXnn4pttxTt5J+MHE0hW00/Xo2iVt+oKbiLDN1h1foE16CZ6spaKffaFLAVHSHYMNRSrFsoW+WChjA5Wtufau+g7NUgJegHO/AatDlpfiPoaDtquYuHoFD/ce/ZoxHvGDrsPa2PClleSxQL6ulhuUKhxXxZKNR7y+rcf7/RdwwEwOWz8eZFaVpHlugPJpImVHSaTunUKErXu2ekvW5l4fgP/AQzsZ5HrJFNAQyywtTTd1HxxMgrGiphXDXVOkjkQrpeA5wqzz5Qqehd8wm3NR31CFXFSR/LVvizE2kfxisYT6Rx2ZtWKpfUNTH9qXdXXSPtKParfFHqf9aNyppkvxXTmgfBmZer+W1NX2Wr0akWoWt1rCkoQ86E+go+EHaRyDxgT8hrv6TCWw5Q1lcOe4FJQw8SYBCZ/PXVBkYTp8fImE78FN5zX+ntYs4+MtYDFcRldU1536vi9bce2LyGwKxeGB5otVDiBpQoYJ0Adgj32X37SBSXftaI05gN0NGPj7DUgww/PnzAS6kMMJH/osgA2CVhinzEPp12vQhoRqAOz5O5itFAPVyb8c/LRxIc3S1gZ7+emnuX6McfJwjmtG3HViAorpWO4UvMuX0fVMOHweOZB7kQpb8u6bzQ1LM7k+qHIn +VRhnZ1C mm27fkYo3PCB9RKs7VNFgV8xBsEx6IGQ89P/YaojL1lpjADDHL97efZH8phb05UxrY6W15VMFKqiLv59bXP6OlEHj8nxCdHQbQNVTw6CqPwDcvKVhtDeDp1ibeZQ6PxEhrb62zTjUDZmmht1SHPZV8aBqxKBAdElTtA2YMYyVsShyHEKYoMaazcTHvpcwI3IPDU/GE1mtYqI6bl2pwMIhB+nvo7ODLH9/Cfaqo9xRpJ6RChT6RnR6a3Kuqbuf9ob+u4ksuSsfzS0Mjs9xq9h1A9BXbI9RHLeLW5uzw7OCtgr9p/3zuO45dHu6ytDgWczy0piEXiyeLbcD9MjIQx+URn9r0/TtpA8b2WO2buxq/WE+yPIZFr0vh49NW1MBxfNjaPbsfNE94AAn8/h38uYPseyWH3t5CRgnrc9gY1bbzcuQoWZaS/Ldwd/nVfwCDO86OYMJudxj9WkiRmhI7vD0kA4cLX31po8sRKNHSyrakzCgNWNCIfFUmwGNPIsBpK8H1tZJjBopCcJRD/POIgzpMnR71QgfN2KXOU1U2ZqpVC3mbnxxFQs+EFtrgr6TXtmiwkLk X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Wed, Feb 01, 2023 at 01:32:21PM -0800, James Houghton wrote: > On Wed, Feb 1, 2023 at 8:22 AM Peter Xu wrote: > > > > On Wed, Feb 01, 2023 at 07:45:17AM -0800, James Houghton wrote: > > > On Tue, Jan 31, 2023 at 5:24 PM Peter Xu wrote: > > > > > > > > On Tue, Jan 31, 2023 at 04:24:15PM -0800, James Houghton wrote: > > > > > On Mon, Jan 30, 2023 at 1:14 PM Peter Xu wrote: > > > > > > > > > > > > On Mon, Jan 30, 2023 at 10:38:41AM -0800, James Houghton wrote: > > > > > > > On Mon, Jan 30, 2023 at 9:29 AM Peter Xu wrote: > > > > > > > > > > > > > > > > On Fri, Jan 27, 2023 at 01:02:02PM -0800, James Houghton wrote: > > > [snip] > > > > > > > > Another way to not use thp mapcount, nor break smaps and similar calls to > > > > > > > > page_mapcount() on small page, is to only increase the hpage mapcount only > > > > > > > > when hstate pXd (in case of 1G it's PUD) entry being populated (no matter > > > > > > > > as leaf or a non-leaf), and the mapcount can be decreased when the pXd > > > > > > > > entry is removed (for leaf, it's the same as for now; for HGM, it's when > > > > > > > > freeing pgtable of the PUD entry). > > > > > > > > > > > > > > Right, and this is doable. 
> > > > > >
> > > > > > I may not be familiar with it, do you mean this one?
> > > > > >
> > > > > > https://lore.kernel.org/all/Y9Afwds%2FJl39UjEp@casper.infradead.org/
> > > > >
> > > > > Yep, that's it.
> > > > >
> > > > > > For hugetlb I think it should be easier to maintain than for
> > > > > > any-sized folios, because there's the pgtable non-leaf entry to
> > > > > > track rmap information, and the folio size is static at the
> > > > > > hpage size.
> > > > > >
> > > > > > It'll be different for folios, where a mapping can be a randomly
> > > > > > sized chunk of pages, so it needs to be managed by batching the
> > > > > > ptes on install/zap.
> > > > >
> > > > > Agreed. It's probably easier for HugeTLB because they're always
> > > > > "naturally aligned" and yeah, they can't change sizes.
> > > > >
> > > > > > > Something I noticed though, from the implementation of
> > > > > > > folio_referenced()/folio_referenced_one(), is that
> > > > > > > folio_mapcount() ought to report the total number of PTEs that
> > > > > > > point at the page (or the number of times page_vma_mapped_walk
> > > > > > > returns true). FWIW, folio_referenced() is never called for
> > > > > > > hugetlb folios.
> > > > > >
> > > > > > FWIU folio_mapcount is the thing it needs for now to do the rmap
> > > > > > walks - it'll walk every leaf page being mapped, big or small, so
> > > > > > IIUC that number should match what it expects to see later, more
> > > > > > or less.
> > > > >
> > > > > I don't fully understand what you mean here.
> > > >
> > > > I meant that the rmap_walk pairing with folio_referenced_one() will
> > > > walk all the leaves for the folio, big or small. I think that will
> > > > match the number returned from folio_mapcount().
> > >
> > > See below.
> > >
> > > > > > But I agree the mapcount/referenced value itself is debatable to
> > > > > > me, just like what you raised in the other thread on page
> > > > > > migration. Meanwhile, I am not certain whether the mapcount is
> > > > > > accurate either, because AFAICT it can be modified if, e.g., a
> > > > > > new page mapping is established before taking the page lock
> > > > > > later in folio_referenced().
> > > > > >
> > > > > > It's just that I don't see any severe issue due to any of the
> > > > > > above, as long as that information is only used as a hint for
> > > > > > next steps, e.g., which page to swap out.
> > > > >
> > > > > I also don't see a big problem with folio_referenced() (and you're
> > > > > right that folio_mapcount() can be stale by the time it takes the
> > > > > folio lock). It still seems like folio_mapcount() should return the
> > > > > total number of PTEs that map the page, though. Are you saying that
> > > > > breaking this would be ok?
> > > >
> > > > I didn't quite follow - isn't it already doing so?
> > > >
> > > > folio_mapcount() is total_compound_mapcount() here; IIUC it is a
> > > > value accumulated over all PTEs or PMDs mapping the folio, whether
> > > > they map all or only part of it.
> > >
> > > We've talked about 3 ways of handling mapcount:
> > >
> > > 1. The RFC v2 way, which is head-only, and we increment the compound
> > > mapcount for each PT mapping we have. So for a PTE-mapped 2M page,
> > > compound_mapcount=512, subpage->_mapcount=0 (ignoring the -1 bias).
> > > 2. The THP-like way. If we are fully mapping the hugetlb page with the
> > > hstate-level PTE, we increment the compound mapcount, otherwise we
> > > increment subpage->_mapcount.
> > > 3. The RFC v1 way (the way you have suggested above), which is
> > > head-only, and we increment the compound mapcount if the hstate-level
> > > PTE is made present.
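
For reference, a sketch of where each of those three options would bump
the count when a mapping of a hugetlb page is installed; only one
alternative would apply at a time, and the field and helper names are
illustrative, not the actual kernel API:

	/* #1 (RFC v2): head-only, once per PT mapping.  A PTE-mapped
	 * 2M page ends with a compound mapcount of 512. */
	static void map_inc_rfc_v2(struct folio *folio)
	{
		atomic_inc(&folio->_compound_mapcount);
	}

	/* #2 (THP-like): bump the compound mapcount only for a full
	 * hstate-level mapping, and the per-subpage _mapcount
	 * otherwise, so folio_mapcount() still sums to the number of
	 * mapping PTEs. */
	static void map_inc_thp_like(struct folio *folio,
				     struct page *subpage,
				     bool full_hstate_map)
	{
		if (full_hstate_map)
			atomic_inc(&folio->_compound_mapcount);
		else
			atomic_inc(&subpage->_mapcount);
	}

	/* #3 (RFC v1): head-only, bumped only when the hstate-level
	 * entry first becomes non-none.  The same PTE-mapped 2M page
	 * ends with folio_mapcount() == 1. */
	static void map_inc_rfc_v1(struct folio *folio,
				   bool hstate_entry_was_none)
	{
		if (hstate_entry_was_none)
			atomic_inc(&folio->_compound_mapcount);
	}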
> >
> > Oh, that's where it came from! It took quite some months to go through
> > all of these; I can hardly remember the details.
> >
> > > With #1 and #2, there is no concern with folio_mapcount(). But with
> > > #3, folio_mapcount() for a PTE-mapped 2M page mapped in a single VMA
> > > would yield 1 instead of 512 (right?). That's what I mean.
> > >
> > > #1 has problems wrt smaps and migration (though there were other
> > > problems with those anyway that Mike has fixed), and #2 makes
> > > MADV_COLLAPSE slow to the point of being unusable for some
> > > applications.
> >
> > Ah, so you're talking about after HGM is applied... while I was only
> > talking about THPs.
> >
> > If we apply idea 3) here, the worst case is that we'll need special
> > care for HGM hugetlb in folio_referenced_one(), so the default
> > page_vma_mapped_walk() may not apply anymore - the resource is always
> > hstate-sized, so counting small ptes does not help either - we can
> > just walk until the hstate entry and do referenced++ if it's not none,
> > at the entrance of folio_referenced_one().
> >
> > But I'm not sure whether that'll be necessary at all, as I'm not sure
> > whether that path can be triggered in any form (from the top it should
> > always be shrink_page_list()). In that sense, maybe we can also
> > consider adding a WARN_ON_ONCE() in folio_referenced() when a hugetlb
> > page gets passed in, plus a TODO comment explaining that the current
> > walk won't easily work for HGM, so it will need rework once it becomes
> > applicable to hugetlb.
> >
> > I confess that's not pretty, though. But that would leave 3) with no
> > major functional defect.
>
> Another potential idea would be to add something like page_vmacount().
> For non-HugeTLB pages, page_vmacount() == page_mapcount(). Then for
> HugeTLB pages, we could keep a separate count (in one of the tail
> pages, I guess). And then in the places that matter (so smaps,
> migration, and maybe CoW and hwpoison), we could change their calls to
> page_vmacount() instead of page_mapcount().
>
> Then, to implement page_vmacount(), we do the RFC v1 mapcount approach
> (but, like.... correctly this time). And for page_mapcount(), we do
> the RFC v2 mapcount approach (head-only, once per PTE).
>
> Then we fix folio_referenced() without needing to special-case it for
> HugeTLB. :) Or we could just special-case it. *shrug*
>
> Does that sound reasonable? We still have the problem where a series
> of partial unmaps could leave page_vmacount() incremented, but I don't
> think that's a big problem.
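
For what it's worth, a minimal sketch of how that split could look;
page_vmacount() and the backing field are hypothetical, not an existing
kernel API:

	static inline int page_vmacount(struct page *page)
	{
		struct folio *folio = page_folio(page);

		/* Non-hugetlb pages keep today's semantics. */
		if (!folio_test_hugetlb(folio))
			return page_mapcount(page);

		/*
		 * Hypothetical per-VMA count for hugetlb, stored e.g.
		 * in a tail page and maintained RFC-v1 style: raised
		 * when the hstate-level entry is populated, dropped
		 * when it is torn down.
		 */
		return atomic_read(&folio->_hugetlb_vmacount);
	}

smaps, migration, and possibly CoW and hwpoison would then read
page_vmacount(), while everything else keeps reading page_mapcount()
with its RFC v2 (once-per-PTE) semantics.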
I'm afraid someone will stop you from introducing yet another definition
of mapcount, where others are trying to remove it. :)

Or, can we just drop folio_referenced_arg.mapcount? We need to keep:

	if (!pra.mapcount)
		return 0;

by replacing it with folio_mapcount(), which is definitely worthwhile;
but what about the rest? If that can be dropped, afaict it'll naturally
work with HGM again.

IIUC that's an optimization where we want to stop the rmap walk as soon
as we have found all the pages; however, (1) IIUC it's not required for
correctness, and (2) it's not guaranteed to work that reliably anyway..
As we've discussed before: right after it reads the mapcount (but before
taking the page lock), the mapcount can be decreased by 1, and then it
will still need to loop over all the vmas just to find that there's one
"mysterious" mapcount lost. Personally, I really have no idea how much
that optimization helps.

-- 
Peter Xu