Date: Tue, 31 Jan 2023 20:24:37 -0500
From: Peter Xu <peterx@redhat.com>
To: James Houghton
Cc: Mike Kravetz, David Hildenbrand, Muchun Song, David Rientjes,
 Axel Rasmussen, Mina Almasry, Zach O'Keefe, Manish Mishra,
 Naoya Horiguchi,
David Alan Gilbert" , "Matthew Wilcox (Oracle)" , Vlastimil Babka , Baolin Wang , Miaohe Lin , Yang Shi , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: Re: [PATCH 21/46] hugetlb: use struct hugetlb_pte for walk_hugetlb_range Message-ID: References: MIME-Version: 1.0 In-Reply-To: X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-Type: text/plain; charset=utf-8 Content-Disposition: inline X-Rspamd-Queue-Id: EF2A7140005 X-Rspamd-Server: rspam09 X-Rspam-User: X-Stat-Signature: p8jmiq9n5behfceieju93qzs6bhog4q7 X-HE-Tag: 1675214683-628334 X-HE-Meta: U2FsdGVkX19f5IPq+blUDjlk+zn2yMZMEwuLM1zgReFhE/jqtng4J0DnPUnl+vz+zrmuDi6rpi3EUqI4v59fBFSIAfQU0AHlSFlD3zkXDA1UldB1vEBct4O/uZjnzb1d0ymwP4HfAQpyuP5iWqGuTG/BcXadqdwDBbgeyD3068oSbbpaHoY/hBkYOP1XpPyAvbgRqTVmQE9IwmR0wxmVgvHnGjoEiAcODQ+DupvWyXWdzmE3sHCH5kQt28wNVxI9p5aJ98vaDCsEQSEnU4uIJl4L3vcblMJOG0WafFzJZypVTOJDG4vrFDWarEwEACXqmwvyD/CnR++ZEx45y1aCujJff6de94wXrV+B3Jr26qhy+x/9S4o2YrQvI/ogGk8/wEoYJzkxHDSMhREYOZLJttbnCfrxBgtzq1o8o7mh1uuAdqG4uho/MT5pgoQx6TsEhbT3AGGLIRxtJujLF5jxl45ntUo1YX9sRRB60m7Xvdzwwqbo4HDQ5du2+GzvarHtlPzFFPpZzCoPwoNEzxKuPQFNffGsKPwWTZbL7TBWeJ4EiH7lICEIeloJiMMyHOOcd6gnaRAsU/iI6eWEio6eqrB+n9u+moBlOipm83g/8ciNgylO7URWGVdt4CDJspL8OrXZl7fAGjXjWMCWRzQwS7h+mRAXOUIgRhm4OkC13vBfCyEvDQNZQgqGEGPJhqA4A2YNnmmfXk/WLUy0Lkb+uP1ccrk+sQvgW1thoZbFYnUAVg9tYb0tlt8NH0WM8+mSjPciK+zeuKIMnYYqKvBxfL4ucfKseKxUl1iz4UGVHNMkwwnuaxsD1weCSFfIROoPuK12EC83YP7Fg/Ce5HP8tFIA2yj4YPqC16ccPaHSCMkycECk1ySsLbPmW8kTg/iYIVy5AK5jfrPTV7WBBxA6M35EU2+SClYBBU7/ZRKoWsbKeATpQprsji7/DhSGisMdQpNb/tkDz/oyKOt83h5 1A51HFN8 NHl55IXhjfw9OvvINFk0091wN+oZSCRXarfe+XRM4Q7u4hiLeAP8PD1W3sklfAMOylDB/x3O3/eZXUfGUk7VJaCjcH1YqA2zmZn6m4P6lkzlm4Py8gqzmpjXCPQczRithJJ9nZayOtnqgr3h7tWX5jlRP+FY9Hp1K2V5AoJ8BY4LLAOouO1jxMTsBkBwd2pB0Z72LZkwnIQ7OuJlQIpp78wnOrcul1T0xzRlzk0Kp3FOo2z7pv/5zeqdMIpEzolKJgoCXTuuiNpxabjYQaWfYhwooj/eF7nIQxEO6ypVuy9mwseJAxyC7mFt2OTvsYCjiyKYLyJIzFvk1Xp+SOyXaBt9LSVAtuV5Ll1thdH5hAmeU1aAxwyRFO89M/4fB92W5Mo0W0TZmIl+bpDX9Xsd26caX/vZmginXC2KEzaOt6v8ypnVfEYSxEE8Azn0LhHbjSv3mWcyTr4Ci5jDCVDaYBF7Wd0Lj6e93uxKACQS7RX042Imz0ItggxiOBzAfakFbeByl7cTX3PnSLIvGWzoks1R8h5YWkzFIahDG+QrbTsxGX2Fa5mpvXZup9cYndz7WN2YjrQiIAdokgtK56RC90umkOw== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Tue, Jan 31, 2023 at 04:24:15PM -0800, James Houghton wrote: > On Mon, Jan 30, 2023 at 1:14 PM Peter Xu wrote: > > > > On Mon, Jan 30, 2023 at 10:38:41AM -0800, James Houghton wrote: > > > On Mon, Jan 30, 2023 at 9:29 AM Peter Xu wrote: > > > > > > > > On Fri, Jan 27, 2023 at 01:02:02PM -0800, James Houghton wrote: > > > > > On Thu, Jan 26, 2023 at 12:31 PM Peter Xu wrote: > > > > > > > > > > > > James, > > > > > > > > > > > > On Thu, Jan 26, 2023 at 08:58:51AM -0800, James Houghton wrote: > > > > > > > It turns out that the THP-like scheme significantly slows down > > > > > > > MADV_COLLAPSE: decrementing the mapcounts for the 4K subpages becomes > > > > > > > the vast majority of the time spent in MADV_COLLAPSE when collapsing > > > > > > > 1G mappings. It is doing 262k atomic decrements, so this makes sense. 
> > > > > > >
> > > > > > > This is only really a problem because this is done between
> > > > > > > mmu_notifier_invalidate_range_start() and
> > > > > > > mmu_notifier_invalidate_range_end(), so KVM won't allow vCPUs to
> > > > > > > access any of the 1G page while we're doing this (and it can take like
> > > > > > > ~1 second for each 1G, at least on the x86 server I was testing on).
> > > > > >
> > > > > > Did you try to measure the time, or is it a quick observation from perf?
> > > > >
> > > > > I put some ktime_get()s in.
> > > > >
> > > > > >
> > > > > > IIRC I used to measure some atomic ops, it is not as drastic as I thought.
> > > > > > But maybe it depends on many things.
> > > > > >
> > > > > > I'm curious how the 1sec is provisioned between the procedures. E.g., I
> > > > > > would expect mmu_notifier_invalidate_range_start() to also take some time
> > > > > > too as it should walk the small-page-mapped EPT pgtables.
> > > > >
> > > > > Somehow this doesn't take all that long (only like 10-30ms when
> > > > > collapsing from 4K -> 1G) compared to hugetlb_collapse().
> > > >
> > > > Did you populate the EPT pgtable as much when measuring this?
> > > >
> > > > IIUC this number should be pretty much relevant to how many pages are
> > > > shadowed to the kvm pgtables. If the EPT table is mostly empty it should
> > > > be super fast, but OTOH it can be much slower when it's populated,
> > > > because tdp mmu should need to handle the pgtable leaves one by one.
> > > >
> > > > E.g. it should be fully populated if you have a program busy dirtying most
> > > > of the guest pages during test migration.
> > >
> > > That's what I was doing. I was running a workload in the guest that
> > > just writes 8 bytes to a page and jumps ahead a few pages on all
> > > vCPUs, touching most of its memory.
> > >
> > > But there is more to understand; I'll collect more results. I'm not
> > > sure why the EPT can be unmapped/collapsed so quickly.
> >
> > Maybe something smart done by the hypervisor?
>
> Doing a little bit more digging, it looks like the
> invalidate_range_start notifier clears the sptes, and then later on
> (on the next EPT violation), the page tables are freed. I still need
> to look at how they end up being so much faster still, but I thought
> that was interesting.
> > > >
> > > > Write op should be the worst case here since it'll require the atomic op
> > > > being applied; see kvm_tdp_mmu_write_spte().
> > > > > >
> > > > > > Since we'll still keep the intermediate levels around - from application
> > > > > > POV, one other thing to remedy this is to further shrink the size of COLLAPSE
> > > > > > so potentially for a very large page we can start with building 2M layers.
> > > > > > But then collapse will need to be run at least two rounds.
> > > > >
> > > > > That's exactly what I thought to do. :) I realized, too, that this is
> > > > > actually how userspace *should* collapse things to avoid holding up
> > > > > vCPUs too long. I think this is a good reason to keep intermediate
> > > > > page sizes.
> > > > >
> > > > > When collapsing 4K -> 1G, the mapcount scheme doesn't actually make a
> > > > > huge difference: the THP-like scheme is about 30% slower overall.
> > > > >
> > > > > When collapsing 4K -> 2M -> 1G, the mapcount scheme makes a HUGE
> > > > > difference. For the THP-like scheme, collapsing 4K -> 2M requires
> > > > > decrementing and then re-incrementing subpage->_mapcount, and then
> > > > > from 2M -> 1G, we have to decrement all 262k subpages->_mapcount. For
> > > > > the head-only scheme, for each 2M in the 4K -> 2M collapse, we
> > > > > decrement the compound_mapcount 512 times (once per PTE), then
> > > > > increment it once. And then for 2M -> 1G, for each 1G, we decrement
> > > > > mapcount again by 512 (once per PMD), incrementing it once.
> > > >
> > > > Did you have quantified numbers (with your ktime tweak) to compare these?
> > > > If we want to go the other route, I think these will be materials to
> > > > justify any other approach on mapcount handling.
> > >
> > > Ok, I can do that. Give me a couple days to collect more results and
> > > organize them in a helpful way.
> > >
> > > (If it's helpful at all, here are some results I collected last week:
> > > [2]. Please ignore it if it's not helpful.)
> >
> > It's helpful already at least to me, thanks.  Yes the change is drastic.
>
> That data only contains THP-like mapcount performance, no performance
> for the head-only way. But the head-only scheme makes the 2M -> 1G
> very good ("56" comes down to about the same as everything else, instead
> of being ~100-500x bigger).

Oops, I think I misread those.  Yeah, please keep sharing information if
you come up with any.

> > > > >
> > > > > The mapcount decrements are about on par with how long it takes to do
> > > > > other things, like updating page tables. The main problem is, with the
> > > > > THP-like scheme (implemented like this [1]), there isn't a way to
> > > > > avoid the 262k decrements when collapsing 1G. So if we want
> > > > > MADV_COLLAPSE to be fast and we want a THP-like page_mapcount() API,
> > > > > then I think something more clever needs to be implemented.
> > > > >
> > > > > [1]: https://github.com/48ca/linux/blob/hgmv2-jan24/mm/hugetlb.c#L127-L178
> > > >
> > > > I believe the whole goal of HGM is trying to face the same challenge if
> > > > we'll allow 1G THP to exist and be able to split for anon.
> > > >
> > > > I don't remember whether we discussed below, maybe we did?  Anyway...
> > > >
> > > > Another way to not use thp mapcount, nor break smaps and similar calls to
> > > > page_mapcount() on small pages, is to increase the hpage mapcount only
> > > > when the hstate pXd (in case of 1G it's PUD) entry is populated (no matter
> > > > as leaf or non-leaf), and decrease the mapcount when the pXd entry is
> > > > removed (for leaf, it's the same as for now; for HGM, it's when freeing
> > > > the pgtable of the PUD entry).
> > >
> > > Right, and this is doable. Also it seems like this is pretty close to
> > > the direction Matthew Wilcox wants to go with THPs.
> >
> > I may not be familiar with it, do you mean this one?
> >
> > https://lore.kernel.org/all/Y9Afwds%2FJl39UjEp@casper.infradead.org/
>
> Yep that's it.
> >
> > For hugetlb I think it should be easier to maintain than for any-sized
> > folios, because there's the pgtable non-leaf entry to track rmap
> > information and the folio size is static at the hpage size.
> >
> > It'll be different for folios where it can be a random sized chunk of
> > pages, so it needs to be managed by batching the ptes when install/zap.
>
> Agreed. It's probably easier for HugeTLB because they're always
> "naturally aligned" and yeah they can't change sizes.
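
To make that a bit more concrete, here is a rough userspace model of the
pXd-level counting idea above - the helper and field names are invented for
the sketch, this is not the hugetlb/HGM code: the folio's mapcount only
moves when the hstate-level PUD entry is installed or torn down, so an HGM
split into 512 PMDs or 262144 PTEs underneath (and a later MADV_COLLAPSE)
adjusts it at most once per PUD entry rather than once per subpage.

	#include <assert.h>
	#include <stdatomic.h>
	#include <stdio.h>

	/* Models the folio's compound_mapcount discussed above. */
	struct hugetlb_folio_model {
		atomic_long mapcount;
	};

	/* Installing the hstate-level PUD entry (leaf or non-leaf) counts once. */
	static void pud_entry_install(struct hugetlb_folio_model *f)
	{
		atomic_fetch_add(&f->mapcount, 1);
	}

	/* Removing the PUD entry (or freeing its pgtable) drops that count. */
	static void pud_entry_remove(struct hugetlb_folio_model *f)
	{
		atomic_fetch_sub(&f->mapcount, 1);
	}

	int main(void)
	{
		struct hugetlb_folio_model f = { .mapcount = 0 };

		pud_entry_install(&f);	/* VMA maps the 1G folio */
		/*
		 * An HGM split to 2M/4K and a later collapse only rework the
		 * pgtable hanging below the PUD entry; the mapcount is not
		 * touched per subpage, so a collapse avoids the 262144
		 * per-subpage atomic ops.
		 */
		pud_entry_remove(&f);	/* munmap / PUD pgtable freed */

		assert(atomic_load(&f.mapcount) == 0);
		printf("mapcount after unmap: %ld\n", atomic_load(&f.mapcount));
		return 0;
	}
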
> > >
> > > Something I noticed though, from the implementation of
> > > folio_referenced()/folio_referenced_one(), is that folio_mapcount()
> > > ought to report the total number of PTEs that are pointing to the page
> > > (or the number of times page_vma_mapped_walk returns true). FWIW,
> > > folio_referenced() is never called for hugetlb folios.
> >
> > FWIU folio_mapcount is the thing it needs for now to do the rmap walks -
> > it'll walk every leaf page being mapped, big or small, so IIUC that number
> > should match with what it expects to see later, more or less.
>
> I don't fully understand what you mean here.

I meant that the rmap_walk pairing with folio_referenced_one() will walk
all the leaves for the folio, big or small.  I think that will match the
number returned from folio_mapcount().

> >
> > But I agree the mapcount/referenced value itself is debatable to me, just
> > like what you raised in the other thread on page migration. Meanwhile, I
> > am not certain whether the mapcount is accurate either because AFAICT the
> > mapcount can be modified if e.g. a new page mapping is established right
> > before taking the page lock later in folio_referenced().
> >
> > It's just that I don't see any severe issue due to any of the above, as
> > long as that information is only used as a hint for next steps, e.g., to
> > decide which page to swap out.
>
> I also don't see a big problem with folio_referenced() (and you're
> right that folio_mapcount() can be stale by the time it takes the
> folio lock). It still seems like folio_mapcount() should return the
> total number of PTEs that map the page though. Are you saying that
> breaking this would be ok?

I didn't quite follow - isn't that already doing so?  folio_mapcount() is
total_compound_mapcount() here; IIUC it is an accumulated value of all the
PTEs or PMDs being mapped, as long as each maps all or part of the folio.

-- 
Peter Xu