From: James Houghton <jthoughton@google.com>
Date: Mon, 30 Jan 2023 10:38:41 -0800
Subject: Re: [PATCH 21/46] hugetlb: use struct hugetlb_pte for walk_hugetlb_range
To: Peter Xu
Cc: Mike Kravetz, David Hildenbrand, Muchun Song, David Rientjes,
    Axel Rasmussen, Mina Almasry, Zach O'Keefe, Manish Mishra,
    Naoya Horiguchi, Dr. David Alan Gilbert, Matthew Wilcox (Oracle),
    Vlastimil Babka, Baolin Wang, Miaohe Lin, Yang Shi, Andrew Morton,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org

On Mon, Jan 30, 2023 at 9:29 AM Peter Xu wrote:
>
> On Fri, Jan 27, 2023 at 01:02:02PM -0800, James Houghton wrote:
> > On Thu, Jan 26, 2023 at 12:31 PM Peter Xu wrote:
> > >
> > > James,
> > >
> > > On Thu, Jan 26, 2023 at 08:58:51AM -0800, James Houghton wrote:
> > > > It turns out that the THP-like scheme significantly slows down
> > > > MADV_COLLAPSE: decrementing the mapcounts for the 4K subpages
> > > > becomes the vast majority of the time spent in MADV_COLLAPSE
> > > > when collapsing 1G mappings. It is doing 262k atomic
> > > > decrements, so this makes sense.
> > > >
> > > > This is only really a problem because it is done between
> > > > mmu_notifier_invalidate_range_start() and
> > > > mmu_notifier_invalidate_range_end(), so KVM won't allow vCPUs
> > > > to access any of the 1G page while we're doing this (and it can
> > > > take ~1 second for each 1G, at least on the x86 server I was
> > > > testing on).
> > >
> > > Did you try to measure the time, or was it a quick observation
> > > from perf?
> >
> > I put some ktime_get()s in.
> >
> > > IIRC I used to measure some atomic ops; they were not as drastic
> > > as I thought. But maybe it depends on many things.
> > >
> > > I'm curious how the 1 sec is divided between the procedures.
> > > E.g., I would expect mmu_notifier_invalidate_range_start() to
> > > also take some time, as it should walk the small-mapped EPT
> > > pgtables.
> >
> > Somehow this doesn't take all that long (only like 10-30ms when
> > collapsing from 4K -> 1G) compared to hugetlb_collapse().
>
> Did you populate as much of the EPT pgtable when measuring this?
>
> IIUC this number should be pretty much proportional to how many pages
> are shadowed in the kvm pgtables. If the EPT table is mostly empty,
> it should be super fast, but OTOH it can be much slower when it's
> populated, because the tdp mmu needs to handle the pgtable leaves one
> by one.
>
> E.g. it should be fully populated if you have a program busy dirtying
> most of the guest pages during test migration.

That's what I was doing. I was running a workload in the guest that
just writes 8 bytes to a page and jumps ahead a few pages, on all
vCPUs, touching most of its memory.
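Roughly, each vCPU was running something like the sketch below (the
stride, region size, and names are made up here, not the exact test
code):

    #include <stdint.h>
    #include <stddef.h>

    #define PAGE_SZ 4096UL
    #define STRIDE  (8 * PAGE_SZ)  /* "a few pages" ahead; made up */

    /* Keep dirtying the region: one 8-byte write per touched page. */
    static void dirty_loop(volatile uint64_t *region, size_t bytes)
    {
            size_t off = 0;

            for (;;) {
                    region[off / sizeof(uint64_t)] = off;
                    off += STRIDE;
                    if (off >= bytes)
                            off = 0;
            }
    }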
But there is more to understand; I'll collect more results. I'm not
sure why the EPT can be unmapped/collapsed so quickly.

> Write op should be the worst case here since it'll require the atomic
> op being applied; see kvm_tdp_mmu_write_spte().
>
> > > Since we'll still keep the intermediate levels around, one other
> > > way to remedy this from the application POV is to further shrink
> > > the size of each COLLAPSE, so potentially for a very large page
> > > we can start by building the 2M layers. But then collapse will
> > > need to be run in at least two rounds.
> >
> > That's exactly what I thought to do. :) I realized, too, that this
> > is actually how userspace *should* collapse things to avoid holding
> > up vCPUs too long. I think this is a good reason to keep
> > intermediate page sizes.
> >
> > When collapsing 4K -> 1G, the mapcount scheme doesn't actually make
> > a huge difference: the THP-like scheme is about 30% slower overall.
> >
> > When collapsing 4K -> 2M -> 1G, the mapcount scheme makes a HUGE
> > difference. For the THP-like scheme, collapsing 4K -> 2M requires
> > decrementing and then re-incrementing subpage->_mapcount, and then
> > from 2M -> 1G, we have to decrement subpage->_mapcount for all 262k
> > subpages. For the head-only scheme, for each 2M in the 4K -> 2M
> > collapse, we decrement the compound_mapcount 512 times (once per
> > PTE), then increment it once. And then for 2M -> 1G, for each 1G,
> > we decrement the mapcount again by 512 (once per PMD), incrementing
> > it once.
>
> Did you have quantified numbers (with your ktime tweak) to compare
> these? If we want to go the other route, I think these will be the
> materials to justify any other approach to mapcount handling.

Ok, I can do that. Give me a couple of days to collect more results
and organize them in a helpful way.

(If it's helpful at all, here are some results I collected last week:
[2]. Please ignore it if it's not helpful.)
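In the meantime, here are the raw atomic-op tallies implied by the
description above (a back-of-the-envelope, assuming 4K base pages:
512 PTEs per PMD, 512 PMDs per PUD, so 262,144 4K subpages per 1G):

    THP-like scheme, 4K -> 2M -> 1G:
      4K -> 2M: 262,144 decrements + 262,144 re-increments
                (subpage->_mapcount, one pair per PTE)
      2M -> 1G: 262,144 decrements (subpage->_mapcount)
      total:    786,432 atomic ops across 262,144 counters

    Head-only scheme, 4K -> 2M -> 1G:
      4K -> 2M: (512 decrements + 1 increment) per 2M, x 512 regions
                = 262,656 ops (all on the compound_mapcount)
      2M -> 1G: 512 decrements + 1 increment
      total:    263,169 atomic ops, all on one counter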
> >
> > The mapcount decrements are about on par with how long it takes to
> > do other things, like updating page tables. The main problem is,
> > with the THP-like scheme (implemented like this [1]), there isn't a
> > way to avoid the 262k decrements when collapsing 1G. So if we want
> > MADV_COLLAPSE to be fast and we want a THP-like page_mapcount()
> > API, then I think something more clever needs to be implemented.
> >
> > [1]: https://github.com/48ca/linux/blob/hgmv2-jan24/mm/hugetlb.c#L127-L178
>
> I believe the whole goal of HGM is to face the same challenge we'd
> have if we allowed 1G THPs to exist and be splittable for anon.
>
> I don't remember whether we discussed the below already; maybe we
> did? Anyway...
>
> Another way to avoid the thp mapcount, and not break smaps and
> similar callers of page_mapcount() on small pages, is to increase
> the hpage mapcount only when the hstate pXd entry (for 1G, the PUD)
> is populated (no matter as a leaf or a non-leaf), and to decrease it
> when the pXd entry is removed (for a leaf, the same as now; for HGM,
> when the pgtable hanging off the PUD entry is freed).

Right, and this is doable. Also it seems like this is pretty close to
the direction Matthew Wilcox wants to go with THPs.
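If I understand the proposal right, it would look something like the
sketch below (hugetlb_hgm_mapcount() is a hypothetical accessor for
illustration, not an existing kernel helper):

    /*
     * Sketch: the hugepage mapcount counts hstate-level pXd entries
     * (PUD entries for 1G pages), not individual PTE mappings.
     */

    /* A VMA populates the hstate-level PUD entry, either as a leaf
     * (the 1G page itself) or as a non-leaf (an HGM page table
     * hanging off the PUD). */
    static void hgm_map_pud(struct page *hpage)
    {
            atomic_inc(hugetlb_hgm_mapcount(hpage)); /* hypothetical */
    }

    /* The PUD entry is removed: for a leaf, on unmap as today; for
     * HGM, when the page table under the PUD is freed. */
    static void hgm_unmap_pud(struct page *hpage)
    {
            atomic_dec(hugetlb_hgm_mapcount(hpage)); /* hypothetical */
    }

With that, MADV_COLLAPSE would touch the mapcount O(1) times per
hstate-level entry instead of once per PTE.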
Something I noticed, though, from the implementation of
folio_referenced()/folio_referenced_one(), is that folio_mapcount()
ought to report the total number of PTEs that point to the page (or
the number of times page_vma_mapped_walk() returns true). FWIW,
folio_referenced() is never called for hugetlb folios.

> Again, in all cases I think some solid measurements would definitely
> be helpful (as commented above) to see how much overhead there will
> be and whether that'll start to become a problem, at least for the
> current motivations of the whole HGM idea.
>
> Thanks,
>
> --
> Peter Xu

Thanks, Peter!

[2]: https://pastebin.com/raw/DVfNFi2m

- James