From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nico Pache <npache@redhat.com>
Date: Mon, 14 Apr 2025 17:40:01 -0600
Subject: Re: [PATCH v3 06/12] khugepaged: introduce khugepaged_scan_bitmap for mTHP support
To: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: akpm@linux-foundation.org, corbet@lwn.net, rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, david@redhat.com, baohua@kernel.org, baolin.wang@linux.alibaba.com, ryan.roberts@arm.com, willy@infradead.org, peterx@redhat.com, ziy@nvidia.com, wangkefeng.wang@huawei.com, usamaarif642@gmail.com, sunnanyong@huawei.com, vishal.moola@gmail.com, thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com, raquini@redhat.com, dev.jain@arm.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org, jglisse@google.com, surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com
References: <20250414220557.35388-1-npache@redhat.com> <20250414220557.35388-7-npache@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

On Mon, Apr 14, 2025 at 5:06 PM Nico Pache <npache@redhat.com> wrote:
>
> On Mon, Apr 14, 2025 at 4:07 PM Nico Pache <npache@redhat.com> wrote:
> >
> > khugepaged scans PMD ranges for potential collapse to a hugepage. To add
> > mTHP support we use this scan to instead record chunks of fully utilized
> > sections of the PMD.
> >
> > Create a bitmap to represent a PMD in order-MIN_MTHP_ORDER chunks.
> > By default we set this to order 3. The reasoning is that for 4K pages
> > and a 512-entry PMD this results in a 64-bit bitmap, which allows some
> > optimizations. For other arches, like ARM64 with 64K pages, we can set
> > a larger order if needed.
> >
> > khugepaged_scan_bitmap uses a stack struct to recursively scan a bitmap
> > that represents chunks of utilized regions. We can then determine what
> > mTHP size fits best and, in the following patch, we set this bitmap while
> > scanning the PMD.
> >
> > max_ptes_none is used as a scale to determine how "full" an order must
> > be before being considered for collapse.
> >
> > If an order is set to "always", we always collapse to that order in a
> > greedy manner.
> >
> > Signed-off-by: Nico Pache <npache@redhat.com>
> > ---
> >  include/linux/khugepaged.h |  4 ++
> >  mm/khugepaged.c            | 94 ++++++++++++++++++++++++++++++++++----
> >  2 files changed, 89 insertions(+), 9 deletions(-)
> >
> > diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
> > index 1f46046080f5..60d41215bc1a 100644
> > --- a/include/linux/khugepaged.h
> > +++ b/include/linux/khugepaged.h
> > @@ -1,6 +1,10 @@
> >  /* SPDX-License-Identifier: GPL-2.0 */
> >  #ifndef _LINUX_KHUGEPAGED_H
> >  #define _LINUX_KHUGEPAGED_H
> > +#define KHUGEPAGED_MIN_MTHP_ORDER 3
> Somehow I managed to drop
>     #define KHUGEPAGED_MIN_MTHP_ORDER 2
> when cleaning up my patches.
>
> Sending a V4 of just this patch in reply to this email.
>
> Sorry for the noise...
Sorry, more noise...

The #define KHUGEPAGED_MIN_MTHP_ORDER 2 fixup got merged into the wrong
commit and is actually in 07/12. If we take this V4, the merge will
clean up the 07/12 commit with no additional changes. If sending out a
V4 of 07/12 is needed, please let me know.
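
To make the sizing concrete: below is a throwaway userspace sketch of the
arithmetic (not kernel code), assuming a 4K page size where HPAGE_PMD_ORDER
is 9, i.e. 512 PTEs per PMD:

#include <stdio.h>

int main(void)
{
        int hpage_pmd_order = 9;        /* assumed: 4K pages, 512-entry PMD */

        /* compare the two candidate minimum orders discussed above */
        for (int min_order = 2; min_order <= 3; min_order++) {
                int bits = 1 << (hpage_pmd_order - min_order);
                int ptes_per_bit = 1 << min_order;

                printf("KHUGEPAGED_MIN_MTHP_ORDER=%d -> %d-bit bitmap, %d PTEs per bit\n",
                       min_order, bits, ptes_per_bit);
        }
        return 0;
}

So order 3 gives the 64-bit bitmap mentioned in the commit message (8 PTEs
per bit), and dropping to order 2 doubles that to 128 bits at 4-PTE
granularity; configurations with more PTEs per PMD grow it further, which
is presumably why MAX_MTHP_BITMAP_SIZE below is derived from
MAX_PTRS_PER_PTE.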
> >
> > +#define KHUGEPAGED_MIN_MTHP_NR  (1<<KHUGEPAGED_MIN_MTHP_ORDER)
> > +#define MAX_MTHP_BITMAP_SIZE  (1 << (ilog2(MAX_PTRS_PER_PTE) - KHUGEPAGED_MIN_MTHP_ORDER))
> > +#define MTHP_BITMAP_SIZE  (1 << (HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER))
> >
> >  extern unsigned int khugepaged_max_ptes_none __read_mostly;
> >  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index dfecedc6a515..5a3be30096fc 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -94,6 +94,11 @@ static DEFINE_READ_MOSTLY_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
> >
> >  static struct kmem_cache *mm_slot_cache __ro_after_init;
> >
> > +struct scan_bit_state {
> > +       u8 order;
> > +       u16 offset;
> > +};
> > +
> >  struct collapse_control {
> >         bool is_khugepaged;
> >
> > @@ -102,6 +107,18 @@ struct collapse_control {
> >
> >         /* nodemask for allocation fallback */
> >         nodemask_t alloc_nmask;
> > +
> > +       /*
> > +        * bitmap used to collapse mTHP sizes.
> > +        * 1 bit = order KHUGEPAGED_MIN_MTHP_ORDER mTHP
> > +        */
> > +       DECLARE_BITMAP(mthp_bitmap, MAX_MTHP_BITMAP_SIZE);
> > +       DECLARE_BITMAP(mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE);
> > +       struct scan_bit_state mthp_bitmap_stack[MAX_MTHP_BITMAP_SIZE];
> > +};
> > +
> > +struct collapse_control khugepaged_collapse_control = {
> > +       .is_khugepaged = true,
> >  };
> >
> >  /**
> > @@ -851,10 +868,6 @@ static void khugepaged_alloc_sleep(void)
> >         remove_wait_queue(&khugepaged_wait, &wait);
> >  }
> >
> > -struct collapse_control khugepaged_collapse_control = {
> > -       .is_khugepaged = true,
> > -};
> > -
> >  static bool khugepaged_scan_abort(int nid, struct collapse_control *cc)
> >  {
> >         int i;
> > @@ -1118,7 +1131,8 @@ static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
> >
> >  static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >                               int referenced, int unmapped,
> > -                             struct collapse_control *cc)
> > +                             struct collapse_control *cc, bool *mmap_locked,
> > +                             u8 order, u16 offset)
> >  {
> >         LIST_HEAD(compound_pagelist);
> >         pmd_t *pmd, _pmd;
> > @@ -1137,8 +1151,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >          * The allocation can take potentially a long time if it involves
> >          * sync compaction, and we do not need to hold the mmap_lock during
> >          * that. We will recheck the vma after taking it again in write mode.
> > +        * If collapsing mTHPs we may have already released the read_lock.
> >          */
> > -       mmap_read_unlock(mm);
> > +       if (*mmap_locked) {
> > +               mmap_read_unlock(mm);
> > +               *mmap_locked = false;
> > +       }
> >
> >         result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
> >         if (result != SCAN_SUCCEED)
> > @@ -1273,12 +1291,72 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >  out_up_write:
> >         mmap_write_unlock(mm);
> >  out_nolock:
> > +       *mmap_locked = false;
> >         if (folio)
> >                 folio_put(folio);
> >         trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
> >         return result;
> >  }
> >
> > +// Recursive function to consume the bitmap
> > +static int khugepaged_scan_bitmap(struct mm_struct *mm, unsigned long address,
> > +                       int referenced, int unmapped, struct collapse_control *cc,
> > +                       bool *mmap_locked, unsigned long enabled_orders)
> > +{
> > +       u8 order, next_order;
> > +       u16 offset, mid_offset;
> > +       int num_chunks;
> > +       int bits_set, threshold_bits;
> > +       int top = -1;
> > +       int collapsed = 0;
> > +       int ret;
> > +       struct scan_bit_state state;
> > +       bool is_pmd_only = (enabled_orders == (1 << HPAGE_PMD_ORDER));
> > +
> > +       cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> > +               { HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER, 0 };
> > +
> > +       while (top >= 0) {
> > +               state = cc->mthp_bitmap_stack[top--];
> > +               order = state.order + KHUGEPAGED_MIN_MTHP_ORDER;
> > +               offset = state.offset;
> > +               num_chunks = 1 << (state.order);
> > +               // Skip mTHP orders that are not enabled
> > +               if (!test_bit(order, &enabled_orders))
> > +                       goto next;
> > +
> > +               // copy the relevant section to a new bitmap
> > +               bitmap_shift_right(cc->mthp_bitmap_temp, cc->mthp_bitmap, offset,
> > +                                  MTHP_BITMAP_SIZE);
> > +
> > +               bits_set = bitmap_weight(cc->mthp_bitmap_temp, num_chunks);
> > +               threshold_bits = (HPAGE_PMD_NR - khugepaged_max_ptes_none - 1)
> > +                               >> (HPAGE_PMD_ORDER - state.order);
> > +
> > +               // Check if the region is "almost full" based on the threshold
> > +               if (bits_set > threshold_bits || is_pmd_only
> > +                       || test_bit(order, &huge_anon_orders_always)) {
> > +                       ret = collapse_huge_page(mm, address, referenced, unmapped, cc,
> > +                                       mmap_locked, order, offset * KHUGEPAGED_MIN_MTHP_NR);
> > +                       if (ret == SCAN_SUCCEED) {
> > +                               collapsed += (1 << order);
> > +                               continue;
> > +                       }
> > +               }
> > +
> > +next:
> > +               if (state.order > 0) {
> > +                       next_order = state.order - 1;
> > +                       mid_offset = offset + (num_chunks / 2);
> > +                       cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> > +                               { next_order, mid_offset };
> > +                       cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> > +                               { next_order, offset };
> > +               }
> > +       }
> > +       return collapsed;
> > +}
> > +
> >  static int khugepaged_scan_pmd(struct mm_struct *mm,
> >                                struct vm_area_struct *vma,
> >                                unsigned long address, bool *mmap_locked,
> > @@ -1445,9 +1523,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> >         pte_unmap_unlock(pte, ptl);
> >         if (result == SCAN_SUCCEED) {
> >                 result = collapse_huge_page(mm, address, referenced,
> > -                                           unmapped, cc);
> > -               /* collapse_huge_page will return with the mmap_lock released */
> > -               *mmap_locked = false;
> > +                                           unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0);
> >         }
> >  out:
> >         trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
> > --
> > 2.48.1
> >
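
As an aside for anyone trying to follow khugepaged_scan_bitmap() above,
here is a rough, self-contained userspace sketch of the top-down
subdivision and the max_ptes_none scaling. It is only an illustration
under assumed values (4K pages so HPAGE_PMD_ORDER is 9,
KHUGEPAGED_MIN_MTHP_ORDER of 2, max_ptes_none of 64, and a hard-coded toy
bitmap); it is not the kernel code and it skips the enabled-order and
"always" checks that the real function performs:

/*
 * Illustration only -- NOT the kernel implementation.
 */
#include <stdio.h>

#define PMD_ORDER       9
#define PMD_NR          (1 << PMD_ORDER)                     /* 512 PTEs */
#define MIN_MTHP_ORDER  2
#define NR_CHUNKS       (1 << (PMD_ORDER - MIN_MTHP_ORDER))  /* 128 bits */
#define MAX_PTES_NONE   64

/* order is relative to MIN_MTHP_ORDER, offset is in chunks */
struct state { int order; int offset; };

static int weight(const unsigned char *bm, int offset, int nchunks)
{
        int set = 0;

        for (int i = 0; i < nchunks; i++)
                set += bm[offset + i];
        return set;
}

int main(void)
{
        unsigned char bitmap[NR_CHUNKS] = { 0 };
        struct state stack[NR_CHUNKS];
        int top = -1;

        /* pretend the first 40 chunks (160 PTEs) of the PMD are populated */
        for (int i = 0; i < 40; i++)
                bitmap[i] = 1;

        /* start with one entry covering the whole PMD, then subdivide */
        stack[++top] = (struct state){ PMD_ORDER - MIN_MTHP_ORDER, 0 };

        while (top >= 0) {
                struct state s = stack[top--];
                int order = s.order + MIN_MTHP_ORDER;
                int nchunks = 1 << s.order;
                int bits_set = weight(bitmap, s.offset, nchunks);
                /* scale max_ptes_none down to this order, as in the patch */
                int threshold = (PMD_NR - MAX_PTES_NONE - 1)
                                        >> (PMD_ORDER - s.order);

                if (bits_set > threshold) {
                        printf("collapse order-%d at PTE offset %d (%d/%d chunks set)\n",
                               order, s.offset << MIN_MTHP_ORDER, bits_set, nchunks);
                        continue;               /* region consumed */
                }
                if (s.order > 0) {              /* split in half and rescan */
                        stack[++top] = (struct state){ s.order - 1, s.offset + nchunks / 2 };
                        stack[++top] = (struct state){ s.order - 1, s.offset };
                }
        }
        return 0;
}

With that toy bitmap (first 160 of 512 PTEs populated) it reports an
order-7 collapse at PTE offset 0 and an order-5 collapse at PTE offset
128, which is the kind of greedy, largest-fit-first behavior the commit
message describes.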