From: Nico Pache <npache@redhat.com>
Date: Mon, 3 Mar 2025 12:13:19 -0700
Subject: Re: [RFC v2 7/9] khugepaged: add mTHP support
To: Ryan Roberts
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linux-mm@kvack.org, anshuman.khandual@arm.com, catalin.marinas@arm.com, cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com, apopple@nvidia.com, dave.hansen@linux.intel.com, will@kernel.org, baohua@kernel.org, jack@suse.cz, srivatsa@csail.mit.edu, haowenchao22@gmail.com, hughd@google.com, aneesh.kumar@kernel.org, yang@os.amperecomputing.com, peterx@redhat.com, ioworker0@gmail.com, wangkefeng.wang@huawei.com, ziy@nvidia.com, jglisse@google.com, surenb@google.com, vishal.moola@gmail.com, zokeefe@google.com, zhengqi.arch@bytedance.com, jhubbard@nvidia.com, 21cnbao@gmail.com, willy@infradead.org, kirill.shutemov@linux.intel.com, david@redhat.com, aarcange@redhat.com, raquini@redhat.com, dev.jain@arm.com, sunnanyong@huawei.com, usamaarif642@gmail.com, audra@redhat.com, akpm@linux-foundation.org, rostedt@goodmis.org, mathieu.desnoyers@efficios.com, tiwai@suse.de
In-Reply-To: <0319c841-cde9-42f6-a230-39b050659f1a@arm.com>
References: <20250211003028.213461-1-npache@redhat.com> <20250211003028.213461-8-npache@redhat.com> <0319c841-cde9-42f6-a230-39b050659f1a@arm.com>
On Wed, Feb 19, 2025 at 9:52 AM Ryan Roberts wrote:
>
> On 11/02/2025 00:30, Nico Pache wrote:
> > Introduce the ability for khugepaged to collapse to different mTHP sizes.
> > While scanning a PMD range for potential collapse candidates, keep track
> > of pages in MIN_MTHP_ORDER chunks via a bitmap. Each bit represents a
> > utilized region of order MIN_MTHP_ORDER ptes. We remove the restriction
> > of max_ptes_none during the scan phase so we don't bail out early and
> > miss potential mTHP candidates.
> >
> > After the scan is complete we will perform binary recursion on the
> > bitmap to determine which mTHP size would be most efficient to collapse
> > to. max_ptes_none will be scaled by the attempted collapse order to
> > determine how full a THP must be to be eligible.
> >
> > If an mTHP collapse is attempted, but the range contains swapped-out or
> > shared pages, we don't perform the collapse.
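
The scaling rule described here reduces to a single right shift. A minimal
sketch, where scaled_max_ptes_none() is a hypothetical helper name (the
patch itself open-codes the MIN_MTHP_ORDER case as "scaled_none" later in
the diff):

    /*
     * Hypothetical helper: scale the PMD-level max_ptes_none threshold
     * down to a smaller collapse order, so an mTHP has to be
     * proportionally as "full" as a PMD-sized THP to be eligible.
     */
    static int scaled_max_ptes_none(int order)
    {
            return khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - order);
    }

For example, with 4K pages (HPAGE_PMD_ORDER == 9) and the default
max_ptes_none of 511, an order-4 (64K) attempt tolerates 511 >> 5 == 15
empty PTEs out of 16.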
> >
> > Signed-off-by: Nico Pache <npache@redhat.com>
> > ---
> >  mm/khugepaged.c | 122 ++++++++++++++++++++++++++++++++----------------
> >  1 file changed, 83 insertions(+), 39 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index c8048d9ec7fb..cd310989725b 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -1127,13 +1127,14 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >  {
> >       LIST_HEAD(compound_pagelist);
> >       pmd_t *pmd, _pmd;
> > -     pte_t *pte;
> > +     pte_t *pte, mthp_pte;
> >       pgtable_t pgtable;
> >       struct folio *folio;
> >       spinlock_t *pmd_ptl, *pte_ptl;
> >       int result = SCAN_FAIL;
> >       struct vm_area_struct *vma;
> >       struct mmu_notifier_range range;
> > +     unsigned long _address = address + offset * PAGE_SIZE;
> >       VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> >
> >       /*
> > @@ -1148,12 +1149,13 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >               *mmap_locked = false;
> >       }
> >
> > -     result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
> > +     result = alloc_charge_folio(&folio, mm, cc, order);
> >       if (result != SCAN_SUCCEED)
> >               goto out_nolock;
> >
> >       mmap_read_lock(mm);
> > -     result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
> > +     *mmap_locked = true;
> > +     result = hugepage_vma_revalidate(mm, address, true, &vma, cc, order);
> >       if (result != SCAN_SUCCEED) {
> >               mmap_read_unlock(mm);
> >               goto out_nolock;
> > @@ -1171,13 +1173,14 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >                * released when it fails. So we jump out_nolock directly in
> >                * that case. Continuing to collapse causes inconsistency.
> >                */
> > -             result = __collapse_huge_page_swapin(mm, vma, address, pmd,
> > -                                                  referenced, HPAGE_PMD_ORDER);
> > +             result = __collapse_huge_page_swapin(mm, vma, _address, pmd,
> > +                                                  referenced, order);
> >               if (result != SCAN_SUCCEED)
> >                       goto out_nolock;
> >       }
> >
> >       mmap_read_unlock(mm);
> > +     *mmap_locked = false;
> >       /*
> >        * Prevent all access to pagetables with the exception of
> >        * gup_fast later handled by the ptep_clear_flush and the VM
> > @@ -1187,7 +1190,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >        * mmap_lock.
> >        */
> >       mmap_write_lock(mm);
> > -     result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
> > +     result = hugepage_vma_revalidate(mm, address, true, &vma, cc, order);
> >       if (result != SCAN_SUCCEED)
> >               goto out_up_write;
> >       /* check if the pmd is still valid */
> > @@ -1198,11 +1201,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >       vma_start_write(vma);
> >       anon_vma_lock_write(vma->anon_vma);
> >
> > -     mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, address,
> > -                             address + HPAGE_PMD_SIZE);
> > +     mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, _address,
> > +                             _address + (PAGE_SIZE << order));
> >       mmu_notifier_invalidate_range_start(&range);
> >
> >       pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
> > +
> >       /*
> >        * This removes any huge TLB entry from the CPU so we won't allow
> >        * huge and small TLB entries for the same virtual address to
> > @@ -1216,10 +1220,10 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >       mmu_notifier_invalidate_range_end(&range);
> >       tlb_remove_table_sync_one();
> >
> > -     pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
> > +     pte = pte_offset_map_lock(mm, &_pmd, _address, &pte_ptl);
> >       if (pte) {
> > -             result = __collapse_huge_page_isolate(vma, address, pte, cc,
> > -                                                   &compound_pagelist, HPAGE_PMD_ORDER);
> > +             result = __collapse_huge_page_isolate(vma, _address, pte, cc,
> > +                                                   &compound_pagelist, order);
> >               spin_unlock(pte_ptl);
> >       } else {
> >               result = SCAN_PMD_NULL;
> > @@ -1248,8 +1252,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >       anon_vma_unlock_write(vma->anon_vma);
> >
> >       result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
> > -                                        vma, address, pte_ptl,
> > -                                        &compound_pagelist, HPAGE_PMD_ORDER);
> > +                                        vma, _address, pte_ptl,
> > +                                        &compound_pagelist, order);
> >       pte_unmap(pte);
> >       if (unlikely(result != SCAN_SUCCEED))
> >               goto out_up_write;
> > @@ -1260,20 +1264,37 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >        * write.
> >        */
> >       __folio_mark_uptodate(folio);
> > -     pgtable = pmd_pgtable(_pmd);
> > -
> > -     _pmd = mk_huge_pmd(&folio->page, vma->vm_page_prot);
> > -     _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
> > -
> > -     spin_lock(pmd_ptl);
> > -     BUG_ON(!pmd_none(*pmd));
> > -     folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
> > -     folio_add_lru_vma(folio, vma);
> > -     pgtable_trans_huge_deposit(mm, pmd, pgtable);
> > -     set_pmd_at(mm, address, pmd, _pmd);
> > -     update_mmu_cache_pmd(vma, address, pmd);
> > -     deferred_split_folio(folio, false);
> > -     spin_unlock(pmd_ptl);
> > +     if (order == HPAGE_PMD_ORDER) {
> > +             pgtable = pmd_pgtable(_pmd);
> > +             _pmd = mk_huge_pmd(&folio->page, vma->vm_page_prot);
> > +             _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
> > +
> > +             spin_lock(pmd_ptl);
> > +             BUG_ON(!pmd_none(*pmd));
> > +             folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE);
> > +             folio_add_lru_vma(folio, vma);
> > +             pgtable_trans_huge_deposit(mm, pmd, pgtable);
> > +             set_pmd_at(mm, address, pmd, _pmd);
> > +             update_mmu_cache_pmd(vma, address, pmd);
> > +             deferred_split_folio(folio, false);
> > +             spin_unlock(pmd_ptl);
> > +     } else { //mTHP
> > +             mthp_pte = mk_pte(&folio->page, vma->vm_page_prot);
> > +             mthp_pte = maybe_mkwrite(pte_mkdirty(mthp_pte), vma);
> > +
> > +             spin_lock(pmd_ptl);
> > +             folio_ref_add(folio, (1 << order) - 1);
> > +             folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE);
> > +             folio_add_lru_vma(folio, vma);
> > +             spin_lock(pte_ptl);
> > +             set_ptes(vma->vm_mm, _address, pte, mthp_pte, (1 << order));
> > +             update_mmu_cache_range(NULL, vma, _address, pte, (1 << order));
> > +             spin_unlock(pte_ptl);
> > +             smp_wmb(); /* make pte visible before pmd */
> > +             pmd_populate(mm, pmd, pmd_pgtable(_pmd));
> > +             deferred_split_folio(folio, false);
> > +             spin_unlock(pmd_ptl);
>
> I've only stared at this briefly, but it feels like there might be some bugs:

Sorry for the delayed response, I needed to catch up on some other work
and wanted to make sure I looked into your questions before answering.

> - Why are you taking the pmd ptl? and calling pmd_populate? Surely the pte
> table already exists and is attached to the pmd? So we only need to update
> the pte entries here? Or perhaps the whole pmd was previously isolated?

The previous locking behavior is kept; however, because we are not
installing a NEW pmd we need to repopulate the old PMD (like we do in
the fail case). The PMD entry was cleared to avoid GUP-fast races.

> - I think some arches use a single PTL for all levels of the pgtable? So in
> this case it's probably not a good idea to nest the pmd and pte spin locks?

Thanks for pointing that out; I corrected it by making sure they don't nest!

> - Given the pte PTL is dropped then reacquired, is there any way that the ptes
> could have changed under us? Is any revalidation required? Perhaps not if the
> pte table was removed from the PMD.

Correct, I believe we don't even need to take the PTL because of all
the write locks we took -- but for now I'm trying to keep the locking
changes to a minimum. We can focus on locking optimizations later.

> - I would have guessed the memory ordering you want from smp_wmb() would
> already be handled by the spin_unlock()?

Yes, I think that is correct. I noticed other callers doing this, but
on a second pass those are all lockless, so in this case we don't need it.
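
To make the ordering question concrete, the mTHP branch of the hunk boils
down to the sequence below. This is a condensed illustration of the code
above with the discussion folded into comments, not a proposed change:

    spin_lock(pmd_ptl);
    /* ... refcount, rmap and LRU setup elided ... */
    spin_lock(pte_ptl);
    set_ptes(vma->vm_mm, _address, pte, mthp_pte, 1 << order); /* fill PTEs */
    spin_unlock(pte_ptl); /* release: the PTE stores cannot pass this */
    /*
     * smp_wmb() here mirrors the lockless fault paths that publish a
     * PTE table; per the exchange above it is likely redundant when the
     * surrounding locks already order the PTE stores before the PMD store.
     */
    pmd_populate(mm, pmd, pmd_pgtable(_pmd)); /* reattach the PTE table
                                               * cleared earlier to stop
                                               * GUP-fast walkers */
    spin_unlock(pmd_ptl);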
> > +     }
> >
> >       folio = NULL;
> >
> > @@ -1353,21 +1374,27 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> >  {
> >       pmd_t *pmd;
> >       pte_t *pte, *_pte;
> > +     int i;
> >       int result = SCAN_FAIL, referenced = 0;
> >       int none_or_zero = 0, shared = 0;
> >       struct page *page = NULL;
> >       struct folio *folio = NULL;
> >       unsigned long _address;
> > +     unsigned long enabled_orders;
> >       spinlock_t *ptl;
> >       int node = NUMA_NO_NODE, unmapped = 0;
> >       bool writable = false;
> > -
> > +     int chunk_none_count = 0;
> > +     int scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - MIN_MTHP_ORDER);
> > +     unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
> >       VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> >
> >       result = find_pmd_or_thp_or_none(mm, address, &pmd);
> >       if (result != SCAN_SUCCEED)
> >               goto out;
> >
> > +     bitmap_zero(cc->mthp_bitmap, MAX_MTHP_BITMAP_SIZE);
> > +     bitmap_zero(cc->mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE);
> >       memset(cc->node_load, 0, sizeof(cc->node_load));
> >       nodes_clear(cc->alloc_nmask);
> >       pte = pte_offset_map_lock(mm, pmd, address, &ptl);
> > @@ -1376,8 +1403,12 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> >               goto out;
> >       }
> >
> > -     for (_address = address, _pte = pte; _pte < pte + HPAGE_PMD_NR;
> > -          _pte++, _address += PAGE_SIZE) {
> > +     for (i = 0; i < HPAGE_PMD_NR; i++) {
> > +             if (i % MIN_MTHP_NR == 0)
> > +                     chunk_none_count = 0;
> > +
> > +             _pte = pte + i;
> > +             _address = address + i * PAGE_SIZE;
> >               pte_t pteval = ptep_get(_pte);
> >               if (is_swap_pte(pteval)) {
> >                       ++unmapped;
> > @@ -1400,16 +1431,14 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> >                       }
> >               }
> >               if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> > +                     ++chunk_none_count;
> >                       ++none_or_zero;
> > -                     if (!userfaultfd_armed(vma) &&
> > -                         (!cc->is_khugepaged ||
> > -                          none_or_zero <= khugepaged_max_ptes_none)) {
> > -                             continue;
> > -                     } else {
> > +                     if (userfaultfd_armed(vma)) {
> >                               result = SCAN_EXCEED_NONE_PTE;
> >                               count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
> >                               goto out_unmap;
> >                       }
> > +                     continue;
> >               }
> >               if (pte_uffd_wp(pteval)) {
> >                       /*
> > @@ -1500,7 +1529,16 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> >                    folio_test_referenced(folio) || mmu_notifier_test_young(vma->vm_mm,
> >                                                                            address)))
> >                       referenced++;
> > +
> > +             /*
> > +              * we are reading in MIN_MTHP_NR page chunks. if there are no empty
> > +              * pages keep track of it in the bitmap for mTHP collapsing.
> > +              */
> > +             if (chunk_none_count < scaled_none &&
> > +                 (i + 1) % MIN_MTHP_NR == 0)
> > +                     bitmap_set(cc->mthp_bitmap, i / MIN_MTHP_NR, 1);
> >       }
> > +
> >       if (!writable) {
> >               result = SCAN_PAGE_RO;
> >       } else if (cc->is_khugepaged &&
> > @@ -1513,10 +1551,14 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> >  out_unmap:
> >       pte_unmap_unlock(pte, ptl);
> >       if (result == SCAN_SUCCEED) {
> > -             result = collapse_huge_page(mm, address, referenced,
> > -                                         unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0);
> > -             /* collapse_huge_page will return with the mmap_lock released */
> > -             *mmap_locked = false;
> > +             enabled_orders = thp_vma_allowable_orders(vma, vma->vm_flags,
> > +                                                       tva_flags, THP_ORDERS_ALL_ANON);
> > +             result = khugepaged_scan_bitmap(mm, address, referenced, unmapped, cc,
> > +                                             mmap_locked, enabled_orders);
> > +             if (result > 0)
> > +                     result = SCAN_SUCCEED;
> > +             else
> > +                     result = SCAN_FAIL;
> >       }
> >  out:
> >       trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
> > @@ -2476,11 +2518,13 @@ static int khugepaged_collapse_single_pmd(unsigned long addr, struct mm_struct *
> >                       fput(file);
> >                       if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
> >                               mmap_read_lock(mm);
> > +                             *mmap_locked = true;
> >                               if (khugepaged_test_exit_or_disable(mm))
> >                                       goto end;
> >                               result = collapse_pte_mapped_thp(mm, addr,
> >                                                                !cc->is_khugepaged);
> >                               mmap_read_unlock(mm);
> > +                             *mmap_locked = false;
> >                       }
> >               } else {
> >                       result = khugepaged_scan_pmd(mm, vma, addr,
>
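
For readers of this excerpt: khugepaged_scan_bitmap() itself is not part of
the hunk above, so here is a rough sketch of the binary recursion the commit
message describes. Every name and structural choice below is an assumption
for illustration (collapse_from_bitmap() and attempt_collapse() are
invented, and scaled_max_ptes_none() is the hypothetical helper sketched
earlier), not the patch's implementation:

    /*
     * Hypothetical sketch: each bit in the map covers one MIN_MTHP_NR-PTE
     * chunk that the scan found to be sufficiently populated.  Try the
     * largest enabled order first; if the region is not full enough,
     * halve it and retry one order lower.
     */
    static int collapse_from_bitmap(unsigned long *map, int first_chunk,
                                    int order, unsigned long enabled_orders)
    {
            int nr_chunks = 1 << (order - MIN_MTHP_ORDER);
            int populated = 0, i;

            for (i = 0; i < nr_chunks; i++)
                    if (test_bit(first_chunk + i, map))
                            populated++;

            /* Full enough at an enabled order?  Attempt the collapse. */
            if ((enabled_orders & BIT(order)) &&
                (nr_chunks - populated) * MIN_MTHP_NR <=
                        scaled_max_ptes_none(order))
                    return attempt_collapse(first_chunk, order); /* hypothetical */

            if (order == MIN_MTHP_ORDER)
                    return 0;

            /* Split the range in half and recurse one order lower. */
            return collapse_from_bitmap(map, first_chunk, order - 1,
                                        enabled_orders) +
                   collapse_from_bitmap(map, first_chunk + nr_chunks / 2,
                                        order - 1, enabled_orders);
    }

Treating an unset bit as a fully empty chunk overstates the none-PTE count,
so the real accounting likely differs; the shape is the point. A positive
return count matching "if (result > 0) result = SCAN_SUCCEED" in the hunk
above would mean at least one collapse succeeded somewhere in the PMD range.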