Date: Mon, 17 Feb 2025 19:12:53 +0000
Subject: Re: [RFC v2 6/9] khugepaged: introduce khugepaged_scan_bitmap for mTHP support
From: Usama Arif <usamaarif642@gmail.com>
To: Nico Pache, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
 linux-mm@kvack.org
Cc: ryan.roberts@arm.com, anshuman.khandual@arm.com, catalin.marinas@arm.com,
 cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com, apopple@nvidia.com,
 dave.hansen@linux.intel.com, will@kernel.org, baohua@kernel.org, jack@suse.cz,
 srivatsa@csail.mit.edu, haowenchao22@gmail.com, hughd@google.com,
 aneesh.kumar@kernel.org, yang@os.amperecomputing.com, peterx@redhat.com,
 ioworker0@gmail.com, wangkefeng.wang@huawei.com, ziy@nvidia.com,
 jglisse@google.com, surenb@google.com, vishal.moola@gmail.com,
 zokeefe@google.com, zhengqi.arch@bytedance.com, jhubbard@nvidia.com,
 21cnbao@gmail.com, willy@infradead.org, kirill.shutemov@linux.intel.com,
 david@redhat.com, aarcange@redhat.com, raquini@redhat.com, dev.jain@arm.com,
 sunnanyong@huawei.com, audra@redhat.com, akpm@linux-foundation.org,
 rostedt@goodmis.org, mathieu.desnoyers@efficios.com, tiwai@suse.de
References: <20250211003028.213461-1-npache@redhat.com>
 <20250211003028.213461-7-npache@redhat.com>
In-Reply-To: <20250211003028.213461-7-npache@redhat.com>
Content-Type: text/plain; charset=UTF-8

On 11/02/2025 00:30, Nico Pache wrote:
> khugepaged scans PMD ranges for potential collapse to a hugepage.
To add
> mTHP support we use this scan to instead record chunks of fully utilized
> sections of the PMD.
> 
> create a bitmap to represent a PMD in order MTHP_MIN_ORDER chunks.

nit: s/MTHP_MIN_ORDER/MIN_MTHP_ORDER/

> by default we will set this to order 3. The reasoning is that for 4K 512
> PMD size this results in a 64 bit bitmap which has some optimizations.
> For other arches like ARM64 64K, we can set a larger order if needed.
> 
> khugepaged_scan_bitmap uses a stack struct to recursively scan a bitmap
> that represents chunks of utilized regions. We can then determine what
> mTHP size fits best and in the following patch, we set this bitmap while
> scanning the PMD.
> 
> max_ptes_none is used as a scale to determine how "full" an order must
> be before being considered for collapse.
> 
> If a order is set to "always" lets always collapse to that order in a
> greedy manner.
> 
> Signed-off-by: Nico Pache
> ---
>  include/linux/khugepaged.h |  4 ++
>  mm/khugepaged.c            | 89 +++++++++++++++++++++++++++++++++++---
>  2 files changed, 86 insertions(+), 7 deletions(-)
> 
> diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
> index 1f46046080f5..1fe0c4fc9d37 100644
> --- a/include/linux/khugepaged.h
> +++ b/include/linux/khugepaged.h
> @@ -1,6 +1,10 @@
>  /* SPDX-License-Identifier: GPL-2.0 */
>  #ifndef _LINUX_KHUGEPAGED_H
>  #define _LINUX_KHUGEPAGED_H
> +#define MIN_MTHP_ORDER	3
> +#define MIN_MTHP_NR	(1 << MIN_MTHP_ORDER)
> +#define MAX_MTHP_BITMAP_SIZE  (1 << (ilog2(MAX_PTRS_PER_PTE * PAGE_SIZE) - MIN_MTHP_ORDER))
> +#define MTHP_BITMAP_SIZE  (1 << (HPAGE_PMD_ORDER - MIN_MTHP_ORDER))
> 
>  extern unsigned int khugepaged_max_ptes_none __read_mostly;
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 3776055bd477..c8048d9ec7fb 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -94,6 +94,11 @@ static DEFINE_READ_MOSTLY_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
> 
>  static struct kmem_cache *mm_slot_cache __ro_after_init;
> 
> +struct scan_bit_state {
> +	u8 order;
> +	u16 offset;
> +};
> +
>  struct collapse_control {
>  	bool is_khugepaged;
> 
> @@ -102,6 +107,15 @@ struct collapse_control {
> 
>  	/* nodemask for allocation fallback */
>  	nodemask_t alloc_nmask;
> +
> +	/* bitmap used to collapse mTHP sizes. 1bit = order MIN_MTHP_ORDER mTHP */
> +	DECLARE_BITMAP(mthp_bitmap, MAX_MTHP_BITMAP_SIZE);
> +	DECLARE_BITMAP(mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE);
> +	struct scan_bit_state mthp_bitmap_stack[MAX_MTHP_BITMAP_SIZE];
> +};
> +
> +struct collapse_control khugepaged_collapse_control = {
> +	.is_khugepaged = true,
>  };
> 
>  /**
> @@ -851,10 +865,6 @@ static void khugepaged_alloc_sleep(void)
>  	remove_wait_queue(&khugepaged_wait, &wait);
>  }
> 
> -struct collapse_control khugepaged_collapse_control = {
> -	.is_khugepaged = true,
> -};
> -
>  static bool khugepaged_scan_abort(int nid, struct collapse_control *cc)
>  {
>  	int i;
> @@ -1112,7 +1122,8 @@ static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
> 
>  static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  			      int referenced, int unmapped,
> -			      struct collapse_control *cc)
> +			      struct collapse_control *cc, bool *mmap_locked,
> +			      u8 order, u16 offset)
>  {
>  	LIST_HEAD(compound_pagelist);
>  	pmd_t *pmd, _pmd;
> @@ -1130,8 +1141,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  	 * The allocation can take potentially a long time if it involves
>  	 * sync compaction, and we do not need to hold the mmap_lock during
>  	 * that. We will recheck the vma after taking it again in write mode.
> +	 * If collapsing mTHPs we may have already released the read_lock.
>  	 */
> -	mmap_read_unlock(mm);
> +	if (*mmap_locked) {
> +		mmap_read_unlock(mm);
> +		*mmap_locked = false;
> +	}
> 
>  	result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
>  	if (result != SCAN_SUCCEED)
> @@ -1266,12 +1281,71 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  out_up_write:
>  	mmap_write_unlock(mm);
>  out_nolock:
> +	*mmap_locked = false;
>  	if (folio)
>  		folio_put(folio);
>  	trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
>  	return result;
>  }
> 
> +// Recursive function to consume the bitmap
> +static int khugepaged_scan_bitmap(struct mm_struct *mm, unsigned long address,
> +			int referenced, int unmapped, struct collapse_control *cc,
> +			bool *mmap_locked, unsigned long enabled_orders)
> +{

Introducing a function without using it will likely make the compiler and
the kernel test robot complain at this commit; you might want to merge this
with the next commit, where you actually use it.

> +	u8 order, next_order;
> +	u16 offset, mid_offset;
> +	int num_chunks;
> +	int bits_set, threshold_bits;
> +	int top = -1;
> +	int collapsed = 0;
> +	int ret;
> +	struct scan_bit_state state;
> +
> +	cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> +		{ HPAGE_PMD_ORDER - MIN_MTHP_ORDER, 0 };
> +
> +	while (top >= 0) {
> +		state = cc->mthp_bitmap_stack[top--];
> +		order = state.order + MIN_MTHP_ORDER;
> +		offset = state.offset;
> +		num_chunks = 1 << (state.order);
> +		// Skip mTHP orders that are not enabled
> +		if (!test_bit(order, &enabled_orders))
> +			goto next;
> +
> +		// copy the relavant section to a new bitmap
> +		bitmap_shift_right(cc->mthp_bitmap_temp, cc->mthp_bitmap, offset,
> +				   MTHP_BITMAP_SIZE);
> +
> +		bits_set = bitmap_weight(cc->mthp_bitmap_temp, num_chunks);
> +		threshold_bits = (HPAGE_PMD_NR - khugepaged_max_ptes_none - 1)
> +				>> (HPAGE_PMD_ORDER - state.order);
> +
> +		// Check if the region is "almost full" based on the threshold
> +		if (bits_set > threshold_bits
> +			|| test_bit(order,
> +			&huge_anon_orders_always)) {
> +			ret = collapse_huge_page(mm, address, referenced, unmapped, cc,
> +					mmap_locked, order, offset * MIN_MTHP_NR);
> +			if (ret == SCAN_SUCCEED) {
> +				collapsed += (1 << order);
> +				continue;
> +			}
> +		}
> +
> +next:
> +		if (state.order > 0) {
> +			next_order = state.order - 1;
> +			mid_offset = offset + (num_chunks / 2);
> +			cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> +				{ next_order, mid_offset };
> +			cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> +				{ next_order, offset };
> +		}
> +	}
> +	return collapsed;
> +}
> +
>  static int khugepaged_scan_pmd(struct mm_struct *mm,
>  			       struct vm_area_struct *vma,
>  			       unsigned long address, bool *mmap_locked,
> @@ -1440,7 +1514,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  	pte_unmap_unlock(pte, ptl);
>  	if (result == SCAN_SUCCEED) {
>  		result = collapse_huge_page(mm, address, referenced,
> -					    unmapped, cc);
> +					    unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0);
>  		/* collapse_huge_page will return with the mmap_lock released */
>  		*mmap_locked = false;
>  	}
> @@ -2856,6 +2930,7 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
>  	mmdrop(mm);
>  	kfree(cc);
> 
> +
>  	return thps == ((hend - hstart) >> HPAGE_PMD_SHIFT) ? 0
>  		: madvise_collapse_errno(last_fail);
>  }