From: Kairui Song via B4 Relay
Date: Tue, 21 Apr 2026 14:16:52 +0800
Subject: [PATCH v3 08/12] mm, swap: delay and unify memcg lookup and charging for swapin
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20260421-swap-table-p4-v3-8-2f23759a76bc@tencent.com>
References: <20260421-swap-table-p4-v3-0-2f23759a76bc@tencent.com>
In-Reply-To: <20260421-swap-table-p4-v3-0-2f23759a76bc@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, David Hildenbrand, Zi Yan, Baolin Wang, Barry Song,
 Hugh Dickins, Chris Li, Kemeng Shi, Nhat Pham, Baoquan He,
 Johannes Weiner, Youngjun Park, Chengming Zhou, Roman Gushchin,
 Shakeel Butt, Muchun Song, Qi Zheng, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, Kairui Song, Yosry Ahmed, Lorenzo Stoakes,
 Dev Jain, Lance Yang, Michal Hocko, Suren Baghdasaryan, Axel Rasmussen
Reply-To: kasong@tencent.com
From: Kairui Song <kasong@tencent.com>

Instead of checking the cgroup private ID during the page table walk in
swap_pte_batch(), move the memcg lookup into __swap_cache_add_check(),
done under the cluster lock. The first pre-alloc check is speculative
and skips the memcg check, since the post-alloc stable check ensures
all slots covered by the folio belong to the same memcg.
It is very rare for contiguous and aligned entries in one page table
region of the same process, or in one shmem mapping, to belong to
different memcgs.

This also prepares for recording the memcg info in the cluster's table.
Also make the order check and fallback more compact. There should be no
user-observable behavior change.

Signed-off-by: Kairui Song <kasong@tencent.com>
---
 include/linux/memcontrol.h |  6 +++---
 mm/internal.h              | 10 +---------
 mm/memcontrol.c            | 10 ++++------
 mm/swap_state.c            | 28 +++++++++++++++++++---------
 4 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 7d08128de1fd..a013f37f24aa 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -646,8 +646,8 @@ static inline int mem_cgroup_charge(struct folio *folio, struct mm_struct *mm,
 
 int mem_cgroup_charge_hugetlb(struct folio* folio, gfp_t gfp);
 
-int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
-				   gfp_t gfp, swp_entry_t entry);
+int mem_cgroup_swapin_charge_folio(struct folio *folio, unsigned short id,
+				   struct mm_struct *mm, gfp_t gfp);
 
 void __mem_cgroup_uncharge(struct folio *folio);
 
@@ -1137,7 +1137,7 @@ static inline int mem_cgroup_charge_hugetlb(struct folio* folio, gfp_t gfp)
 }
 
 static inline int mem_cgroup_swapin_charge_folio(struct folio *folio,
-		struct mm_struct *mm, gfp_t gfp, swp_entry_t entry)
+		unsigned short id, struct mm_struct *mm, gfp_t gfp)
 {
 	return 0;
 }
diff --git a/mm/internal.h b/mm/internal.h
index 5a2ddcf68e0b..9d2fec696bd6 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -451,24 +451,16 @@ static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
 {
 	pte_t expected_pte = pte_next_swp_offset(pte);
 	const pte_t *end_ptep = start_ptep + max_nr;
-	const softleaf_t entry = softleaf_from_pte(pte);
 	pte_t *ptep = start_ptep + 1;
-	unsigned short cgroup_id;
 
 	VM_WARN_ON(max_nr < 1);
-	VM_WARN_ON(!softleaf_is_swap(entry));
+	VM_WARN_ON(!softleaf_is_swap(softleaf_from_pte(pte)));
 
-	cgroup_id = lookup_swap_cgroup_id(entry);
 	while (ptep < end_ptep) {
-		softleaf_t entry;
-
 		pte = ptep_get(ptep);
 
 		if (!pte_same(pte, expected_pte))
 			break;
-		entry = softleaf_from_pte(pte);
-		if (lookup_swap_cgroup_id(entry) != cgroup_id)
-			break;
 		expected_pte = pte_next_swp_offset(expected_pte);
 		ptep++;
 	}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c7df30ca5aa7..641706fa47bf 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5062,27 +5062,25 @@ int mem_cgroup_charge_hugetlb(struct folio *folio, gfp_t gfp)
 
 /**
  * mem_cgroup_swapin_charge_folio - Charge a newly allocated folio for swapin.
- * @folio: folio to charge.
+ * @folio: the folio to charge
+ * @id: memory cgroup id
  * @mm: mm context of the victim
  * @gfp: reclaim mode
- * @entry: swap entry for which the folio is allocated
  *
  * This function charges a folio allocated for swapin. Please call this before
  * adding the folio to the swapcache.
  *
  * Returns 0 on success. Otherwise, an error code is returned.
  */
-int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
-				   gfp_t gfp, swp_entry_t entry)
+int mem_cgroup_swapin_charge_folio(struct folio *folio, unsigned short id,
+				   struct mm_struct *mm, gfp_t gfp)
 {
 	struct mem_cgroup *memcg;
-	unsigned short id;
 	int ret;
 
 	if (mem_cgroup_disabled())
 		return 0;
 
-	id = lookup_swap_cgroup_id(entry);
 	rcu_read_lock();
 	memcg = mem_cgroup_from_private_id(id);
 	if (!memcg || !css_tryget_online(&memcg->css))
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 12b290d43e45..86d517a33a55 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -142,16 +142,20 @@ void *swap_cache_get_shadow(swp_entry_t entry)
  * @ci: The locked swap cluster
  * @targ_entry: The target swap entry to check, will be rounded down by @nr
  * @nr: Number of slots to check, must be a power of 2
- * @shadowp: Returns the shadow value if one exists in the range.
+ * @shadowp: Returns the shadow value if one exists in the range
+ * @memcg_id: Returns the memory cgroup id, NULL to ignore cgroup check
  *
  * Check if all slots covered by given range have a swap count >= 1.
- * Retrieves the shadow if there is one.
+ * Retrieves the shadow if there is one. If @memcg_id is not NULL, also
+ * checks if all slots belong to the same cgroup and return the cgroup
+ * private id.
  *
  * Context: Caller must lock the cluster.
  */
 static int __swap_cache_add_check(struct swap_cluster_info *ci,
 				  swp_entry_t targ_entry,
-				  unsigned long nr, void **shadowp)
+				  unsigned long nr, void **shadowp,
+				  unsigned short *memcg_id)
 {
 	unsigned int ci_off, ci_end;
 	unsigned long old_tb;
@@ -169,19 +173,24 @@ static int __swap_cache_add_check(struct swap_cluster_info *ci,
 		return -EEXIST;
 	if (!__swp_tb_get_count(old_tb))
 		return -ENOENT;
-	if (swp_tb_is_shadow(old_tb) && shadowp)
+	if (shadowp && swp_tb_is_shadow(old_tb))
 		*shadowp = swp_tb_to_shadow(old_tb);
+	if (memcg_id)
+		*memcg_id = lookup_swap_cgroup_id(targ_entry);
 
 	if (nr == 1)
 		return 0;
 
+	targ_entry.val = round_down(targ_entry.val, nr);
 	ci_off = round_down(ci_off, nr);
 	ci_end = ci_off + nr;
 	do {
 		old_tb = __swap_table_get(ci, ci_off);
 		if (unlikely(swp_tb_is_folio(old_tb) ||
-			     !__swp_tb_get_count(old_tb)))
+			     !__swp_tb_get_count(old_tb) ||
+			     (memcg_id && *memcg_id != lookup_swap_cgroup_id(targ_entry))))
 			return -EBUSY;
+		targ_entry.val++;
 	} while (++ci_off < ci_end);
 
 	return 0;
@@ -397,6 +406,7 @@ static struct folio *__swap_cache_alloc(struct swap_cluster_info *ci,
 	swp_entry_t entry;
 	struct folio *folio;
 	void *shadow = NULL;
+	unsigned short memcg_id;
 	unsigned long address, nr_pages = 1 << order;
 	struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
@@ -404,7 +414,7 @@ static struct folio *__swap_cache_alloc(struct swap_cluster_info *ci,
 
 	/* Check if the slot and range are available, skip allocation if not */
 	spin_lock(&ci->lock);
-	err = __swap_cache_add_check(ci, targ_entry, nr_pages, NULL);
+	err = __swap_cache_add_check(ci, targ_entry, nr_pages, NULL, NULL);
 	spin_unlock(&ci->lock);
 	if (unlikely(err))
 		return ERR_PTR(err);
@@ -427,7 +437,7 @@ static struct folio *__swap_cache_alloc(struct swap_cluster_info *ci,
 
 	/* Double check the range is still not in conflict */
 	spin_lock(&ci->lock);
-	err = __swap_cache_add_check(ci, targ_entry, nr_pages, &shadow);
+	err = __swap_cache_add_check(ci, targ_entry, nr_pages, &shadow, &memcg_id);
 	if (unlikely(err)) {
 		spin_unlock(&ci->lock);
 		folio_put(folio);
@@ -439,8 +449,8 @@ static struct folio *__swap_cache_alloc(struct swap_cluster_info *ci,
 	__swap_cache_do_add_folio(ci, folio, entry);
 	spin_unlock(&ci->lock);
 
-	if (mem_cgroup_swapin_charge_folio(folio, vmf ? vmf->vma->vm_mm : NULL,
-					   gfp, entry)) {
+	if (mem_cgroup_swapin_charge_folio(folio, memcg_id,
+					   vmf ? vmf->vma->vm_mm : NULL, gfp)) {
 		spin_lock(&ci->lock);
 		__swap_cache_do_del_folio(ci, folio, entry, shadow);
 		spin_unlock(&ci->lock);
-- 
2.53.0