From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, virtualization@lists.linux.dev,
    linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton,
    Jonathan Corbet, Madhavan Srinivasan, Michael Ellerman,
    Nicholas Piggin, Christophe Leroy, Jerrin Shaji George,
    Arnd Bergmann, Greg Kroah-Hartman, "Michael S. Tsirkin",
    Jason Wang, Xuan Zhuo, Eugenio Pérez, Alexander Viro,
    Christian Brauner, Jan Kara, Zi Yan, Matthew Brost, Joshua Hahn,
    Rakie Kim, Byungchul Park, Gregory Price, Ying Huang,
    Alistair Popple, Lorenzo Stoakes, "Liam R. Howlett",
    Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
    "Matthew Wilcox (Oracle)", Minchan Kim, Sergey Senozhatsky,
    Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
    Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
    Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
Subject: [PATCH v2 20/29] mm: convert "movable" flag in page->mapping to a page flag
Date: Fri, 4 Jul 2025 12:25:14 +0200
Message-ID: <20250704102524.326966-21-david@redhat.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250704102524.326966-1-david@redhat.com>
References: <20250704102524.326966-1-david@redhat.com>
MIME-Version: 1.0

Instead, let's use a page flag. As the page flag can result in false
positives, glue it to the page types for which we support/implement
movable_ops page migration.

We are reusing PG_uptodate, which is used, for example, to track file
system state and does not apply to movable_ops pages. Consequently,
warning in page_has_movable_ops() whenever the bit is set on other page
types could result in false-positive warnings.
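To spell out what that gluing means in practice, here is a sketch of the
resulting check (illustration only; the helper name is made up, and the
real page_has_movable_ops() is the one added by the diff below):

	/*
	 * Sketch: PG_movable_ops aliases PG_uptodate, so the raw bit can
	 * legitimately be set on pages that are not movable_ops pages at
	 * all (e.g., an uptodate pagecache page). Only in combination
	 * with a page type that implements movable_ops page migration
	 * does the bit identify a movable_ops page.
	 */
	static inline bool page_has_movable_ops_sketch(const struct page *page)
	{
		return PageMovableOps(page) &&	/* raw, possibly aliased bit */
		       (PageOffline(page) || PageZsmalloc(page)); /* type glue */
	}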
Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , "Matthew Wilcox (Oracle)" , Minchan Kim , Sergey Senozhatsky , Brendan Jackman , Johannes Weiner , Jason Gunthorpe , John Hubbard , Peter Xu , Xu Xin , Chengming Zhou , Miaohe Lin , Naoya Horiguchi , Oscar Salvador , Rik van Riel , Harry Yoo , Qi Zheng , Shakeel Butt Subject: [PATCH v2 20/29] mm: convert "movable" flag in page->mapping to a page flag Date: Fri, 4 Jul 2025 12:25:14 +0200 Message-ID: <20250704102524.326966-21-david@redhat.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250704102524.326966-1-david@redhat.com> References: <20250704102524.326966-1-david@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: ipFjRc8nMcsK3iX6VS66rLFfqUxfKIpOELkeGTJh9c0_1751624787 X-Mimecast-Originator: redhat.com Content-Transfer-Encoding: 8bit content-type: text/plain; charset="US-ASCII"; x-default=true X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 58CEC40005 X-Stat-Signature: t7mueoi5e7hcrwgmfuihtsjnojir7kha X-Rspam-User: X-HE-Tag: 1751624791-467697 X-HE-Meta: U2FsdGVkX18wdJq0w97+u6TpNxr7YZ2MXjOzxV46IPsfTtQi/VuSlnKPEfxOza8u2PtPIgH2WxyON3eZq4kILJebhJSwunTykjUjN13MDjMvAeG/IHPnmi8GaS62JGhap0OHBtJ/NfBtcM4w1biMMKv3zm0SJTaRh0eQ2faWyYJ67SrhedenH/Lp8Qi7daA0EMEKkW0TjCa36kjvusIDsbO4CibasmHUZYDL6k/f7K1pOk6yJwSMRdrLvMQSvgOXUYjD0ue01Wh2arBMIICZSqPtX0r3QD9OcxxYTn4Cp5GGNDvpVtoQygmMYUaqz3Yy5TP8F9hRWvKWrI3qI+JxfCCquA5LWzj0Bq6Ejo7pAlx/UKYH3WTNspb3CRFZ9UBxFwOmBXIv8viub4mCWFFMUwn1f3vbQowciWqD6Xqc+4uh5N0hKs+cwHLGphVEiEpDJOoCaIQPSPQe+KQ10cPy0vSRloMYKLt+1lfDVOgE2Ufv/YzoBTNMyEBDeyZajrVS9VKrkPjvpvYA9z6ra61GmPreAinJEcPvTHf7Bs/J3rYW+/tByutYMEPo8BilpLqxcPt5r4ImLQgVWwDHRfvvI16tkhDssoMzjyCssBes4OFr86KY2sE2Nbj05qeeXziiTnxxGGSn0fSlXKEE9wIjyyX2S372cvlhFLPeDXlZgCaMwbVeNPc72rQrKv7zWsWKhuwliEaUzzwf1BYx80cavftz1eHatMbe658vHQLC19N9sKbJ9U1DeE1Vdx6P7FEfGj6CceC5PTvAZIPuy37/uDaR0pUJxvGW6/OziI0RDNpVefaeT+NtYU/Nz6WyyiPvMlBLJtP6O5h3rgFqQ3MFw/HF7ByD8xTu+q0U5rwEddYQMcLYYZOGcFwWzJ7Z3G/4MpBRY8aUAP9hKyC0WvtOkqY9W6J52eot5645yL1Fb9D9AR4GCiHNN1yvYGkkZ/mEQcJl7Rs/U+Nh+/qLcWF hvI7GSbA cdVjcwrvgg+l2rLKJNnDjmqy3av2IzSaDO1yTD+UDN9cE0OA+PxksBUHOjVlKJwAK951922WH2gMsa2qel1QokmBnmsb8aFTsS28Bfcw14xW4CO7xGMGMYBjSNiLXw1tVeIfN4JrogeJRYoBXeU7IMTcjZOjnrQIBACAF7tfKtNEPrsUh0xSOHKscy2xEdVQd7OO5fachaJSWso/ZlCvsqRlH0AnduzB60jju4hwzD96I8c1PoCX2paoz0hhZmgXCpRswz1CPjCJWNQkwMtS0tyCZzFiOcBLytRMqqrtLL8FZhvMsAzinO7Rvz4VL2NC6F0xRkxlDs90irn7/Y8iUitPtZqrCcKZuCvjAhJfy3Rk+Aj5Zcugof6ldhMG7g9cLGHBnmXChzPXWoXgXhvBVdEz7RSFk6oJ7WtuCpOmeXkVWUMbrURR5CHjtSPcYs8JjgCNJezlYkFiUbSNiWKomQ9MDQdAB8w5vUT1bty2Gi1tvhlk3mPy5e9qRxnM3tzYm+1X1WT85oZSeHMbRU2sAueGcjZ83ROlFV637auXhiDErZkzDdR0XjmGYP+EmLriO8NT4YlBzoxxYrdTa+o0fDmz3M57wbdlTh+4pFSYF4YCV/tdzjR+XtmuFMiU/VbBAWIZb78/Iu2QqEmOHr2QppWBFL5ZZRg1NBJ/N X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Instead, let's use a page flag. As the page flag can result in false-positives, glue it to the page types for which we support/implement movable_ops page migration. We are reusing PG_uptodate, that is for example used to track file system state and does not apply to movable_ops pages. So warning in case it is set in page_has_movable_ops() on other page types could result in false-positive warnings. Likely we could set the bit using a non-atomic update: in contrast to page->mapping, we could have others trying to update the flags concurrently when trying to lock the folio. 
Reviewed-by: Zi Yan
Reviewed-by: Lorenzo Stoakes
Signed-off-by: David Hildenbrand
---
 include/linux/balloon_compaction.h |  2 +-
 include/linux/migrate.h            |  8 -----
 include/linux/page-flags.h         | 54 ++++++++++++++++++++++++------
 mm/compaction.c                    |  6 ----
 mm/zpdesc.h                        |  2 +-
 5 files changed, 46 insertions(+), 26 deletions(-)

diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index a8a1706cc56f3..b222b0737c466 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -92,7 +92,7 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
 				       struct page *page)
 {
 	__SetPageOffline(page);
-	__SetPageMovable(page);
+	SetPageMovableOps(page);
 	set_page_private(page, (unsigned long)balloon);
 	list_add(&page->lru, &balloon->pages);
 }
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 6aece3f3c8be8..acadd41e0b5cf 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -103,14 +103,6 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
 
 #endif /* CONFIG_MIGRATION */
 
-#ifdef CONFIG_COMPACTION
-void __SetPageMovable(struct page *page);
-#else
-static inline void __SetPageMovable(struct page *page)
-{
-}
-#endif
-
 #ifdef CONFIG_NUMA_BALANCING
 int migrate_misplaced_folio_prepare(struct folio *folio,
 		struct vm_area_struct *vma, int node);
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 4c27ebb689e3c..5f2b570735852 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -170,6 +170,11 @@ enum pageflags {
 	/* non-lru isolated movable page */
 	PG_isolated = PG_reclaim,
 
+#ifdef CONFIG_MIGRATION
+	/* this is a movable_ops page (for selected typed pages only) */
+	PG_movable_ops = PG_uptodate,
+#endif
+
 	/* Only valid for buddy pages. Used to track pages that are reported */
 	PG_reported = PG_uptodate,
 
@@ -698,9 +703,6 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
  * bit; and then folio->mapping points, not to an anon_vma, but to a private
  * structure which KSM associates with that merged page. See ksm.h.
  *
- * PAGE_MAPPING_KSM without PAGE_MAPPING_ANON is used for non-lru movable
- * page and then folio->mapping points to a struct movable_operations.
- *
  * Please note that, confusingly, "folio_mapping" refers to the inode
  * address_space which maps the folio from disk; whereas "folio_mapped"
  * refers to user virtual address space into which the folio is mapped.
@@ -743,13 +745,6 @@ static __always_inline bool PageAnon(const struct page *page)
 {
 	return folio_test_anon(page_folio(page));
 }
-
-static __always_inline bool page_has_movable_ops(const struct page *page)
-{
-	return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
-			PAGE_MAPPING_MOVABLE;
-}
-
 #ifdef CONFIG_KSM
 /*
  * A KSM page is one of those write-protected "shared pages" or "merged pages"
@@ -1133,6 +1128,45 @@ bool is_free_buddy_page(const struct page *page);
 
 PAGEFLAG(Isolated, isolated, PF_ANY);
 
+#ifdef CONFIG_MIGRATION
+/*
+ * This page is migratable through movable_ops (for selected typed pages
+ * only).
+ *
+ * Page migration of such pages might fail, for example, if the page is
+ * already isolated by somebody else, or if the page is about to get freed.
+ *
+ * While a subsystem might set selected typed pages that support page migration
+ * as being movable through movable_ops, it must never clear this flag.
+ *
+ * This flag is only cleared when the page is freed back to the buddy.
+ *
+ * Only selected page types support this flag (see page_movable_ops()) and
+ * the flag might be used in other context for other pages. Always use
+ * page_has_movable_ops() instead.
+ */
+TESTPAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
+SETPAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
+#else /* !CONFIG_MIGRATION */
+TESTPAGEFLAG_FALSE(MovableOps, movable_ops);
+SETPAGEFLAG_NOOP(MovableOps, movable_ops);
+#endif /* CONFIG_MIGRATION */
+
+/**
+ * page_has_movable_ops - test for a movable_ops page
+ * @page: The page to test.
+ *
+ * Test whether this is a movable_ops page. Such pages will stay that
+ * way until freed.
+ *
+ * Returns true if this is a movable_ops page, otherwise false.
+ */
+static inline bool page_has_movable_ops(const struct page *page)
+{
+	return PageMovableOps(page) &&
+		(PageOffline(page) || PageZsmalloc(page));
+}
+
 static __always_inline int PageAnonExclusive(const struct page *page)
 {
 	VM_BUG_ON_PGFLAGS(!PageAnon(page), page);
diff --git a/mm/compaction.c b/mm/compaction.c
index 348eb754cb227..349f4ea0ec3e5 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -114,12 +114,6 @@ static unsigned long release_free_list(struct list_head *freepages)
 }
 
 #ifdef CONFIG_COMPACTION
-void __SetPageMovable(struct page *page)
-{
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	page->mapping = (void *)(PAGE_MAPPING_MOVABLE);
-}
-EXPORT_SYMBOL(__SetPageMovable);
 
 /* Do not skip compaction more than 64 times */
 #define COMPACT_MAX_DEFER_SHIFT 6
diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 6855d9e2732d8..25bf5ea0beb83 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -154,7 +154,7 @@ static inline struct zpdesc *pfn_zpdesc(unsigned long pfn)
 
 static inline void __zpdesc_set_movable(struct zpdesc *zpdesc)
 {
-	__SetPageMovable(zpdesc_page(zpdesc));
+	SetPageMovableOps(zpdesc_page(zpdesc));
 }
 
 static inline void __zpdesc_set_zsmalloc(struct zpdesc *zpdesc)
-- 
2.49.0