From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-mm@kvack.org,
	Hugh Dickins
Subject: [PATCH 24/28] mm: Move page->deferred_list to folio->_deferred_list
Date: Wed, 11 Jan 2023 14:29:10 +0000
Message-Id: <20230111142915.1001531-25-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230111142915.1001531-1-willy@infradead.org>
References: <20230111142915.1001531-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the entire block of definitions for the second tail page, and
add the deferred list to the struct folio.  This actually moves
_deferred_list to a different offset in struct folio because I don't
see a need to include the padding.

This lets us use list_for_each_entry_safe() in deferred_split_scan()
and avoid a number of calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
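As a standalone illustration of the list-walk change (a minimal
userspace sketch, not kernel code: the list helpers below are
simplified local re-implementations of the <linux/list.h> macros,
and struct folio is reduced to a two-field stand-in), once the list
is threaded through a named member of struct folio,
list_for_each_entry_safe() hands back folios directly; the old
page[2].deferred_list scheme handed back raw list_heads that needed
list_entry() and compound_head() on every iteration:

#include <stddef.h>
#include <stdio.h>

struct list_head {
        struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *head)
{
        head->next = head;
        head->prev = head;
}

/* Insert "entry" immediately after "head", kernel-style. */
static void list_add(struct list_head *entry, struct list_head *head)
{
        entry->next = head->next;
        entry->prev = head;
        head->next->prev = entry;
        head->next = entry;
}

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

/* Simplified version of the kernel macro: "pos" is the containing
 * object, "n" caches the next entry so "pos" may be unlinked. */
#define list_for_each_entry_safe(pos, n, head, member)                  \
        for (pos = container_of((head)->next, typeof(*pos), member),   \
             n = container_of(pos->member.next, typeof(*pos), member); \
             &pos->member != (head);                                   \
             pos = n,                                                  \
             n = container_of(n->member.next, typeof(*n), member))

/* Toy stand-in for struct folio; only the parts the sketch needs. */
struct folio {
        unsigned long flags;
        struct list_head _deferred_list;
};

int main(void)
{
        struct list_head split_queue;
        struct folio f1 = { .flags = 1 }, f2 = { .flags = 2 };
        struct folio *folio, *next;

        INIT_LIST_HEAD(&split_queue);
        list_add(&f1._deferred_list, &split_queue);
        list_add(&f2._deferred_list, &split_queue);

        /*
         * Each iteration yields the containing folio directly; with a
         * list threaded through a tail page, every iteration would
         * first have to convert the entry with compound_head().
         */
        list_for_each_entry_safe(folio, next, &split_queue, _deferred_list)
                printf("folio with flags %#lx\n", folio->flags);

        return 0;
}

The deferred_split_scan() hunks below make the same substitution in
the kernel proper, with folio_try_get()/folio_put() taking over from
get_page_unless_zero()/put_page().
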
 include/linux/huge_mm.h  |  9 ++++-----
 include/linux/mm_types.h | 14 ++++++++------
 mm/huge_memory.c         | 32 +++++++++++++++-----------------
 3 files changed, 27 insertions(+), 28 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index a1341fdcf666..aacfcb02606f 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -295,11 +295,10 @@ static inline bool thp_migration_supported(void)
 
 static inline struct list_head *page_deferred_list(struct page *page)
 {
-        /*
-         * See organization of tail pages of compound page in
-         * "struct page" definition.
-         */
-        return &page[2].deferred_list;
+        struct folio *folio = (struct folio *)page;
+
+        VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
+        return &folio->_deferred_list;
 }
 
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 4b8aa0f8f9fe..c464205cf7ea 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -141,12 +141,6 @@ struct page {
                 struct {        /* Tail pages of compound page */
                         unsigned long compound_head;    /* Bit zero is set */
                 };
-                struct {        /* Second tail page of transparent huge page */
-                        unsigned long _compound_pad_1;  /* compound_head */
-                        unsigned long _compound_pad_2;
-                        /* For both global and memcg */
-                        struct list_head deferred_list;
-                };
                 struct {        /* Second tail page of hugetlb page */
                         unsigned long _hugetlb_pad_1;   /* compound_head */
                         void *hugetlb_subpool;
@@ -302,6 +296,7 @@ static inline struct page *encoded_page_ptr(struct encoded_page *page)
  * @_hugetlb_cgroup: Do not use directly, use accessor in hugetlb_cgroup.h.
  * @_hugetlb_cgroup_rsvd: Do not use directly, use accessor in hugetlb_cgroup.h.
  * @_hugetlb_hwpoison: Do not use directly, call raw_hwp_list_head().
+ * @_deferred_list: Folios to be split under memory pressure.
  *
  * A folio is a physically, virtually and logically contiguous set
  * of bytes.  It is a power-of-two in size, and it is aligned to that
@@ -366,6 +361,13 @@ struct folio {
                         void *_hugetlb_cgroup;
                         void *_hugetlb_cgroup_rsvd;
                         void *_hugetlb_hwpoison;
+        /* private: the union with struct page is transitional */
+                };
+                struct {
+                        unsigned long _flags_2a;
+                        unsigned long _head_2a;
+        /* public: */
+                        struct list_head _deferred_list;
         /* private: the union with struct page is transitional */
                 };
                 struct page __page_2;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index bfa960f012fa..a4138daaa0b8 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2756,9 +2756,9 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
         /* Prevent deferred_split_scan() touching ->_refcount */
         spin_lock(&ds_queue->split_queue_lock);
         if (folio_ref_freeze(folio, 1 + extra_pins)) {
-                if (!list_empty(page_deferred_list(&folio->page))) {
+                if (!list_empty(&folio->_deferred_list)) {
                         ds_queue->split_queue_len--;
-                        list_del(page_deferred_list(&folio->page));
+                        list_del(&folio->_deferred_list);
                 }
                 spin_unlock(&ds_queue->split_queue_lock);
                 if (mapping) {
@@ -2873,8 +2873,8 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
         struct pglist_data *pgdata = NODE_DATA(sc->nid);
         struct deferred_split *ds_queue = &pgdata->deferred_split_queue;
         unsigned long flags;
-        LIST_HEAD(list), *pos, *next;
-        struct page *page;
+        LIST_HEAD(list);
+        struct folio *folio, *next;
         int split = 0;
 
 #ifdef CONFIG_MEMCG
@@ -2884,14 +2884,13 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
 
         spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
         /* Take pin on all head pages to avoid freeing them under us */
-        list_for_each_safe(pos, next, &ds_queue->split_queue) {
-                page = list_entry((void *)pos, struct page, deferred_list);
-                page = compound_head(page);
-                if (get_page_unless_zero(page)) {
-                        list_move(page_deferred_list(page), &list);
+        list_for_each_entry_safe(folio, next, &ds_queue->split_queue,
+                                                        _deferred_list) {
+                if (folio_try_get(folio)) {
+                        list_move(&folio->_deferred_list, &list);
                 } else {
-                        /* We lost race with put_compound_page() */
-                        list_del_init(page_deferred_list(page));
+                        /* We lost race with folio_put() */
+                        list_del_init(&folio->_deferred_list);
                         ds_queue->split_queue_len--;
                 }
                 if (!--sc->nr_to_scan)
@@ -2899,16 +2898,15 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
         }
         spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
 
-        list_for_each_safe(pos, next, &list) {
-                page = list_entry((void *)pos, struct page, deferred_list);
-                if (!trylock_page(page))
+        list_for_each_entry_safe(folio, next, &list, _deferred_list) {
+                if (!folio_trylock(folio))
                         goto next;
                 /* split_huge_page() removes page from list on success */
-                if (!split_huge_page(page))
+                if (!split_folio(folio))
                         split++;
-                unlock_page(page);
+                folio_unlock(folio);
 next:
-                put_page(page);
+                folio_put(folio);
         }
 
         spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
-- 
2.35.1