From: Yosry Ahmed <yosryahmed@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Yu Zhao <yuzhao@google.com>,
"Jan Alexander Steffens (heftig)" <heftig@archlinux.org>,
Steven Barrett <steven@liquorix.net>,
Brian Geffon <bgeffon@google.com>,
"T.J. Alumbaugh" <talumbau@google.com>,
Gaosheng Cui <cuigaosheng1@huawei.com>,
Suren Baghdasaryan <surenb@google.com>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
David Hildenbrand <david@redhat.com>,
Jason Gunthorpe <jgg@ziepe.ca>,
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
David Howells <dhowells@redhat.com>,
Hugh Dickins <hughd@google.com>,
Greg Thelen <gthelen@google.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Yosry Ahmed <yosryahmed@google.com>
Subject: [RFC PATCH 4/5] mm/vmscan: revive the unevictable LRU
Date: Sun, 18 Jun 2023 06:58:16 +0000 [thread overview]
Message-ID: <20230618065816.1365301-1-yosryahmed@google.com> (raw)

Now that mlock_count no longer overlays page->lru, revive the
unevictable LRU. There is no longer a need to special-case it when
adding or removing a folio to/from the LRUs.

This also enables future work that will use the LRUs to find all user
folios charged to a memcg; having the unevictable LRU ensures we do not
miss a significant chunk of them.
Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 include/linux/mm_inline.h | 11 +++--------
 mm/huge_memory.c          |  3 +--
 mm/mmzone.c               |  8 --------
 3 files changed, 4 insertions(+), 18 deletions(-)
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 0e1d239a882c..203b8db6b4a2 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -319,8 +319,7 @@ void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
 
 	update_lru_size(lruvec, lru, folio_zonenum(folio),
 			folio_nr_pages(folio));
-	if (lru != LRU_UNEVICTABLE)
-		list_add(&folio->lru, &lruvec->lists[lru]);
+	list_add(&folio->lru, &lruvec->lists[lru]);
 }
 
 static __always_inline void add_page_to_lru_list(struct page *page,
@@ -339,21 +338,17 @@ void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio)
 
 	update_lru_size(lruvec, lru, folio_zonenum(folio),
 			folio_nr_pages(folio));
-	/* This is not expected to be used on LRU_UNEVICTABLE */
 	list_add_tail(&folio->lru, &lruvec->lists[lru]);
 }
 
 static __always_inline
 void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
 {
-	enum lru_list lru = folio_lru_list(folio);
-
 	if (lru_gen_del_folio(lruvec, folio, false))
 		return;
 
-	if (lru != LRU_UNEVICTABLE)
-		list_del(&folio->lru);
-	update_lru_size(lruvec, lru, folio_zonenum(folio),
+	list_del(&folio->lru);
+	update_lru_size(lruvec, folio_lru_list(folio), folio_zonenum(folio),
 			-folio_nr_pages(folio));
 }
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0e5b58ca603f..4aa2f4ad8da7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2407,8 +2407,7 @@ static void lru_add_page_tail(struct page *head, struct page *tail,
 	} else {
 		/* head is still on lru (and we have it frozen) */
 		VM_WARN_ON(!PageLRU(head));
-		if (!PageUnevictable(tail))
-			list_add_tail(&tail->lru, &head->lru);
+		list_add_tail(&tail->lru, &head->lru);
 		SetPageLRU(tail);
 	}
 }
diff --git a/mm/mmzone.c b/mm/mmzone.c
index 68e1511be12d..7678177bd639 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -81,14 +81,6 @@ void lruvec_init(struct lruvec *lruvec)
 
 	for_each_lru(lru)
 		INIT_LIST_HEAD(&lruvec->lists[lru]);
 
-	/*
-	 * The "Unevictable LRU" is imaginary: though its size is maintained,
-	 * it is never scanned, and unevictable pages are not threaded on it
-	 * (so that their lru fields can be reused to hold mlock_count).
-	 * Poison its list head, so that any operations on it would crash.
-	 */
-	list_del(&lruvec->lists[LRU_UNEVICTABLE]);
-
 	lru_gen_init_lruvec(lruvec);
 }
--
2.41.0.162.gfafddb0af9-goog