From: Matthew Wilcox <willy@infradead.org>
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: "Vlastimil Babka" <vbabka@suse.cz>,
"Christoph Lameter" <cl@linux.com>,
"Pekka Enberg" <penberg@kernel.org>,
"David Rientjes" <rientjes@google.com>,
"Joonsoo Kim" <iamjoonsoo.kim@lge.com>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Roman Gushchin" <roman.gushchin@linux.dev>,
"HORIGUCHI NAOYA(堀口 直也)" <naoya.horiguchi@nec.com>,
"Joe Perches" <joe@perches.com>, "Petr Mladek" <pmladek@suse.com>,
"Andy Shevchenko" <andriy.shevchenko@linux.intel.com>,
"David Hildenbrand" <david@redhat.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
"Alexander Potapenko" <glider@google.com>,
"Marco Elver" <elver@google.com>
Subject: Re: [RFC v3 2/4] mm: move PG_slab flag to page_type
Date: Mon, 30 Jan 2023 05:11:48 +0000
Message-ID: <Y9dRlNhh6O99tg4E@casper.infradead.org>
In-Reply-To: <Y9dI88l2YJZfZ8ny@hyeyoo>
On Mon, Jan 30, 2023 at 01:34:59PM +0900, Hyeonggon Yoo wrote:
> > Seems like quite some changes to page_type to accommodate SLAB, which is
> > hopefully going away soon(TM). Could we perhaps avoid that?
>
> If it could be done with fewer changes, I'll try to avoid that.
Let me outline the idea I had for removing PG_slab:
Observe that PG_reserved and PG_slab are mutually exclusive. Also,
if PG_reserved is set, no other flags are set. If PG_slab is set, only
PG_locked is used. Many of the flags are only for use by anon/page
cache pages (eg referenced, uptodate, dirty, lru, active, workingset,
waiters, error, owner_priv_1, writeback, mappedtodisk, reclaim,
swapbacked, unevictable, mlocked).
Redefine PG_reserved as PG_kernel. Now we can use the other _15_
flags to indicate pagetype, as long as PG_kernel is set. So, eg
PageSlab() can now be (page->flags & PG_type) == PG_slab where
#define PG_kernel 0x00001
#define PG_type (PG_kernel | 0x7fff0)
#define PG_slab (PG_kernel | 0x00010)
#define PG_reserved (PG_kernel | 0x00020)
#define PG_buddy (PG_kernel | 0x00030)
#define PG_offline (PG_kernel | 0x00040)
#define PG_table (PG_kernel | 0x00050)
#define PG_guard (PG_kernel | 0x00060)
That frees up the existing PG_slab, lets us drop the page_type field
altogether, and gives us space to define all the page types we might
want (eg PG_vmalloc).
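To make the encoding concrete, here is a minimal user-space sketch of
that test with the values above copied in. The helper name and the use
of a bare unsigned long for the flags word are assumptions for
illustration, not actual kernel code:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical encoding from this mail: PG_kernel is bit 0, and
 * bits 4..18 select the kernel page type.  These are illustrative
 * values, not the real include/linux/page-flags.h definitions. */
#define PG_kernel   0x00001ul
#define PG_type     (PG_kernel | 0x7fff0ul)
#define PG_slab     (PG_kernel | 0x00010ul)
#define PG_reserved (PG_kernel | 0x00020ul)
#define PG_buddy    (PG_kernel | 0x00030ul)

static bool page_is_slab(unsigned long flags)
{
	/* Masking with PG_type means bits outside the type field
	 * (eg PG_locked on a slab page) don't affect the test. */
	return (flags & PG_type) == PG_slab;
}
```

Note that the comparison is an equality test on the whole type field,
not a single-bit test, so a page can have exactly one kernel type.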
We'll want to reorganise all the flags which are for anon/file pages
into a contiguous block. And now that I think about it, vmalloc pages
can be mapped to userspace, so they can get marked dirty, which leaves
only 14 bits available. Maybe rearrange to ...
PG_locked 0x000001
PG_writeback 0x000002
PG_head 0x000004
PG_dirty 0x000008
PG_owner_priv_1 0x000010
PG_arch_1 0x000020
PG_private 0x000040
PG_waiters 0x000080
PG_kernel 0x000100
PG_referenced 0x000200
PG_uptodate 0x000400
PG_lru 0x000800
PG_active 0x001000
PG_workingset 0x002000
PG_error 0x004000
PG_private_2 0x008000
PG_mappedtodisk 0x010000
PG_reclaim 0x020000
PG_swapbacked 0x040000
PG_unevictable 0x080000
PG_mlocked 0x100000
... or something. There are a number of constraints and it may take
a few iterations to get this right. Oh, and if this is the layout
we use, then:
PG_type 0x1fff00
PG_reserved (PG_kernel | 0x200)
PG_slab (PG_kernel | 0x400)
PG_buddy (PG_kernel | 0x600)
PG_offline (PG_kernel | 0x800)
PG_table (PG_kernel | 0xa00)
PG_guard (PG_kernel | 0xc00)
PG_vmalloc (PG_kernel | 0xe00)
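In this second layout the anon/file bits double as the type field
whenever PG_kernel (now bit 8) is set. A user-space sketch of the
check, with the values above copied in (the helper name is made up):

```c
#include <assert.h>
#include <stdbool.h>

/* Values from the second layout sketched above; illustrative only. */
#define PG_locked   0x000001ul
#define PG_dirty    0x000008ul
#define PG_kernel   0x000100ul
#define PG_type     0x1fff00ul
#define PG_slab     (PG_kernel | 0x400ul)
#define PG_buddy    (PG_kernel | 0x600ul)

static bool page_type_is(unsigned long flags, unsigned long type)
{
	/* 'type' includes PG_kernel, so an anon/file page (PG_kernel
	 * clear) can never match; PG_locked (bit 0) sits outside
	 * PG_type and is ignored. */
	return (flags & PG_type) == type;
}
```

The same mask-and-compare works for every kernel type, which is the
point of packing them into one field.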
This is going to make show_page_flags() more complex :-P
Oh, and while we're doing this, we should just make PG_mlocked
unconditional. NOMMU doesn't need the extra space in page flags
(for what? their large number of NUMA nodes?)
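For a sense of what that extra complexity looks like, here is one
possible shape of the decode step a flags printer would need: first
check PG_kernel, then match the masked type field against a table.
Everything here (table contents, function name, returned strings) is
an assumption for illustration, not the real printer:

```c
#include <assert.h>
#include <string.h>

/* Values from the second layout above; illustrative only. */
#define PG_kernel 0x000100ul
#define PG_type   0x1fff00ul

static const struct {
	unsigned long val;
	const char *name;
} pt_names[] = {
	{ PG_kernel | 0x200ul, "reserved" },
	{ PG_kernel | 0x400ul, "slab" },
	{ PG_kernel | 0x600ul, "buddy" },
	{ PG_kernel | 0x800ul, "offline" },
};

static const char *page_type_name(unsigned long flags)
{
	unsigned int i;

	/* With PG_kernel clear, the same bits are ordinary anon/file
	 * flags and must not be decoded as a type. */
	if (!(flags & PG_kernel))
		return "user";
	for (i = 0; i < sizeof(pt_names) / sizeof(pt_names[0]); i++)
		if ((flags & PG_type) == pt_names[i].val)
			return pt_names[i].name;
	return "unknown";
}
```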
Thread overview: 24+ messages
2022-12-18 10:18 [RFC v3 0/4] " Hyeonggon Yoo
2022-12-18 10:18 ` [RFC v3 1/4] mm/hwpoison: remove MF_MSG_SLAB from action_page_types Hyeonggon Yoo
2022-12-20 23:53 ` HORIGUCHI NAOYA(堀口 直也)
2022-12-21 17:00 ` Andy Shevchenko
2022-12-29 13:18 ` Hyeonggon Yoo
2022-12-29 13:17 ` Hyeonggon Yoo
2022-12-18 10:18 ` [RFC v3 2/4] mm: move PG_slab flag to page_type Hyeonggon Yoo
2023-01-12 16:27 ` Vlastimil Babka
2023-01-30 4:34 ` Hyeonggon Yoo
2023-01-30 5:11 ` Matthew Wilcox [this message]
2023-02-03 16:00 ` Hyeonggon Yoo
2023-02-03 16:19 ` Matthew Wilcox
2023-02-08 13:56 ` Hyeonggon Yoo
2023-02-03 16:04 ` David Hildenbrand
2023-02-08 9:44 ` Mike Rapoport
2023-02-08 10:13 ` David Hildenbrand
2022-12-18 10:19 ` [RFC v3 3/4] mm, printk: introduce new format %pGt for page_type Hyeonggon Yoo
2022-12-19 9:44 ` Andy Shevchenko
2022-12-19 19:35 ` Randy Dunlap
2022-12-20 10:58 ` Andy Shevchenko
2022-12-29 13:35 ` Hyeonggon Yoo
2022-12-20 15:20 ` Petr Mladek
2022-12-29 13:30 ` Hyeonggon Yoo
2022-12-18 10:19 ` [RFC v3 4/4] mm/debug: use %pGt to print page_type in dump_page() Hyeonggon Yoo