From: Matthew Wilcox <willy@infradead.org>
To: Dev Jain <dev.jain@arm.com>
Cc: Pedro Falcato <pfalcato@suse.de>,
lsf-pc@lists.linux-foundation.org, ryan.roberts@arm.com,
catalin.marinas@arm.com, will@kernel.org, ardb@kernel.org,
hughd@google.com, baolin.wang@linux.alibaba.com,
akpm@linux-foundation.org, david@kernel.org,
lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
vbabka@suse.cz, rppt@kernel.org, surenb@google.com,
mhocko@suse.com, linux-mm@kvack.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org
Subject: Re: [LSF/MM/BPF TOPIC] Per-process page size
Date: Mon, 23 Feb 2026 15:18:42 +0000 [thread overview]
Message-ID: <aZxv0mEd2BA3Mq-N@casper.infradead.org> (raw)
In-Reply-To: <a778d249-afea-4488-b045-beff3492a48a@arm.com>
On Mon, Feb 23, 2026 at 10:37:55AM +0530, Dev Jain wrote:
> I didn't understand the reclaimable reference, but yes we need to make this efficient.
This goes over 80 columns so much and so often that it's painful to
read. So I didn't.
> So for the above example I gave, native_set_ptes knows the virtual
> address to set - walking the native hierarchy from
> native_pgd->native_pmd->native_pte (in the case of 64K native
> geometry) is inefficient. So we need to maintain a lookup mechanism
> from a Linux pagetable pointer to the native pagetable pointer.
>
> The idea we have currently is to store such a lookup in the struct
> ptdesc of the pagetable page. For a 4K Linux pagetable and a 64K
> native pagetable, 512M/2M = 256 Linux PTE tables correspond to
> different sections of the native PTE table. We will maintain the
> pointer to the relevant section of the native PTE table in the struct
> ptdesc of the pagetable page of the Linux PTE table.
>
> The other case is that a single Linux pgtable leaf entry corresponds
> to multiple native leaf entries - take the case of a Linux PMD table
> which maps 1G of memory; this corresponds to 2 native PTE tables
> (2 x 512M). We will have to store a list of pointers here.
Thread overview: 17+ messages
2026-02-17 14:50 Dev Jain
2026-02-17 15:22 ` Matthew Wilcox
2026-02-17 15:30 ` David Hildenbrand (Arm)
2026-02-17 15:51 ` Ryan Roberts
2026-02-20 4:49 ` Matthew Wilcox
2026-02-20 16:50 ` David Hildenbrand (Arm)
2026-02-23 13:02 ` [Lsf-pc] " Jan Kara
2026-02-18 8:39 ` Dev Jain
2026-02-18 8:58 ` Dev Jain
2026-02-18 9:15 ` David Hildenbrand (Arm)
2026-02-20 9:49 ` Arnd Bergmann
2026-02-20 13:37 ` Pedro Falcato
2026-02-23 5:07 ` Dev Jain
2026-02-23 12:49 ` Pedro Falcato
2026-02-23 13:01 ` David Hildenbrand (Arm)
2026-02-23 15:18 ` Matthew Wilcox [this message]
2026-02-23 16:28 ` David Hildenbrand (Arm)