From: Alexei Starovoitov <alexei.starovoitov@gmail.com>
To: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: bpf <bpf@vger.kernel.org>, Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>,
Linus Torvalds <torvalds@linux-foundation.org>,
Barret Rhoden <brho@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Lorenzo Stoakes <lstoakes@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Uladzislau Rezki <urezki@gmail.com>,
Christoph Hellwig <hch@infradead.org>,
Mike Rapoport <rppt@kernel.org>,
Boris Ostrovsky <boris.ostrovsky@oracle.com>,
sstabellini@kernel.org, Juergen Gross <jgross@suse.com>,
linux-mm <linux-mm@kvack.org>,
xen-devel@lists.xenproject.org, Kernel Team <kernel-team@fb.com>
Subject: Re: [PATCH v4 bpf-next 2/2] mm: Introduce VM_SPARSE kind and vm_area_[un]map_pages().
Date: Wed, 6 Mar 2024 14:12:54 -0800
Message-ID: <CAADnVQL9h7R0zYyr=P4jm9AFvK27Vx+rrUuPjTuw4QpubNngpw@mail.gmail.com>
In-Reply-To: <CA+CK2bAhWgSSotKjiGA4oTbH0XaCAtiWe+5p5u9OAf0ykBekwg@mail.gmail.com>
On Wed, Mar 6, 2024 at 1:46 PM Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
>
> > > This interface, and VM_SPARSE in general, would be useful for
> > > dynamically grown kernel stacks [1]. However, the might_sleep() here
> > > would be a problem. We would need to be able to call
> > > vm_area_map_pages() from an interrupt-disabled context, therefore no
> > > sleeping. The caller would need to guarantee that the page tables are
> > > pre-allocated before the mapping.
> >
> > Sounds like we'd need to differentiate two kinds of sparse regions:
> > one that is truly sparse, where page tables are not populated (the bpf
> > use case), and another where only the PTE level might be empty.
> > Only the latter will be usable for such auto-grow stacks.
> >
> > Months back I played with this idea:
> > https://git.kernel.org/pub/scm/linux/kernel/git/ast/bpf.git/commit/?&id=ce63949a879f2f26c1c1834303e6dfbfb79d1fbd
> > that
> > "Make vmap_pages_range() allocate page tables down to the last (PTE) level."
> > Essentially pass NULL instead of 'pages' into vmap_pages_range()
> > and it will populate all levels except the last.
>
> Yes, this is what is needed; however, it can be a little simpler with
> kernel stacks:
> given that the first page in the vm_area is mapped when the stack is
> first allocated, and that the VA range is aligned to 16K, we are
> actually guaranteed to have all page table levels down to the PTE
> pre-allocated during that initial mapping. Therefore, we do not need
> to worry about allocating them later during page faults.
Ahh. Found:
stack = __vmalloc_node_range(THREAD_SIZE, THREAD_ALIGN, ...
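(To spell out the arithmetic behind that guarantee, assuming x86-64
with 4K pages: one PTE table covers 512 * 4K = 2M of VA, and 2M is a
multiple of 16K, so a 16K-aligned 16K stack never straddles a 2M
boundary. All four PTEs land in the same PTE table, which the initial
one-page mapping already allocated.)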
> > Then the page fault handler can service a fault in the auto-growing
> > stack area if it has a page stashed in some per-cpu free list.
> > I suspect this is something you might need for a
> > "16k stack that is populated on fault",
> > plus a free list of 3 pages per-cpu,
> > and set_pte_at() in the PF handler.
>
> Yes, what you described is exactly what I am working on: using 3
> pages per-cpu to handle kstack page faults. The only thing that is
> missing is the ability to call a non-sleeping version of
> vm_area_map_pages().
vm_area_map_pages() cannot be non-sleepable, since the [start, end)
range dictates whether mid-level page table allocations and locks are
needed. Instead, in alloc_thread_stack_node() you'd need a flavor
of get_vm_area() that can align the range to THREAD_ALIGN.
Then immediately call the _sleepable_ vm_area_map_pages() to populate
the first page, and later set_pte_at() the other pages on demand
from the fault handler, as in the sketch below.
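
A rough, untested sketch of that flow, in case it helps.
get_vm_area_aligned() and take_from_percpu_stack_pool() are
hypothetical placeholders; vm_area_map_pages() is the API from this
series; error unwinding is omitted:

/* Allocation side: may sleep. Reserve an aligned VM_SPARSE range and
 * map only the first page; that also populates all page table levels
 * down to the PTE table for the whole 16K range.
 */
static void *alloc_thread_stack_sketch(int node)
{
	struct vm_struct *area;
	struct page *page;

	/* hypothetical get_vm_area() flavor honoring THREAD_ALIGN */
	area = get_vm_area_aligned(THREAD_SIZE, THREAD_ALIGN, VM_SPARSE);
	if (!area)
		return NULL;

	page = alloc_pages_node(node, THREADINFO_GFP, 0);
	if (!page)
		return NULL;

	/* sleepable: may allocate mid-level page tables and the PTE table */
	if (vm_area_map_pages(area, (unsigned long)area->addr,
			      (unsigned long)area->addr + PAGE_SIZE, &page))
		return NULL;

	return area->addr;
}

/* Fault side: runs with interrupts disabled, so no allocations and no
 * sleeping; just install a pre-allocated per-cpu page with set_pte_at().
 * The PTE table is guaranteed to exist thanks to the initial mapping.
 */
static bool kstack_fault_fixup(unsigned long addr)
{
	pgd_t *pgd = pgd_offset_k(addr);
	p4d_t *p4d = p4d_offset(pgd, addr);
	pud_t *pud = pud_offset(p4d, addr);
	pmd_t *pmd = pmd_offset(pud, addr);
	pte_t *pte = pte_offset_kernel(pmd, addr);
	struct page *page = take_from_percpu_stack_pool(); /* hypothetical */

	if (!page)
		return false;

	set_pte_at(&init_mm, addr, pte, mk_pte(page, PAGE_KERNEL));
	return true;
}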