From: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Date: Wed, 6 Mar 2024 15:11:58 -0800
Subject: Re: [PATCH v4 bpf-next 2/2] mm: Introduce VM_SPARSE kind and vm_area_[un]map_pages().
To: Pasha Tatashin
Cc: bpf, Daniel Borkmann, Andrii Nakryiko, Linus Torvalds, Barret Rhoden,
 Johannes Weiner, Lorenzo Stoakes, Andrew Morton, Uladzislau Rezki,
 Christoph Hellwig, Mike Rapoport, Boris Ostrovsky, sstabellini@kernel.org,
 Juergen Gross, linux-mm, xen-devel@lists.xenproject.org, Kernel Team
References: <20240305030516.41519-1-alexei.starovoitov@gmail.com>
 <20240305030516.41519-3-alexei.starovoitov@gmail.com>

On Wed, Mar 6, 2024 at 2:57 PM Pasha Tatashin wrote:
>
> On Wed, Mar 6, 2024 at 5:13 PM Alexei Starovoitov wrote:
> >
> > On Wed, Mar 6, 2024 at 1:46 PM Pasha Tatashin wrote:
> > >
> > > > > This interface and in general VM_SPARSE would be useful for
> > > > > dynamically grown kernel stacks [1]. However, the might_sleep()
> > > > > here would be a problem. We would need to be able to handle
> > > > > vm_area_map_pages() from interrupt-disabled context, therefore no
> > > > > sleeping. The caller would need to guarantee that the page tables
> > > > > are pre-allocated before the mapping.
> > > >
> > > > Sounds like we'd need to differentiate two kinds of sparse regions.
> > > > One that is really sparse, where page tables are not populated
> > > > (bpf use case), and another where only the pte level might be empty.
> > > > Only the latter one will be usable for such auto-grow stacks.
> > > >
> > > > Months back I played with this idea:
> > > > https://git.kernel.org/pub/scm/linux/kernel/git/ast/bpf.git/commit/?&id=ce63949a879f2f26c1c1834303e6dfbfb79d1fbd
> > > > that would
> > > > "Make vmap_pages_range() allocate page tables down to the last (PTE) level."
> > > > Essentially, pass NULL instead of 'pages' into vmap_pages_range()
> > > > and it will populate all levels except the last.
> > >
> > > Yes, this is what is needed; however, it can be a little simpler with
> > > kernel stacks:
> > > given that the first page in the vm_area is mapped when the stack is
> > > first allocated, and that the VA range is aligned to 16K, we are
> > > actually guaranteed to have all page table levels down to the pte
> > > pre-allocated during that initial mapping. Therefore, we do not need
> > > to worry about allocating them later during PFs.
> >
> > Ahh. Found:
> >   stack = __vmalloc_node_range(THREAD_SIZE, THREAD_ALIGN, ...
> >
> > > > Then the page fault handler can service a fault in the auto-growing
> > > > stack area if it has a page stashed in some per-cpu free list.
> > > > I suspect this is something you might need for
> > > > "16k stack that is populated on fault",
> > > > plus a free list of 3 pages per-cpu,
> > > > and set_pte_at() in the pf handler.
> > >
> > > Yes, what you described is exactly what I am working on: using 3 pages
> > > per-cpu to handle kstack page faults. The only thing that is missing
> > > is the ability to call a non-sleeping version of vm_area_map_pages().
> >
> > vm_area_map_pages() cannot be non-sleepable, since the [start, end)
> > range will dictate whether mid-level allocs and locks are needed.
> >
> > Instead, in alloc_thread_stack_node() you'd need a flavor
> > of get_vm_area() that can align the range to THREAD_ALIGN.
> > Then immediately call _sleepable_ vm_area_map_pages() to populate
> > the first page, and later set_pte_at() the other pages on demand
> > from the fault handler.
>
> We still need to get to the PTE level to use set_pte_at(). So, either
> store it in task_struct for faster PF handling, or add another
> non-sleeping vmap function that will do something like this:
>
> vm_area_set_page_at(addr, page)
> {
>         pgd = pgd_offset_k(addr)
>         p4d = p4d_offset(pgd, addr)
>         pud = pud_offset(p4d, addr)
>         pmd = pmd_offset(pud, addr)
>         pte = pte_offset_kernel(pmd, addr)
>
>         set_pte_at(&init_mm, addr, pte, mk_pte(page, ...));
> }

Right. There are several flavors of this logic across the tree.
What you're proposing is pretty much vmalloc_to_page() that returns
the pte even if !pte_present, instead of a page.
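Spelled out as compilable code, that helper might look something like
this (a sketch only: the name is the placeholder from above, PAGE_KERNEL
is an assumed protection, and it blindly trusts that every level down to
the pte was pre-allocated by the initial mapping):

#include <linux/mm.h>
#include <linux/pgtable.h>

/*
 * Non-sleeping by construction: no allocations, just a walk of
 * init_mm's page tables. Only safe when all levels down to the
 * pte are known to be populated, e.g. by the stack's initial
 * vm_area_map_pages() of the first page.
 */
static void vm_area_set_page_at(unsigned long addr, struct page *page)
{
        pgd_t *pgd = pgd_offset_k(addr);
        p4d_t *p4d = p4d_offset(pgd, addr);
        pud_t *pud = pud_offset(p4d, addr);
        pmd_t *pmd = pmd_offset(pud, addr);
        pte_t *pte = pte_offset_kernel(pmd, addr);

        /* PAGE_KERNEL is an assumption; the real prot is TBD. */
        set_pte_at(&init_mm, addr, pte, mk_pte(page, PAGE_KERNEL));
}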
x86 is doing mostly the same in lookup_address(), fwiw. Good opportunity
to clean all this up and share the code.
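The shared piece might be a vmalloc_to_page()-style walk that hands back
the pte slot itself instead of converting it to a struct page
(hypothetical name and shape, just to illustrate the cleanup; a real
version would also have to handle huge pmd/pud mappings):

/*
 * Hypothetical: walk init_mm down to the pte for a kernel virtual
 * address, returning the pte slot whether or not it is present.
 * vmalloc_to_page(), x86's lookup_address() and the setter above
 * could then be thin wrappers around a walk like this.
 */
static pte_t *kernel_virt_to_pte(unsigned long addr)
{
        pgd_t *pgd = pgd_offset_k(addr);
        p4d_t *p4d;
        pud_t *pud;
        pmd_t *pmd;

        if (pgd_none(*pgd))
                return NULL;
        p4d = p4d_offset(pgd, addr);
        if (p4d_none(*p4d))
                return NULL;
        pud = pud_offset(p4d, addr);
        if (pud_none(*pud))
                return NULL;
        pmd = pmd_offset(pud, addr);
        if (pmd_none(*pmd))
                return NULL;

        /* May be !pte_present(); that is the point. */
        return pte_offset_kernel(pmd, addr);
}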