Date: Mon, 12 Feb 2024 10:48:39 -0600
From: David Vernet
To: Alexei Starovoitov
Cc: bpf, Daniel Borkmann, Andrii Nakryiko, Kumar Kartikeya Dwivedi,
 Eddy Z, Tejun Heo, Barret Rhoden, Johannes Weiner, Lorenzo Stoakes,
 Andrew Morton, Uladzislau Rezki, Christoph Hellwig, linux-mm, Kernel Team
Subject: Re: [PATCH v2 bpf-next 17/20] selftests/bpf: Add unit tests for bpf_arena_alloc/free_pages
Message-ID: <20240212164839.GB2192269@maniforge.lan>
References: <20240209040608.98927-1-alexei.starovoitov@gmail.com>
 <20240209040608.98927-18-alexei.starovoitov@gmail.com>
 <20240209231433.GE975217@maniforge.lan>
User-Agent: Mutt/2.2.12 (2023-09-09)

On Fri, Feb 09, 2024 at 08:35:01PM -0800, Alexei Starovoitov wrote:
> On Fri, Feb 9, 2024 at 3:14 PM David Vernet wrote:
> >
> > > +
> > > +#ifndef arena_container_of
> >
> > Why is this ifndef required if we have a pragma once above?
>
> Just a habit to check for a macro before defining it.
>
> > Obviously it's way better for us to actually have arenas in the interim
> > so this is fine for now, but UAF bugs could potentially be pretty
> > painful until we get proper exception unwinding support.
>
> Detection that arena access faulted doesn't have to come after
> exception unwinding. Exceptions vs cancellable progs are also different.
> A record of the line in bpf prog that caused the first fault is probably
> good enough for prog debugging.
>
> > Otherwise, in terms of usability, this looks really good. The only thing
> > to bear in mind is that I don't think we can fully get away from kptrs
> > that will have some duplicated logic compared to what we can enable in
> > an arena. For example, we will have to retain at least some of the
> > struct cpumask * kptrs for e.g. copying a struct task_struct's struct
> > cpumask *cpus_ptr field.
>
> I think that's a bit orthogonal.
> task->cpus_ptr is a trusted_ptr_to_btf_id access that can be mixed
> within a program with arena access.

I see, so the idea is that we'd just use regular accesses to query the
bits in that cpumask rather than kfuncs? Similar to how we e.g. read a
task field such as p->comm with a regular load? Ok, that should work.

> > For now, we could iterate over the cpumask and manually set the bits, so
> > maybe even just supporting bpf_cpumask_test_cpu() would be enough
> > (though doing a bitmap_copy() would be better of course)? This is
> > probably fine for most use cases as we'd likely only be doing struct
> > cpumask * -> arena copies on slowpaths. But is there any kind of more
> > generalized integration we want to have between arenas and kptrs? Not
> > sure, can't think of any off the top of my head.
>
> Hopefully we'll be able to invent a way to store kptr-s inside the arena,
> but from a cpumask perspective bpf_cpumask_test_cpu() can be made
> polymorphic to work with arena ptrs and kptrs.
> Same with bpf_cpumask_and(). Mixed arguments can be allowed.
> Args can be either kptr or ptr_to_arena.

This would be ideal. And to make sure I understand, many of these
wouldn't even be kfuncs, right? We'd just be doing loads on two
safe/trusted objects that were both pointers to a bitmap of size
NR_CPUS?
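To make that concrete, here's roughly what I'm picturing. Just an
illustrative sketch, nothing from the series; the names, the hardcoded
CPU count, and the assumption of 64-bit longs are all mine:

/*
 * Open-coded cpumask ops using nothing but plain loads/stores. The same
 * code would work whether the bitmap lives in a BPF arena or sits behind
 * a trusted kernel cpumask pointer, assuming the verifier permits the
 * access in both cases.
 */
#define SKETCH_NR_CPUS	256				/* assumed fixed CPU count */
#define SKETCH_LONGS	((SKETCH_NR_CPUS + 63) / 64)	/* 64-bit longs assumed */

static inline int sketch_cpu_test(const unsigned long *mask, int cpu)
{
	return (mask[cpu / 64] >> (cpu % 64)) & 1;
}

static inline void sketch_cpumask_and(unsigned long *dst,
				      const unsigned long *a,
				      const unsigned long *b)
{
	int i;

	for (i = 0; i < SKETCH_LONGS; i++)
		dst[i] = a[i] & b[i];
}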
> I still believe that we can deprecate 'struct bpf_cpumask'.
> The cpumask_t will stay, of course, but we won't need to
> bpf_obj_new(bpf_cpumask) and carefully track refcnt.
> The arena can do the same much faster.

Yes, I agree. Any use of struct bpf_cpumask * can just be stored in an
arena, and any kfuncs where we were previously passing a struct
bpf_cpumask * could instead just take an arena cpumask, or be done
entirely using BPF instructions per your point above.

> > > +		return 7;
> > > +	page3 = bpf_arena_alloc_pages(&arena, NULL, 1, NUMA_NO_NODE, 0);
> > > +	if (!page3)
> > > +		return 8;
> > > +	*page3 = 3;
> > > +	if (page2 != page3)
> > > +		return 9;
> > > +	if (*page1 != 1)
> > > +		return 10;
> >
> > Should we also test doing a store after an arena has been freed?
>
> You mean the whole bpf arena map was freed ?
> I don't see how the verifier would allow that.
> If you meant a few pages were freed from the arena then such a test is
> already in the patches.

I meant a negative test that verifies we fail to load a prog that does
a write to a freed arena pointer.
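Something along these lines, assuming the verifier can even be taught
to reject that pattern, which may be the real question. Just a sketch:
the __failure / __msg annotations are the usual bpf_misc.h ones, but
the bpf_arena_free_pages() signature, the arena pointer declaration,
and the exact verifier error string are guesses on my part:

SEC("syscall")
__failure __msg("invalid mem access")
int arena_write_after_free(void *ctx)
{
	char __arena *page;

	/* Allocate one page, free it, then store through the stale pointer. */
	page = bpf_arena_alloc_pages(&arena, NULL, 1, NUMA_NO_NODE, 0);
	if (!page)
		return 1;
	bpf_arena_free_pages(&arena, page, 1);	/* assumed signature */
	*page = 2;	/* write after free; prog should be rejected at load time */
	return 0;
}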