From: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Date: Thu, 10 Jul 2025 12:13:20 -0700
Subject: Re: [PATCH v2 6/6] slab: Introduce kmalloc_nolock() and kfree_nolock().
To: Vlastimil Babka
Cc: Harry Yoo, bpf, linux-mm, Shakeel Butt, Michal Hocko, Sebastian Sewior, Andrii Nakryiko, Kumar Kartikeya Dwivedi, Andrew Morton, Peter Zijlstra, Steven Rostedt, Johannes Weiner

On Thu, Jul 10, 2025 at 8:05 AM Vlastimil Babka wrote:
>
> On 7/10/25 12:21, Harry Yoo wrote:
> > On Thu, Jul 10, 2025 at 11:36:02AM +0200, Vlastimil Babka wrote:
> >> On 7/9/25 03:53, Alexei Starovoitov wrote:
> >>
> >> Hm but this is leaking the slab we allocated and have in the "slab"
> >> variable, we need to free it back in that case.

ohh. sorry for the silly mistake.

Re-reading the diff again I realized that I made a similar mistake
in alloc_single_from_new_slab(). It has this bit:

	if (!alloc_debug_processing(...))
		return NULL;

so I assumed that doing:

	if (!spin_trylock_irqsave(&n->list_lock, ...))
		return NULL;

is ok too. Now I see that !alloc_debug is purposefully leaking memory.
Should we add:

@@ -2841,6 +2841,7 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s, struct slab *slab,
	 * It's not really expected that this would fail on a
	 * freshly allocated slab, but a concurrent memory
	 * corruption in theory could cause that.
+	 * Leak the newly allocated slab.
	 */
	return NULL;

so the next person doesn't make the same mistake?

Also help me understand: slab->objects is never equal to 1, right?
/proc/slabinfo agrees, but I cannot decipher that from the slab init
code. Logically it makes sense. If that's the case, why does
alloc_single_from_new_slab() have this part:

	if (slab->inuse == slab->objects)
		add_full(s, n, slab);
	else
		add_partial(n, slab, DEACTIVATE_TO_HEAD);

Shouldn't it call add_partial() only, since slab->inuse == 1 and
slab->objects != 1?

> >
> > But it might be a partial slab taken from the list?
>
> True.
>
> > Then we need to trylock n->list_lock and if that fails, oh...
>
> So... since we succeeded taking it from the list and thus the spin_trylock,
> it means it's safe to spinlock n->list_lock again - we might be waiting on
> another cpu to unlock it, but we know we didn't NMI on our own cpu while
> holding the lock, right? But we'd probably need to convince lockdep about
> this somehow, and also remember whether we allocated a new slab or took one
> from the partial list... or just deal with this unlikely situation in
> another irq work :/

irq_work might be the least mind bending.

Good point about partial vs new slab.
For a partial slab we can indeed proceed with deactivate_slab() and,
if I'm reading the code correctly, it won't have new.inuse == 0, so it
won't go to discard_slab() (which wouldn't be safe in this path).
But teaching lockdep that the below bit in deactivate_slab() is safe:

	} else if (new.freelist) {
		spin_lock_irqsave(&n->list_lock, flags);
		add_partial(n, slab, tail);

is a challenge. Since defer_free_work is there, I'm leaning towards
reusing it for deactivate_slab too. It will process

	static DEFINE_PER_CPU(struct llist_head, defer_free_objects);

and

	static DEFINE_PER_CPU(struct llist_head, defer_deactivate_slabs);

Shouldn't be too ugly. Better ideas?