Date: Wed, 12 Mar 2025 05:45:45 -0400
From: Steven Rostedt
To: Mateusz Guzik
Cc: Alexei Starovoitov, Andrew Morton, bpf, Andrii Nakryiko,
 Kumar Kartikeya Dwivedi, Peter Zijlstra, Vlastimil Babka,
 Sebastian Sewior, Hou Tao, Johannes Weiner, Shakeel Butt,
 Michal Hocko, Matthew Wilcox, Thomas Gleixner, Jann Horn,
 Tejun Heo, linux-mm, Kernel Team
Subject: Re: [PATCH bpf-next v9 2/6] mm, bpf: Introduce try_alloc_pages()
 for opportunistic page allocation
Message-ID: <20250312054545.11681338@batman.local.home>
References: <20250222024427.30294-1-alexei.starovoitov@gmail.com>
 <20250222024427.30294-3-alexei.starovoitov@gmail.com>
 <20250310190427.32ce3ba9adb3771198fe2a5c@linux-foundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
On Tue, 11 Mar 2025 19:04:47 +0100
Mateusz Guzik wrote:

> A small bit before that:
>
> 	if (!spin_trylock_irqsave(&zone->lock, flags)) {
> 		if (unlikely(alloc_flags & ALLOC_TRYLOCK))
> 			return NULL;
> 		spin_lock_irqsave(&zone->lock, flags);
> 	}
>
> This is going to perform worse when contested due to an extra access to
> the lock.
> I presume it was done this way to avoid suffering another
> branch, with the assumption the trylock is normally going to succeed.

What extra access? If a spinlock fails to take the lock, it keeps
checking the lock until it's released. If anything, this may actually
help performance when contended.

Now, there are some implementations of spinlocks where, on failure to
secure the lock, some magic is done to spin on another bit instead of the
lock itself, to prevent cache bouncing (as locks usually live on the same
cache line as the data they protect). When the owner releases the lock, it
will also have to tell the spinners that the lock is free again.

But this extra trylock is not going to show up outside the noise.

-- Steve