From: Suren Baghdasaryan <surenb@google.com>
Date: Tue, 20 May 2025 10:20:23 -0700
Subject: Re: [PATCH 1/2] mm: slub: allocate slab object extensions non-contiguously
To: Kent Overstreet
Cc: Usama Arif, Andrew Morton, hannes@cmpxchg.org, shakeel.butt@linux.dev,
    vlad.wing@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kernel-team@meta.com
References: <20250520122547.1317050-1-usamaarif642@gmail.com>
    <3divtzm4iapcxwbzxlmfmg3gus75n3rqh43vkjnog456jm2k34@f3rpzvcfk3p6>
    <6d015d91-e74c-48b3-8bc3-480980a74f9b@gmail.com>

On Tue, May 20, 2025 at 9:41 AM Kent Overstreet wrote:
>
> On Tue, May 20, 2025 at 08:20:38AM -0700, Suren Baghdasaryan wrote:
> > On Tue, May 20, 2025 at 7:13 AM Usama Arif wrote:
> > >
> > > On 20/05/2025 14:46, Usama Arif wrote:
> > > >
> > > > On 20/05/2025 14:44, Kent Overstreet wrote:
> > > >> On Tue, May 20, 2025 at 01:25:46PM +0100, Usama Arif wrote:
> > > >>> When memory allocation profiling is running on memory-bound services,
> > > >>> allocations greater than order 0 for slab object extensions can fail,
> > > >>> e.g. the zs_handle zswap slab, which needs 512 objsperslab x 16 bytes
> > > >>> per slabobj_ext (an order-1 allocation). Use kvcalloc to improve the
> > > >>> chances of the allocation being successful.
> > > >>>
> > > >>> Signed-off-by: Usama Arif
> > > >>> Reported-by: Vlad Poenaru
> > > >>> Closes: https://lore.kernel.org/all/17fab2d6-5a74-4573-bcc3-b75951508f0a@gmail.com/
> > > >>> ---
> > > >>>  mm/slub.c | 2 +-
> > > >>>  1 file changed, 1 insertion(+), 1 deletion(-)
> > > >>>
> > > >>> diff --git a/mm/slub.c b/mm/slub.c
> > > >>> index dc9e729e1d26..bf43c403ead2 100644
> > > >>> --- a/mm/slub.c
> > > >>> +++ b/mm/slub.c
> > > >>> @@ -1989,7 +1989,7 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
> > > >>>  	gfp &= ~OBJCGS_CLEAR_MASK;
> > > >>>  	/* Prevent recursive extension vector allocation */
> > > >>>  	gfp |= __GFP_NO_OBJ_EXT;
> > > >>> -	vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp,
> > > >>> +	vec = kvcalloc_node(objects, sizeof(struct slabobj_ext), gfp,
> > > >>> 			    slab_nid(slab));
> > > >>
> > > >> And what's the latency going to be on a vmalloc() allocation when we're
> > > >> low on memory?
> > > >
> > > > Would it not be better to get the allocation slightly slower than to not
> > > > get it at all?
> > >
> > > Also, the majority of them are less than 1 page. kvmalloc of less than 1
> > > page falls back to kmalloc. So vmalloc will only be used for those greater
> > > than 1 page in size, which are in the minority (e.g. zs_handle,
> > > request_sock_subflow_v6, request_sock_subflow_v4...).
> >
> > Not just the majority. For all of these kvmalloc allocations, kmalloc
> > will be tried first and vmalloc will be used only if the former
> > failed: https://elixir.bootlin.com/linux/v6.14.7/source/mm/util.c#L665
> > That's why I think this should not regress the normal case, when the slab
> > has enough space to satisfy the allocation.
>
> And you really should consider just letting the extension vector
> allocation fail if we're under that much memory pressure.

I see your point. One case where we would want to use vmalloc is if the
allocation is sizable (multiple pages), so failing it does not mean a
critical memory pressure level yet. I don't think today's extension
vectors would be large enough to span multiple pages.
That would require a rather large obj_per_slab, and in most cases I think
this change would not affect current behavior: the allocations will be
smaller than PAGE_SIZE, where kvcalloc never tries vmalloc and would fail
just the same. I guess the question is whether we want to fail when the
allocation size is larger than PAGE_SIZE but still below the
PAGE_ALLOC_COSTLY_ORDER threshold. Failing that I think is reasonable, and
I don't think any extension vector will be large enough to reach
PAGE_ALLOC_COSTLY_ORDER. So I'm ok with dropping this part of the patchset.

> Failing allocations is an important mechanism for load shedding,
> otherwise stuff just piles up - it's a big cause of our terrible
> behaviour when we're thrashing.
>
> It's equivalent to bufferbloat in the networking world.
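
To put numbers on the case being discussed, here is a small user-space
sketch (not kernel code) of the extension-vector sizing and of the
kvmalloc-style fallback decision. The 512 objects per slab and 16 bytes per
slabobj_ext come from the zs_handle example in the commit message; a 4 KiB
PAGE_SIZE is assumed, and PAGE_ALLOC_COSTLY_ORDER is taken as 3, its current
mainline value:

/*
 * Illustrative user-space sketch only -- not the kernel implementation.
 */
#include <stdio.h>

#define PAGE_SIZE               4096UL
#define PAGE_ALLOC_COSTLY_ORDER 3

int main(void)
{
        unsigned long objects  = 512;  /* objsperslab for zs_handle */
        unsigned long ext_size = 16;   /* bytes per slabobj_ext with profiling on */
        unsigned long vec_size = objects * ext_size;   /* 8192 bytes */
        unsigned int order = 0;

        /* order of the contiguous allocation kcalloc_node() would need */
        while ((PAGE_SIZE << order) < vec_size)
                order++;

        printf("extension vector: %lu bytes -> order-%u allocation\n",
               vec_size, order);

        /*
         * kvmalloc-style behaviour: kmalloc is always tried first; vmalloc
         * is only a fallback, and only for sizes above one page.
         */
        if (vec_size > PAGE_SIZE)
                printf("kvcalloc_node() may fall back to vmalloc if kmalloc fails\n");
        else
                printf("kvcalloc_node() behaves like kcalloc_node() here\n");

        if (order <= PAGE_ALLOC_COSTLY_ORDER)
                printf("order %u is at or below PAGE_ALLOC_COSTLY_ORDER (%d)\n",
                       order, PAGE_ALLOC_COSTLY_ORDER);

        return 0;
}

With these numbers the vector is 8 KiB, i.e. the order-1 allocation the
commit message refers to: kmalloc is still tried first, and vmalloc only
comes into play when that order-1 request cannot be satisfied.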