From mboxrd@z Thu Jan  1 00:00:00 1970
From: Suren Baghdasaryan <surenb@google.com>
Date: Tue, 20 May 2025 08:20:38 -0700
Subject: Re: [PATCH 1/2] mm: slub: allocate slab object extensions non-contiguously
To: Usama Arif
Cc: Kent Overstreet, Andrew Morton, hannes@cmpxchg.org, shakeel.butt@linux.dev, vlad.wing@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@meta.com
List-ID: <linux-mm.kvack.org>
On Tue, May 20, 2025 at 7:13 AM Usama Arif wrote:
>
>
> On 20/05/2025 14:46, Usama Arif wrote:
> >
> >
> > On 20/05/2025 14:44, Kent Overstreet wrote:
> >> On Tue, May 20, 2025 at 01:25:46PM +0100, Usama Arif wrote:
> >>> When memory allocation profiling is running on memory-bound services,
> >>> allocations greater than order 0 for slab object extensions can fail,
> >>> e.g. for the zs_handle zswap slab, which is 512 objsperslab x 16 bytes
> >>> per slabobj_ext (an order-1 allocation). Use kvcalloc to improve the
> >>> chances of the allocation being successful.
> >>>
> >>> Signed-off-by: Usama Arif
> >>> Reported-by: Vlad Poenaru
> >>> Closes: https://lore.kernel.org/all/17fab2d6-5a74-4573-bcc3-b75951508f0a@gmail.com/
> >>> ---
> >>>  mm/slub.c | 2 +-
> >>>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>>
> >>> diff --git a/mm/slub.c b/mm/slub.c
> >>> index dc9e729e1d26..bf43c403ead2 100644
> >>> --- a/mm/slub.c
> >>> +++ b/mm/slub.c
> >>> @@ -1989,7 +1989,7 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
> >>>  	gfp &= ~OBJCGS_CLEAR_MASK;
> >>>  	/* Prevent recursive extension vector allocation */
> >>>  	gfp |= __GFP_NO_OBJ_EXT;
> >>> -	vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp,
> >>> +	vec = kvcalloc_node(objects, sizeof(struct slabobj_ext), gfp,
> >>>  			   slab_nid(slab));
> >>
> >> And what's the latency going to be on a vmalloc() allocation when we're
> >> low on memory?
> >
> > Would it not be better to get the allocation slightly slower than to not
> > get it at all?
>
> Also, a majority of them are less than 1 page, and kvmalloc of less than
> 1 page falls back to kmalloc. So vmalloc will only be used for those
> greater than 1 page in size, which are in the minority (e.g. zs_handle,
> request_sock_subflow_v6, request_sock_subflow_v4, ...).

Not just the majority: for all of these kvmalloc allocations, kmalloc is
tried first and vmalloc is used only if the former fails:
https://elixir.bootlin.com/linux/v6.14.7/source/mm/util.c#L665
That's why I think this change should not regress the normal case, where
the slab allocator has enough space to satisfy the allocation.