From mboxrd@z Thu Jan  1 00:00:00 1970
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 23 Apr 2025 10:13:53 -0700
Subject: Re: [PATCH RFC v3 4/8] slab: sheaf prefilling for guaranteed allocations
To: Vlastimil Babka
Cc: "Liam R. Howlett", Christoph Lameter, David Rientjes,
 Roman Gushchin, Harry Yoo, Uladzislau Rezki,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org,
 maple-tree@lists.infradead.org
In-Reply-To: <7c4fe3af-a38b-4d40-9824-2935b46e1ecd@suse.cz>
References: <20250317-slub-percpu-caches-v3-0-9d9884d8b643@suse.cz>
 <20250317-slub-percpu-caches-v3-4-9d9884d8b643@suse.cz>
 <7c4fe3af-a38b-4d40-9824-2935b46e1ecd@suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Howlett" , Christoph Lameter , David Rientjes , Roman Gushchin , Harry Yoo , Uladzislau Rezki , linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Stat-Signature: x6qzm4em1az9mq4xzfb1nnz16k7o55i3 X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: 5C4A640010 X-Rspam-User: X-HE-Tag: 1745428446-71453 X-HE-Meta: U2FsdGVkX18ULo2ti9/2QBQhTSoxychVRttQw7eC58aXtkd0IJJbS13XqHgCtpEB2y9PS7LbXdahQz/s+XNUe2/vuHxXomBUSNxcSi17DHg/jWp+VXfSq/+LF+TzxVr85pVse+gAl0ZEBX68w8Foxiw35vzyWkfE4u5h6Y3Tgdo/Kv7FNXc3vLMswnQiNuPgNwUI9NT40Asv8Eb0hZ08s2KuL62AXOXmB1wMoTpGNNeA6+Aoe9iWvnLh4zYrt1IOzeCggyGvj/iSxswiNXyxaQ93TjWCXXzXLTDRr/9RlH4NmGcbHRRU91SOPicVlSe4omd3Ez3S7bNAGDQJK/STbiALJhP10YiuACFnOYgXOA+8ds+1I4d3f6G3OS1RlESWW11b+BgVFgIGUpfqZdKhamHwRfy+4DCfYeDgc/4+pl0rb8gZEX51x9Rf1XZi965LzZ7Llaar3a5A08t5xHOpkMNeM8KcCcQ9KeI1GYVzr2/xai4hFEZ/Le3QXDK9vOOA11SdHM3dJMi4ohagW+AUvTdcee37uMJCRUi9k7kvLytsQR+PW0NXyV/gs8FP1VOl2scYWZmi/KxOzJwUFY4r+l/AOcwMH+fPy0OGjFvith2rZcKB9vP7X+GatD5dWIxOM2DKX/u4a/Qja0lHU6ALwdwQVjy7UpK2E2D5UFop6cMrqivxSEYDId/6Asi6yfwlASrjdlPTuk1gjD3NIqFrjApM6nbmeOuio3b6yCTMAlVdxIz4NXK1zvai0LuZk5CthaExCrfFzmKrcSC+nAnZ6GeiO4BJ7azg4pem0jktROyZm3FjNlfAQitt4oC6zZMHCSQ9poqFCuWS8PI7cO9cuxLKpM/9+ZT/Gd1Y2If4ykioJ+J4dRpwDCGPkNYdBai76zdJwLalrYc6w9WI4vtxzhmug6KIjeekRXmpCg8uBhrC2SaC+LeFVx1d3L12OHTZ2A4j6OINK+znNSD3O88 HOca5XAK U0nh20MD5BPTfZpEUn58zA36yQCyKGtz5sAQfDjIuAqhZO0xN/wHITpxYYRfG1te4ndRLUkr1aHFbfdHhnsaYyAwMdShQGaqfZcS2CsGBBUUCS1zHJUJOUFrb4hy4lFzinUmuse98UHpi0D5SISVJW6xqZN4Qj5vup1PPFdifMm1bUAIM6nQs65kEXrgkm476Kax+ X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On Wed, Apr 23, 2025 at 6:06=E2=80=AFAM Vlastimil Babka wr= ote: > > On 4/10/25 22:47, Suren Baghdasaryan wrote: > >> +/* > >> + * refill a sheaf previously returned by kmem_cache_prefill_sheaf to = at least > >> + * the given size > >> + * > >> + * the sheaf might be replaced by a new one when requesting more than > >> + * s->sheaf_capacity objects if such replacement is necessary, but th= e refill > >> + * fails (returning -ENOMEM), the existing sheaf is left intact > >> + * > >> + * In practice we always refill to full sheaf's capacity. > >> + */ > >> +int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp, > >> + struct slab_sheaf **sheafp, unsigned int s= ize) > > > > nit: Would returning a refilled sheaf be a slightly better API than > > passing pointer to a pointer? > > I'm not sure it would be simpler to use, since we need to be able to > indicate -ENOMEM which would presumably become NULL, so the user would ha= ve > to store the existing sheaf pointer and not just blindly do "sheaf =3D > refill(sheaf)". Ack. > Or the semantics would have to be that in case of failure > the existing sheaf is returned and caller is left with nothing. Liam, wha= t > do you think? That sounds confusing. Compared to that alternative, I would prefer keeping it the way it is now. > > >> +{ > >> + struct slab_sheaf *sheaf; > >> + > >> + /* > >> + * TODO: do we want to support *sheaf =3D=3D NULL to be equiva= lent of > >> + * kmem_cache_prefill_sheaf() ? 
>
> >> +{
> >> +	struct slab_sheaf *sheaf;
> >> +
> >> +	/*
> >> +	 * TODO: do we want to support *sheaf == NULL to be equivalent of
> >> +	 * kmem_cache_prefill_sheaf() ?
> >> +	 */
> >> +	if (!sheafp || !(*sheafp))
> >> +		return -EINVAL;
> >> +
> >> +	sheaf = *sheafp;
> >> +	if (sheaf->size >= size)
> >> +		return 0;
> >> +
> >> +	if (likely(sheaf->capacity >= size)) {
> >> +		if (likely(sheaf->capacity == s->sheaf_capacity))
> >> +			return refill_sheaf(s, sheaf, gfp);
> >> +
> >> +		if (!__kmem_cache_alloc_bulk(s, gfp, sheaf->capacity - sheaf->size,
> >> +					     &sheaf->objects[sheaf->size])) {
> >> +			return -ENOMEM;
> >> +		}
> >> +		sheaf->size = sheaf->capacity;
> >> +
> >> +		return 0;
> >> +	}
> >> +
> >> +	/*
> >> +	 * We had a regular sized sheaf and need an oversize one, or we had an
> >> +	 * oversize one already but need a larger one now.
> >> +	 * This should be a very rare path so let's not complicate it.
> >> +	 */
> >> +	sheaf = kmem_cache_prefill_sheaf(s, gfp, size);
> >> +	if (!sheaf)
> >> +		return -ENOMEM;
> >> +
> >> +	kmem_cache_return_sheaf(s, gfp, *sheafp);
> >> +	*sheafp = sheaf;
> >> +	return 0;
> >> +}
> >> +
> >> +/*
> >> + * Allocate from a sheaf obtained by kmem_cache_prefill_sheaf()
> >> + *
> >> + * Guaranteed not to fail for as many allocations as the requested size.
> >> + * After the sheaf is emptied, it fails - no fallback to the slab cache itself.
> >> + *
> >> + * The gfp parameter is meant only to specify __GFP_ZERO or __GFP_ACCOUNT;
> >> + * memcg charging is forced over limit if necessary, to avoid failure.
> >> + */
> >> +void *
> >> +kmem_cache_alloc_from_sheaf_noprof(struct kmem_cache *s, gfp_t gfp,
> >> +				   struct slab_sheaf *sheaf)
> >> +{
> >> +	void *ret = NULL;
> >> +	bool init;
> >> +
> >> +	if (sheaf->size == 0)
> >> +		goto out;
> >> +
> >> +	ret = sheaf->objects[--sheaf->size];
> >> +
> >> +	init = slab_want_init_on_alloc(gfp, s);
> >> +
> >> +	/* add __GFP_NOFAIL to force successful memcg charging */
> >> +	slab_post_alloc_hook(s, NULL, gfp | __GFP_NOFAIL, 1, &ret, init, s->object_size);
> >> +out:
> >> +	trace_kmem_cache_alloc(_RET_IP_, ret, s, gfp, NUMA_NO_NODE);
> >> +
> >> +	return ret;
> >> +}
> >> +
> >> +unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf)
> >> +{
> >> +	return sheaf->size;
> >> +}
> >> /*
> >>  * To avoid unnecessary overhead, we pass through large allocation requests
> >>  * directly to the page allocator. We use __GFP_COMP, because we will need to
> >>
> >> --
> >> 2.48.1
> >>
>
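BTW, to double-check my understanding of the intended usage of the
whole API, here is a rough sketch of the expected calling pattern
(assuming the usual alloc_hooks() wrapper kmem_cache_alloc_from_sheaf
for the _noprof variant; error paths trimmed, so treat it as
pseudocode rather than something lifted from the maple tree
conversion):

static int sheaf_usage_sketch(struct kmem_cache *s, unsigned int needed)
{
	struct slab_sheaf *sheaf;
	void *obj;

	/* sleepable context: reserve 'needed' objects, may fail */
	sheaf = kmem_cache_prefill_sheaf(s, GFP_KERNEL, needed);
	if (!sheaf)
		return -ENOMEM;

	/* critical section: guaranteed to succeed up to 'needed' times */
	obj = kmem_cache_alloc_from_sheaf(s, GFP_KERNEL, sheaf);

	/* ... use obj; repeat the allocation as needed ... */

	/* unused reservations go back to the cache */
	kmem_cache_return_sheaf(s, GFP_KERNEL, sheaf);
	return 0;
}

The split matters because the prefill can sleep and fail, while the
sheaf-backed allocations afterwards cannot.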