From: Suren Baghdasaryan <surenb@google.com>
Date: Thu, 10 Apr 2025 13:47:23 -0700
Subject: Re: [PATCH RFC v3 4/8] slab: sheaf prefilling for guaranteed allocations
To: Vlastimil Babka
Cc: "Liam R. Howlett", Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo, Uladzislau Rezki, linux-mm@kvack.org, linux-kernel@vger.kernel.org, rcu@vger.kernel.org, maple-tree@lists.infradead.org
In-Reply-To: <20250317-slub-percpu-caches-v3-4-9d9884d8b643@suse.cz>

On Mon, Mar 17, 2025 at 7:33 AM Vlastimil Babka wrote:
>
> Add functions for efficient guaranteed allocations e.g.
in a critical
> section that cannot sleep, when the exact number of allocations is not
> known beforehand, but an upper limit can be calculated.
>
> kmem_cache_prefill_sheaf() returns a sheaf containing at least the given
> number of objects.
>
> kmem_cache_alloc_from_sheaf() will allocate an object from the sheaf
> and is guaranteed not to fail until it is depleted.
>
> kmem_cache_return_sheaf() is for giving the sheaf back to the slab
> allocator after the critical section. This will also attempt to refill
> it to the cache's sheaf capacity for better efficiency of sheaves handling,
> but it's not strictly necessary for that to succeed.
>
> kmem_cache_refill_sheaf() can be used to refill a previously obtained
> sheaf to the requested size. If the current size is sufficient, it does
> nothing. If the requested size exceeds the cache's sheaf_capacity and the
> sheaf's current capacity, the sheaf will be replaced with a new one,
> hence the indirect pointer parameter.
>
> kmem_cache_sheaf_size() can be used to query the current size.
>
> The implementation supports requesting sizes that exceed the cache's
> sheaf_capacity, but it is not efficient - such sheaves are allocated
> fresh in kmem_cache_prefill_sheaf() and flushed and freed immediately by
> kmem_cache_return_sheaf(). kmem_cache_refill_sheaf() might be especially
> inefficient when replacing a sheaf with a new one of a larger capacity.
> It is therefore better to size the cache's sheaf_capacity accordingly.
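The calling contract described above is easier to see in code. Below is a minimal userspace model of the intended sequence, with plain malloc standing in for the slab allocator; the names and layout are illustrative only, not the kernel implementation:

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace model of a prefilled sheaf: a batch of ready-made objects. */
struct sheaf {
	unsigned int size;	/* objects currently available */
	unsigned int capacity;	/* the prefilled upper limit */
	void *objects[];
};

/* Models kmem_cache_prefill_sheaf(): obtain a sheaf holding at least
 * 'count' objects, or NULL if prefilling fails. */
static struct sheaf *sheaf_prefill(size_t objsize, unsigned int count)
{
	struct sheaf *s = malloc(sizeof(*s) + count * sizeof(void *));

	if (!s)
		return NULL;
	s->capacity = count;
	for (s->size = 0; s->size < count; s->size++) {
		s->objects[s->size] = malloc(objsize);
		if (!s->objects[s->size]) {
			/* roll back: a partial prefill is no guarantee */
			while (s->size)
				free(s->objects[--s->size]);
			free(s);
			return NULL;
		}
	}
	return s;
}

/* Models kmem_cache_alloc_from_sheaf(): cannot fail until depleted. */
static void *sheaf_alloc(struct sheaf *s)
{
	return s->size ? s->objects[--s->size] : NULL;
}

/* Models kmem_cache_return_sheaf(): give back the unused objects. */
static void sheaf_return(struct sheaf *s)
{
	while (s->size)
		free(s->objects[--s->size]);
	free(s);
}
```

The point of the contract: once the prefill succeeds for n objects, the next n allocations from the sheaf are guaranteed to succeed even in a context that cannot sleep, because all the allocation work happened up front.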
>
> Signed-off-by: Vlastimil Babka
> Reviewed-by: Suren Baghdasaryan
> ---
>  include/linux/slab.h |  16 ++++
>  mm/slub.c            | 228 ++++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 244 insertions(+)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 0e1b25228c77140d05b5b4433c9d7923de36ec05..dd01b67982e856b1b02f4f0e6fc557726e7f02a8 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -829,6 +829,22 @@ void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t flags,
>                                            int node) __assume_slab_alignment __malloc;
>  #define kmem_cache_alloc_node(...)     alloc_hooks(kmem_cache_alloc_node_noprof(__VA_ARGS__))
>
> +struct slab_sheaf *
> +kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int size);
> +
> +int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp,
> +               struct slab_sheaf **sheafp, unsigned int size);
> +
> +void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp,
> +               struct slab_sheaf *sheaf);
> +
> +void *kmem_cache_alloc_from_sheaf_noprof(struct kmem_cache *cachep, gfp_t gfp,
> +               struct slab_sheaf *sheaf) __assume_slab_alignment __malloc;
> +#define kmem_cache_alloc_from_sheaf(...)               \
> +               alloc_hooks(kmem_cache_alloc_from_sheaf_noprof(__VA_ARGS__))
> +
> +unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf);
> +
>  /*
>   * These macros allow declaring a kmem_buckets * parameter alongside size, which
>   * can be compiled out with CONFIG_SLAB_BUCKETS=n so that a large number of call
> diff --git a/mm/slub.c b/mm/slub.c
> index 83f4395267dccfbc144920baa7d0a85a27fbb1b4..ab3532d5f41045d8268b12ad774541dcd066c4c4 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -443,6 +443,8 @@ struct slab_sheaf {
>         union {
>                 struct rcu_head rcu_head;
>                 struct list_head barn_list;
> +               /* only used for prefilled sheafs */
> +               unsigned int capacity;
>         };
>         struct kmem_cache *cache;
>         unsigned int size;
> @@ -2748,6 +2750,30 @@ static int barn_put_full_sheaf(struct node_barn *barn, struct slab_sheaf *sheaf,
>         return ret;
>  }
>
> +static struct slab_sheaf *barn_get_full_or_empty_sheaf(struct node_barn *barn)
> +{
> +       struct slab_sheaf *sheaf = NULL;
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&barn->lock, flags);
> +
> +       if (barn->nr_full) {
> +               sheaf = list_first_entry(&barn->sheaves_full, struct slab_sheaf,
> +                                        barn_list);
> +               list_del(&sheaf->barn_list);
> +               barn->nr_full--;
> +       } else if (barn->nr_empty) {
> +               sheaf = list_first_entry(&barn->sheaves_empty,
> +                                        struct slab_sheaf, barn_list);
> +               list_del(&sheaf->barn_list);
> +               barn->nr_empty--;
> +       }
> +
> +       spin_unlock_irqrestore(&barn->lock, flags);
> +
> +       return sheaf;
> +}
> +
>  /*
>   * If a full sheaf is available, return it and put the supplied empty one to
>   * barn. We ignore the limit on empty sheaves as the number of sheaves doesn't
> @@ -4844,6 +4870,208 @@ void *kmem_cache_alloc_node_noprof(struct kmem_cache *s, gfp_t gfpflags, int nod
>  }
>  EXPORT_SYMBOL(kmem_cache_alloc_node_noprof);
>
> +/*
> + * returns a sheaf that has at least the requested size
> + * when prefilling is needed, do so with the given gfp flags
> + *
> + * return NULL if sheaf allocation or prefilling failed
> + */
> +struct slab_sheaf *
> +kmem_cache_prefill_sheaf(struct kmem_cache *s, gfp_t gfp, unsigned int size)
> +{
> +       struct slub_percpu_sheaves *pcs;
> +       struct slab_sheaf *sheaf = NULL;
> +
> +       if (unlikely(size > s->sheaf_capacity)) {
> +               sheaf = kzalloc(struct_size(sheaf, objects, size), gfp);
> +               if (!sheaf)
> +                       return NULL;
> +
> +               sheaf->cache = s;
> +               sheaf->capacity = size;
> +
> +               if (!__kmem_cache_alloc_bulk(s, gfp, size,
> +                                            &sheaf->objects[0])) {
> +                       kfree(sheaf);
> +                       return NULL;
> +               }
> +
> +               sheaf->size = size;
> +
> +               return sheaf;
> +       }
> +
> +       localtry_lock(&s->cpu_sheaves->lock);
> +       pcs = this_cpu_ptr(s->cpu_sheaves);
> +
> +       if (pcs->spare) {
> +               sheaf = pcs->spare;
> +               pcs->spare = NULL;
> +       }
> +
> +       if (!sheaf)
> +               sheaf = barn_get_full_or_empty_sheaf(pcs->barn);
> +
> +       localtry_unlock(&s->cpu_sheaves->lock);
> +
> +       if (!sheaf)
> +               sheaf = alloc_empty_sheaf(s, gfp);
> +
> +       if (sheaf && sheaf->size < size) {
> +               if (refill_sheaf(s, sheaf, gfp)) {
> +                       sheaf_flush_unused(s, sheaf);
> +                       free_empty_sheaf(s, sheaf);
> +                       sheaf = NULL;
> +               }
> +       }
> +
> +       if (sheaf)
> +               sheaf->capacity = s->sheaf_capacity;
> +
> +       return sheaf;
> +}
> +
> +/*
> + * Use this to return a sheaf obtained by kmem_cache_prefill_sheaf()
> + *
> + * If the sheaf cannot simply become the percpu spare sheaf, but there's space
> + * for a full sheaf in the barn, we try to refill the sheaf back to the cache's
> + * sheaf_capacity to avoid handling partially full sheaves.
> + *
> + * If the refill fails because gfp is e.g. GFP_NOWAIT, or the barn is full, the
> + * sheaf is instead flushed and freed.
> + */
> +void kmem_cache_return_sheaf(struct kmem_cache *s, gfp_t gfp,
> +                            struct slab_sheaf *sheaf)
> +{
> +       struct slub_percpu_sheaves *pcs;
> +       bool refill = false;
> +       struct node_barn *barn;
> +
> +       if (unlikely(sheaf->capacity != s->sheaf_capacity)) {
> +               sheaf_flush_unused(s, sheaf);
> +               kfree(sheaf);
> +               return;
> +       }
> +
> +       localtry_lock(&s->cpu_sheaves->lock);
> +       pcs = this_cpu_ptr(s->cpu_sheaves);
> +
> +       if (!pcs->spare) {
> +               pcs->spare = sheaf;
> +               sheaf = NULL;
> +       } else if (data_race(pcs->barn->nr_full) < MAX_FULL_SHEAVES) {
> +               barn = pcs->barn;
> +               refill = true;
> +       }
> +
> +       localtry_unlock(&s->cpu_sheaves->lock);
> +
> +       if (!sheaf)
> +               return;
> +
> +       /*
> +        * if the barn is full of full sheaves or we fail to refill the sheaf,
> +        * simply flush and free it
> +        */
> +       if (!refill || refill_sheaf(s, sheaf, gfp)) {
> +               sheaf_flush_unused(s, sheaf);
> +               free_empty_sheaf(s, sheaf);
> +               return;
> +       }
> +
> +       /* we racily determined the sheaf would fit, so now force it */
> +       barn_put_full_sheaf(barn, sheaf, true);
> +}
> +
> +/*
> + * refill a sheaf previously returned by kmem_cache_prefill_sheaf to at least
> + * the given size
> + *
> + * the sheaf might be replaced by a new one when requesting more than
> + * s->sheaf_capacity objects. If such replacement is necessary but the refill
> + * fails (returning -ENOMEM), the existing sheaf is left intact
> + *
> + * In practice we always refill to the full sheaf's capacity.
> + */
> +int kmem_cache_refill_sheaf(struct kmem_cache *s, gfp_t gfp,
> +                           struct slab_sheaf **sheafp, unsigned int size)

nit: Would returning a refilled sheaf be a slightly better API than
passing a pointer to a pointer?

> +{
> +       struct slab_sheaf *sheaf;
> +
> +       /*
> +        * TODO: do we want to support *sheaf == NULL to be equivalent of
> +        * kmem_cache_prefill_sheaf() ?
> +        */
> +       if (!sheafp || !(*sheafp))
> +               return -EINVAL;
> +
> +       sheaf = *sheafp;
> +       if (sheaf->size >= size)
> +               return 0;
> +
> +       if (likely(sheaf->capacity >= size)) {
> +               if (likely(sheaf->capacity == s->sheaf_capacity))
> +                       return refill_sheaf(s, sheaf, gfp);
> +
> +               if (!__kmem_cache_alloc_bulk(s, gfp, sheaf->capacity - sheaf->size,
> +                                            &sheaf->objects[sheaf->size])) {
> +                       return -ENOMEM;
> +               }
> +               sheaf->size = sheaf->capacity;
> +
> +               return 0;
> +       }
> +
> +       /*
> +        * We had a regular sized sheaf and need an oversize one, or we had an
> +        * oversize one already but need a larger one now.
> +        * This should be a very rare path so let's not complicate it.
> +        */
> +       sheaf = kmem_cache_prefill_sheaf(s, gfp, size);
> +       if (!sheaf)
> +               return -ENOMEM;
> +
> +       kmem_cache_return_sheaf(s, gfp, *sheafp);
> +       *sheafp = sheaf;
> +       return 0;
> +}
> +
> +/*
> + * Allocate from a sheaf obtained by kmem_cache_prefill_sheaf()
> + *
> + * Guaranteed not to fail for as many allocations as the requested size.
> + * After the sheaf is emptied, it fails - no fallback to the slab cache itself.
> + *
> + * The gfp parameter is meant only to specify __GFP_ZERO or __GFP_ACCOUNT;
> + * memcg charging is forced over limit if necessary, to avoid failure.
> + */
> +void *
> +kmem_cache_alloc_from_sheaf_noprof(struct kmem_cache *s, gfp_t gfp,
> +                                  struct slab_sheaf *sheaf)
> +{
> +       void *ret = NULL;
> +       bool init;
> +
> +       if (sheaf->size == 0)
> +               goto out;
> +
> +       ret = sheaf->objects[--sheaf->size];
> +
> +       init = slab_want_init_on_alloc(gfp, s);
> +
> +       /* add __GFP_NOFAIL to force successful memcg charging */
> +       slab_post_alloc_hook(s, NULL, gfp | __GFP_NOFAIL, 1, &ret, init, s->object_size);
> +out:
> +       trace_kmem_cache_alloc(_RET_IP_, ret, s, gfp, NUMA_NO_NODE);
> +
> +       return ret;
> +}
> +
> +unsigned int kmem_cache_sheaf_size(struct slab_sheaf *sheaf)
> +{
> +       return sheaf->size;
> +}
>  /*
>   * To avoid unnecessary overhead, we pass through large allocation requests
>   * directly to the page allocator. We use __GFP_COMP, because we will need to
>
> --
> 2.48.1
>
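To make the refill-API nit above concrete: here is a userspace sketch contrasting the patch's double-pointer contract with a pointer-returning alternative. The names are hypothetical and malloc stands in for the slab allocator; a kernel version would presumably distinguish errors with ERR_PTR rather than NULL.

```c
#include <assert.h>
#include <stdlib.h>

struct sheaf {
	unsigned int size;
	unsigned int capacity;
};

/* Patch-style contract: 0 on success, and *sheafp may now point at a
 * replacement sheaf; on failure the original sheaf is left intact. */
static int refill_indirect(struct sheaf **sheafp, unsigned int size)
{
	struct sheaf *bigger;

	if ((*sheafp)->size >= size)
		return 0;	/* current size sufficient, do nothing */
	if ((*sheafp)->capacity >= size) {
		(*sheafp)->size = size;	/* top up within existing capacity */
		return 0;
	}
	bigger = malloc(sizeof(*bigger));
	if (!bigger)
		return -1;	/* ~-ENOMEM; *sheafp is untouched */
	bigger->capacity = bigger->size = size;
	free(*sheafp);
	*sheafp = bigger;	/* caller's pointer updated in place */
	return 0;
}

/* Alternative shape: return the (possibly replaced) sheaf, NULL on
 * failure. The caller must remember to reassign its pointer, but the
 * replacement becomes explicit at the call site. */
static struct sheaf *refill_return(struct sheaf *s, unsigned int size)
{
	return refill_indirect(&s, size) ? NULL : s;
}
```

The trade-off: the indirect form cannot be misused by forgetting to reassign, while the returning form makes the "sheaf may be swapped out" behavior visible where it happens.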