From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Date: Tue, 22 Aug 2023 00:11:56 +0900
Subject: Re: [RFC 1/2] Revert "mm, slub: change percpu partial accounting from objects to pages"
To: Vlastimil Babka
Cc: Christoph Lameter, Pekka Enberg, Joonsoo Kim, David Rientjes,
	Andrew Morton, Roman Gushchin, Feng Tang, "Sang, Oliver", Jay Patel,
	Binder Makin, aneesh.kumar@linux.ibm.com, tsahu@linux.ibm.com,
	piyushs@linux.ibm.com, fengwei.yin@intel.com, ying.huang@intel.com,
	lkp, oe-lkp@lists.linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Jesper Dangaard Brouer
In-Reply-To: <7a94996f-b6f0-c427-eb1e-126bcb97930c@suse.cz>
References: <20230723190906.4082646-1-42.hyeyoo@gmail.com>
	<20230723190906.4082646-2-42.hyeyoo@gmail.com>
	<7a94996f-b6f0-c427-eb1e-126bcb97930c@suse.cz>

[ +Cc Jesper - he might have an opinion on this. ]

On Wed, Jul 26, 2023 at 7:34 PM Vlastimil Babka wrote:
>
> Nit: I would change the subject from "Revert: " as it's not a revert
> exactly. If we can come up with a good subject that's not very long :)

Will do :)

> On 7/23/23 21:09, Hyeonggon Yoo wrote:
> > This is a partial revert of commit b47291ef02b0 ("mm, slub: change percpu
> > partial accounting from objects to pages") and a full revert of commit
> > 662188c3a20e ("mm/slub: Simplify struct slab slabs field definition").
> >
> > While b47291ef02b0 prevents the percpu partial slab list from becoming
> > too long, it assumes that the order of slabs is always oo_order(s->oo).
>
> I think I've considered this possibility, but decided it's not important
> because if the system becomes memory pressured in a way that it can't
> allocate the oo_order() and has to fall back, we no longer care about
> accurate percpu caching, as we're unlikely to have optimum performance
> anyway.

But it does not perform any direct reclamation/compaction to allocate
high-order slabs, so isn't it an easier condition to hit than that?

> > The current approach can surprisingly lower the number of objects cached
> > per cpu when it fails to allocate high-order slabs. Instead of accounting
> > the number of slabs, change it back to accounting objects, but keep
> > the assumption that the slab is always half-full.
>
> That's a nice solution as it avoids converting the sysfs variable, so I
> wouldn't mind going that way even if I doubt the performance benefits in a
> memory-pressured system.
> But maybe there's a concern that if the system is
> really memory pressured and has to fall back to smaller orders, before this
> patch it would keep fewer percpu partial slabs than after this patch, which
> would increase the pressure further and thus be counter-productive?

You mean SLUB needs to stop per-cpu caching when direct or indirect
reclamation is desired?

> > With this change, the number of cached objects per cpu is no longer
> > surprisingly decreased even when it fails to allocate high-order slabs.
> > It still prevents large inaccuracy because it does not account based on
> > the number of free objects when taking slabs.
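To make the fallback effect concrete before the diff, here is a rough
standalone sketch (userspace C with made-up numbers, not kernel code;
the tunable and per-slab object counts below are purely illustrative):

/*
 * Illustration: how a limit converted to slabs once, at oo_order,
 * undershoots the intended object count after order fallback.
 */
#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int cpu_partial = 120;	/* objects the tunable asks for */
	unsigned int oo = 32;		/* objects per slab at oo_order */
	unsigned int fallback = 8;	/* objects per slab after order-0 fallback */

	/* b47291ef02b0 converts the object limit to a slab limit once,
	 * assuming half-full slabs of oo_order: */
	unsigned int cpu_partial_slabs = DIV_ROUND_UP(cpu_partial * 2, oo);

	printf("slab limit:            %u slabs\n", cpu_partial_slabs);
	printf("cached at oo_order:    ~%u objects\n", cpu_partial_slabs * oo / 2);
	printf("cached after fallback: ~%u objects\n", cpu_partial_slabs * fallback / 2);
	return 0;
}

With these hypothetical values the converted limit is 8 slabs, which
caches ~128 objects at oo_order but only ~32 after an order-0 fallback;
accounting objects directly keeps the cache near the tunable in both
cases.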
> > ---
> >  include/linux/slub_def.h |  2 --
> >  mm/slab.h                |  6 ++++++
> >  mm/slub.c                | 31 ++++++++++++-------------------
> >  3 files changed, 18 insertions(+), 21 deletions(-)
> >
> > diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> > index deb90cf4bffb..589ff6a2a23f 100644
> > --- a/include/linux/slub_def.h
> > +++ b/include/linux/slub_def.h
> > @@ -109,8 +109,6 @@ struct kmem_cache {
> >  #ifdef CONFIG_SLUB_CPU_PARTIAL
> >  	/* Number of per cpu partial objects to keep around */
> >  	unsigned int cpu_partial;
> > -	/* Number of per cpu partial slabs to keep around */
> > -	unsigned int cpu_partial_slabs;
> >  #endif
> >  	struct kmem_cache_order_objects oo;
> >
> > diff --git a/mm/slab.h b/mm/slab.h
> > index 799a315695c6..be38a264df16 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -65,7 +65,13 @@ struct slab {
> >  #ifdef CONFIG_SLUB_CPU_PARTIAL
> >  		struct {
> >  			struct slab *next;
> > +#ifdef CONFIG_64BIT
> >  			int slabs;	/* Nr of slabs left */
> > +			int pobjects;	/* Approximate count */
> > +#else
> > +			short int slabs;
> > +			short int pobjects;
> > +#endif
> >  		};
> >  #endif
> >  	};
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index f7940048138c..199d3d03d5b9 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -486,18 +486,7 @@ static inline unsigned int oo_objects(struct kmem_cache_order_objects x)
> >  #ifdef CONFIG_SLUB_CPU_PARTIAL
> >  static void slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects)
> >  {
> > -	unsigned int nr_slabs;
> > -
> >  	s->cpu_partial = nr_objects;
> > -
> > -	/*
> > -	 * We take the number of objects but actually limit the number of
> > -	 * slabs on the per cpu partial list, in order to limit excessive
> > -	 * growth of the list. For simplicity we assume that the slabs will
> > -	 * be half-full.
> > -	 */
> > -	nr_slabs = DIV_ROUND_UP(nr_objects * 2, oo_objects(s->oo));
> > -	s->cpu_partial_slabs = nr_slabs;
> >  }
> >  #else
> >  static inline void
> > @@ -2275,7 +2264,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
> >  	struct slab *slab, *slab2;
> >  	void *object = NULL;
> >  	unsigned long flags;
> > -	unsigned int partial_slabs = 0;
> > +	int objects_taken = 0;
> >
> >  	/*
> >  	 * Racy check. If we mistakenly see no partial slabs then we
> > @@ -2312,11 +2301,11 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
> >  		} else {
> >  			put_cpu_partial(s, slab, 0);
> >  			stat(s, CPU_PARTIAL_NODE);
> > -			partial_slabs++;
> > +			objects_taken += slab->objects / 2;
> >  		}
> >  #ifdef CONFIG_SLUB_CPU_PARTIAL
> >  		if (!kmem_cache_has_cpu_partial(s)
> > -			|| partial_slabs > s->cpu_partial_slabs / 2)
> > +			|| objects_taken > s->cpu_partial / 2)
> >  			break;
> >  #else
> >  		break;
> > @@ -2699,13 +2688,14 @@ static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain)
> >  	struct slab *slab_to_unfreeze = NULL;
> >  	unsigned long flags;
> >  	int slabs = 0;
> > +	int pobjects = 0;
> >
> >  	local_lock_irqsave(&s->cpu_slab->lock, flags);
> >
> >  	oldslab = this_cpu_read(s->cpu_slab->partial);
> >
> >  	if (oldslab) {
> > -		if (drain && oldslab->slabs >= s->cpu_partial_slabs) {
> > +		if (drain && oldslab->pobjects >= s->cpu_partial) {
> >  			/*
> >  			 * Partial array is full. Move the existing set to the
> >  			 * per node partial list. Postpone the actual unfreezing
> > @@ -2714,14 +2704,17 @@ static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain)
> >  			slab_to_unfreeze = oldslab;
> >  			oldslab = NULL;
> >  		} else {
> > +			pobjects = oldslab->pobjects;
> >  			slabs = oldslab->slabs;
> >  		}
> >  	}
> >
> >  	slabs++;
> > +	pobjects += slab->objects / 2;
> >
> >  	slab->slabs = slabs;
> >  	slab->next = oldslab;
> > +	slab->pobjects = pobjects;
> >
> >  	this_cpu_write(s->cpu_slab->partial, slab);
> >
> > @@ -5653,13 +5646,13 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
> >
> >  		slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
> >
> > -		if (slab)
> > +		if (slab) {
> >  			slabs += slab->slabs;
> > +			objects += slab->objects;
> > +		}
> >  	}
> >  #endif
> >
> > -	/* Approximate half-full slabs, see slub_set_cpu_partial() */
> > -	objects = (slabs * oo_objects(s->oo)) / 2;
> >  	len += sysfs_emit_at(buf, len, "%d(%d)", objects, slabs);
> >
> >  #ifdef CONFIG_SLUB_CPU_PARTIAL
> > @@ -5669,7 +5662,7 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
> >  		slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
> >  		if (slab) {
> >  			slabs = READ_ONCE(slab->slabs);
> > -			objects = (slabs * oo_objects(s->oo)) / 2;
> > +			objects = READ_ONCE(slab->pobjects);
> >  			len += sysfs_emit_at(buf, len, " C%d=%d(%d)",
> >  					     cpu, objects, slabs);
> >  		}
>
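
For anyone skimming the interleaved hunks above, a condensed userspace
model of the accounting the patch switches to (field names mirror the
diff, but locking, stats, the drain flag and the actual unfreeze path
are elided - a sketch, not the kernel code):

#include <stddef.h>
#include <stdio.h>

/* Minimal stand-ins for the kernel structures (sketch only). */
struct slab {
	struct slab *next;
	int slabs;	/* cumulative slab count, kept in the list head */
	int pobjects;	/* cumulative free-object estimate (half-full) */
	int objects;	/* objects in this one slab */
};

/* Model of put_cpu_partial(): the percpu partial list counts as "full"
 * once the estimated free objects reach cpu_partial, regardless of how
 * many slabs that takes. Returns the new list head. */
static struct slab *put_cpu_partial_sketch(struct slab *head,
					   struct slab *slab, int cpu_partial)
{
	int slabs = 0, pobjects = 0;

	if (head) {
		if (head->pobjects >= cpu_partial) {
			head = NULL;	/* real code flushes to the node list */
		} else {
			slabs = head->slabs;
			pobjects = head->pobjects;
		}
	}

	slab->slabs = slabs + 1;
	slab->pobjects = pobjects + slab->objects / 2;	/* half-full guess */
	slab->next = head;
	return slab;
}

int main(void)
{
	struct slab big = { .objects = 32 }, small = { .objects = 8 };
	struct slab *head = NULL;

	head = put_cpu_partial_sketch(head, &big, 120);
	head = put_cpu_partial_sketch(head, &small, 120);
	/* 32/2 + 8/2 = 20 estimated free objects cached so far */
	printf("pobjects=%d slabs=%d\n", head->pobjects, head->slabs);
	return 0;
}

The point the thread debates is visible here: after an order fallback
each slab contributes fewer objects, so the list simply grows in slabs
until the object estimate, not a fixed slab count, reaches the limit.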