From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Jay Patel <jaypatel@linux.ibm.com>
Cc: linux-mm@kvack.org, cl@linux.com, penberg@kernel.org,
rientjes@google.com, iamjoonsoo.kim@lge.com,
akpm@linux-foundation.org, vbabka@suse.cz,
aneesh.kumar@linux.ibm.com, tsahu@linux.ibm.com,
piyushs@linux.ibm.com
Subject: Re: [RFC PATCH ] mm/slub: Reducing slub memory wastage
Date: Mon, 12 Jun 2023 19:17:37 +0900
Message-ID: <ZIbwwUi4yorj8nJL@debian-BULLSEYE-live-builder-AMD64>
In-Reply-To: <20230612085535.275206-1-jaypatel@linux.ibm.com>
On Mon, Jun 12, 2023 at 02:25:35PM +0530, Jay Patel wrote:
> 3) If the minimum order is less than the slub_max_order, iterate through
> a loop from minimum order to slub_max_order and check if the condition
> (rem <= slab_size / fract_leftover) holds true. Here, slab_size is
> calculated as (PAGE_SIZE << order), rem is (slab_size % object_size),
> and fract_leftover can have values of 16, 8, or 4. If the condition is
> true, select that order for the slab.
>
> However, in point 3, when calculating the fraction left over, the
> threshold can span a large range of values (from 256 bytes to 1 KB on a
> 4K page size, and from 4 KB to 16 KB on a 64K page size, at order 0 and
> growing with higher orders) when compared to the remainder (rem). This
> can lead to the selection of an order that results in more memory
> wastage. To mitigate such wastage, we have modified point 3 as follows:
> instead of selecting the first order that satisfies the condition (rem
> <= slab_size / fract_leftover), we iterate through the loop from
> min_order to slub_max_order and choose the order that minimizes memory
> wastage for the slab.
Hi Jay,
If I understand correctly, SLUB currently chooses the first order that
does not waste too much memory, but that order can be sub-optimal
because a higher order within the allowed range may waste less. Right?
Hmm, the new code might choose a larger order than before, as SLUB
previously tolerated more wastage instead of increasing the order.
BUT the maximum slab order is still bounded by slub_max_order,
so that looks fine to me. If using higher orders to reduce
fragmentation becomes a problem, slub_max_order can be changed.
<...snip...>
> I conducted tests on systems with 160 CPUs and 16 CPUs, using 4K and
> 64K page sizes. Through these tests, it was observed that the patch
> successfully reduces the wastage of slab memory without any noticeable
> performance degradation in the hackbench test report. However, it should
> be noted that the patch also increases the total number of objects,
> leading to an overall increase in total slab memory usage.
<...snip...>
Then my question is: why is this a useful change if total memory
usage increases?
> Test results are as follows:
> 3) On 16 CPUs with 4K Page size
>
> +-----------------+----------------+------------------+
> |           Total wastage in slub memory              |
> +-----------------+----------------+------------------+
> |                 | After Boot     | After Hackbench  |
> | Normal          | 666 KB         | 902 KB           |
> | With Patch      | 533 KB         | 694 KB           |
> | Wastage reduce  | ~20%           | ~23%             |
> +-----------------+----------------+------------------+
>
> +-----------------+----------------+----------------+
> | Total slub memory |
> +-----------------+----------------+----------------+
> | | After Boot | After Hackbench|
> | Normal | 82360 | 122532 |
> | With Patch | 87372 | 129180 |
> | Memory increase | ~6% | ~5% |
> +-----------------+----------------+----------------+
>
How should we interpret this data? Does it mean that reducing
memory wastage by increasing the slab order might not reduce
total SLUB memory usage?
> hackbench-process-sockets
> +-------+----+---------+---------+-----------+
> |       |Grps| Normal  | Patched | Change    |
> +-------+----+---------+---------+-----------+
> | Amean | 1 | 1.4983 | 1.4867 | ( 0.78%) |
> | Amean | 4 | 5.6613 | 5.6793 | ( -0.32%) |
> | Amean | 7 | 9.9813 | 9.9873 | ( -0.06%) |
> | Amean | 12 | 17.6963 | 17.8527 | ( -0.88%) |
> | Amean | 21 | 31.2017 | 31.2060 | ( -0.01%) |
> | Amean | 30 | 44.0297 | 44.1750 | ( -0.33%) |
> | Amean | 48 | 70.2073 | 69.6210 | ( 0.84%) |
> | Amean | 64 | 92.3257 | 93.7410 | ( -1.53%) |
> +-------+----+---------+---------+-----------+
--
Hyeonggon Yoo
Undergraduate | Chungnam National University
Dept. Computer Science & Engineering
Thread overview: 5+ messages
2023-06-12 8:55 Jay Patel
2023-06-12 10:17 ` Hyeonggon Yoo [this message]
2023-06-13 12:55 ` Jay Patel
2023-06-19 3:25 ` Hyeonggon Yoo
2023-06-28 10:27 ` Jay Patel