* high IRQ latency due to pcp draining
@ 2023-10-09 13:25 Lucas Stach
From: Lucas Stach @ 2023-10-09 13:25 UTC
To: Mel Gorman, linux-mm; +Cc: kernel, dri-devel
Hi all,

Recently I've been looking into inconsistent frame times in one of our
graphics workloads, and it seems the culprit lies within the MM
subsystem. During workload execution, some graphics buffers, typically
single-digit megabytes in size, are sporadically freed. The pages are
freed via __folio_batch_release from drm_gem_put_pages, which means
they are put on the pcp list and drained back into the zone via
free_pcppages_bulk.
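
To make the shape of the problem explicit, below is a rough userspace
model of that path. The names and numbers are mine and it only mimics
the structure: frees land on a per-CPU (pcp) list first, and once the
list is full a whole batch is handed back to the zone inside a single
critical section, which in the kernel runs with the zone lock held and
IRQs disabled.

/*
 * Rough userspace model of the free path described above -- not kernel
 * code, just the shape of it.
 */
#include <stdio.h>

#define PCP_HIGH  614		/* high watermark of the normal zone pcp here */
#define PCP_BATCH  63		/* batch size of the normal zone pcp here */

static int pcp_count;		/* pages currently sitting on the pcp list */

/* Stand-in for free_pcppages_bulk(): one long lock/IRQ-off section. */
static void drain_to_zone(int nr)
{
	/* spin_lock_irqsave(&zone->lock, flags) in the real code ... */
	printf("draining %3d pages in one critical section\n", nr);
	pcp_count -= nr;
	/* ... spin_unlock_irqrestore(&zone->lock, flags) */
}

/* Stand-in for freeing one page: it lands on the pcp list first. */
static void free_page_to_pcp(void)
{
	pcp_count++;
	if (pcp_count > PCP_HIGH)
		drain_to_zone(PCP_BATCH);	/* scaled up in practice, see below */
}

int main(void)
{
	int i;

	/* Freeing a ~4MB buffer of 4K pages means ~1024 back-to-back frees. */
	for (i = 0; i < 1024; i++)
		free_page_to_pcp();
	return 0;
}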

As the buffers are quite large, even a single free triggers the
batching optimization added in 3b12e7e97938 ("mm/page_alloc: scale the
number of pages that are batch freed"), since a huge number of pages is
freed without any intervening allocations. The pcp for the normal zone
on this system has a high watermark of 614 pages and a batch size of
63, which means the batching optimization drives the number of pages
freed per batch up to 551. As the cost per page free is around 0.7µs
(including tracing overhead, which isn't negligible on this small ARM
system) and the bulk free is done with the zone spinlock held and IRQs
disabled, this leads to significant IRQ-disabled times of upwards of
250µs, even on the production system without tracing. Those long
IRQ-disabled sections interfere with the workload running on the
system.
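
For illustration, below is a standalone model of how that batch scaling
arrives at 551 pages, based on my reading of 3b12e7e97938 and plugged
with this system's high/batch values. The function name and details are
mine, not a copy of the kernel source.

/*
 * Userspace model of the free-batch scaling heuristic: consecutive
 * drains without intervening allocations double the batch, clamped so
 * that roughly one batch worth of pages stays on the pcp.
 */
#include <stdio.h>

static int free_factor;		/* grows while frees keep arriving */

static int scaled_free_batch(int high, int batch)
{
	int max_nr_free = high - batch;		/* 614 - 63 = 551 here */
	int nr = batch << free_factor;		/* 63, 126, 252, 504, ... */

	if (nr < max_nr_free)
		free_factor++;
	if (nr > max_nr_free)
		nr = max_nr_free;
	return nr;
}

int main(void)
{
	int i;

	/* One large buffer free causes many back-to-back drains, so the
	 * batch ramps up: 63, 126, 252, 504, 551, 551, ... */
	for (i = 0; i < 6; i++)
		printf("drain %d frees up to %d pages\n",
		       i, scaled_free_batch(614, 63));

	/* At ~0.7µs per page (with tracing), a 551-page drain keeps IRQs
	 * off for a few hundred microseconds, in line with the >250µs
	 * observed without tracing. */
	return 0;
}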

As the larger free batching was added on purpose, I don't want to rip
it out altogether. On the other hand, there are no tunables aside from
the pcp high watermark, and changing that may have other unintended
side effects.

I'm hoping to get some ideas on how to proceed here. Should we consider
a more conservative maximum number of pages for the batching
optimization? Should another tunable be added? Or are there other
clever ways to minimize those critical sections?

Regards,
Lucas

* Re: high IRQ latency due to pcp draining
@ 2023-10-11 15:07 Mel Gorman
From: Mel Gorman @ 2023-10-11 15:07 UTC
To: Lucas Stach; +Cc: linux-mm, kernel, dri-devel
On Mon, Oct 09, 2023 at 03:25:54PM +0200, Lucas Stach wrote:
> Hi all,
>
Hi Lucas,
> Recently I've been looking into inconsistent frame times in one of our
> graphics workloads, and it seems the culprit lies within the MM
> subsystem. During workload execution, some graphics buffers, typically
> single-digit megabytes in size, are sporadically freed. The pages are
> freed via __folio_batch_release from drm_gem_put_pages, which means
> they are put on the pcp list and drained back into the zone via
> free_pcppages_bulk.
>
> As the buffers are quite large, even a single free triggers the
> batching optimization added in 3b12e7e97938 ("mm/page_alloc: scale the
> number of pages that are batch freed"), since a huge number of pages is
> freed without any intervening allocations. The pcp for the normal zone
> on this system has a high watermark of 614 pages and a batch size of
> 63, which means the batching optimization drives the number of pages
> freed per batch up to 551. As the cost per page free is around 0.7µs
> (including tracing overhead, which isn't negligible on this small ARM
> system) and the bulk free is done with the zone spinlock held and IRQs
> disabled, this leads to significant IRQ-disabled times of upwards of
> 250µs, even on the production system without tracing. Those long
> IRQ-disabled sections interfere with the workload running on the
> system.
>
Ouch.
> As the larger free batching was added on purpose, I don't want to rip
> it out altogether. On the other hand, there are no tunables aside from
> the pcp high watermark, and changing that may have other unintended
> side effects.
>
> I'm hoping to get some ideas on how to proceed here. Should we consider
> a more conservative maximum number of pages for the batching
> optimization?

Picking a different default on its own will simply change which
workloads get punished, and for different reasons. While it would
address your problem, such a change would be hard to get merged without
it eventually being identified as a problem via bisection for some
other workload.

> Should another tunable be added? Or are there other clever ways to
> minimize those critical sections?
>
I think the first option is to look at the series
https://lore.kernel.org/all/20230920061856.257597-1-ying.huang@intel.com/.
I delayed responding to your mail until I had a chance to look at it
properly. Specifically, I think
https://lore.kernel.org/all/20230920061856.257597-5-ying.huang@intel.com/
is of interest to you, and the comment I added in response,
https://lore.kernel.org/all/20231011125219.kuoluyuwxzva5q5w@techsingularity.net/,
had you in mind.

It may be the case that the patch on its own helps with your problem,
or at least opens up the possibility of making the behaviour runtime
configurable or a Kconfig option. Which is suitable depends on how and
where your kernel is being deployed. The patches in question have not
hit mainline yet and are only in mm-unstable, where they are not
guaranteed to be merged, but there is a strong chance they will be. If
the series does not directly help you, then any solution should at
least take it into account.
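
To make "configurable" a bit more concrete, the shape I have in mind is
something like the sketch below. It is purely illustrative userspace
code, the names are made up, and it is not what that series or mainline
implements.

/*
 * Illustrative only: a hypothetical cap on the drain batch.  Bounding
 * the batch bounds the worst-case time spent with the zone lock held
 * and IRQs disabled.
 */
#include <limits.h>
#include <stdio.h>

/* Imagine this coming from Kconfig or a runtime knob; 0 means "no cap". */
#ifndef PCP_MAX_DRAIN_BATCH
#define PCP_MAX_DRAIN_BATCH 0
#endif

static int capped_free_batch(int scaled_batch)
{
	int cap = PCP_MAX_DRAIN_BATCH ? PCP_MAX_DRAIN_BATCH : INT_MAX;

	return scaled_batch < cap ? scaled_batch : cap;
}

int main(void)
{
	/* With no cap configured the 551-page batch goes through as-is;
	 * building with -DPCP_MAX_DRAIN_BATCH=128 would clamp it to 128. */
	printf("551 -> %d\n", capped_free_batch(551));
	return 0;
}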

Even if that patch does not directly help you, patch 3 might help you
as a side effect. While there is no direct correlation between the
speed of a CPU and the size of its cache, there probably is an indirect
one in a lot of cases. If your target CPU has a small cache, your
problem may be incidentally fixed by patches 2+3 from that series.

--
Mel Gorman
SUSE Labs