* Greedy kswapd reclaim behavior
@ 2015-03-10 3:15 Lock Free
2015-03-10 20:18 ` Lock Free
0 siblings, 1 reply; 4+ messages in thread
From: Lock Free @ 2015-03-10 3:15 UTC (permalink / raw)
To: linux-mm
I'm trying to explain swap-out behavior that is causing unpredictability in our
app. We're running a Red Hat kernel, 2.6.32-431 (yes, older), on a host with
24GB of physical memory and 4GB of swap. Swappiness is set to 10 and
min_free_kbytes is 90112. Over a few hours of filesystem activity, free memory
drops to ~180MB, which is immediately followed by 2-4GB of memory being
reclaimed. We expect free memory to be consumed by the filesystem cache, and
we also expect kswapd to be triggered when min_free_kbytes is breached. What
we did not expect was 2-4GB of memory being reclaimed at once. Our
understanding is that once free memory reaches the high watermark, which is
2 x min_free_kbytes, kswapd's duty cycle finishes. 2-3GB of the reclaim is
usually filesystem cache pages, but the remaining 1-2GB are anonymous pages.
Seeing the anonymous pages swapped out is a problem for us, because they
belong to a process (a JVM) whose performance matters to us. That process's
virtual and resident size is static at 15GB. Why is kswapd so aggressive,
reclaiming well past the high watermark immediately after the FS cache was
flushed? Is this by design?
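As a back-of-the-envelope check on the "2 x min_free_kbytes" rule of thumb
mentioned above (a sketch only; the kernel actually derives per-zone
watermarks rather than a single global one):

```python
min_free_kbytes = 90112  # value from /proc/sys/vm/min_free_kbytes

# Convert the global min watermark to MB, then apply the 2x rule of thumb
# for the high watermark.
min_mb = min_free_kbytes // 1024
high_mb = 2 * min_mb
print(f"min ~{min_mb} MB, 2 x min ~{high_mb} MB")
```

The doubled figure (~176 MB) is in the same ballpark as the ~180 MB of free
memory observed just before the reclaim bursts.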
* Re: Greedy kswapd reclaim behavior
2015-03-10 3:15 Greedy kswapd reclaim behavior Lock Free
@ 2015-03-10 20:18 ` Lock Free
2015-03-11 0:02 ` Krishna Reddy
0 siblings, 1 reply; 4+ messages in thread
From: Lock Free @ 2015-03-10 20:18 UTC (permalink / raw)
To: linux-mm
I should also clarify the true min/low/high watermark thresholds:
min watermark: 11275 pages * 4096 bytes = ~44MB
low watermark: 14093 pages * 4096 bytes = ~55MB
high watermark: 16912 pages * 4096 bytes = ~66MB
Is it expected that kswapd reclaims significantly more pages than the high
watermark?
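The byte figures above follow directly from the page counts (a quick sketch,
assuming the usual 4 KiB x86_64 page size):

```python
PAGE_SIZE = 4096  # bytes; typical x86_64 page size

def pages_to_mb(pages, page_size=PAGE_SIZE):
    """Convert a page-count watermark to MiB, rounded down."""
    return pages * page_size // (1024 * 1024)

# Watermarks quoted in the message above
for name, pages in [("min", 11275), ("low", 14093), ("high", 16912)]:
    print(f"{name} watermark: {pages} pages = ~{pages_to_mb(pages)} MB")
```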
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>
* RE: Greedy kswapd reclaim behavior
2015-03-10 20:18 ` Lock Free
@ 2015-03-11 0:02 ` Krishna Reddy
2015-03-11 5:39 ` Lock Free
0 siblings, 1 reply; 4+ messages in thread
From: Krishna Reddy @ 2015-03-11 0:02 UTC (permalink / raw)
To: Lock Free, linux-mm
> Is it expected that kswapd reclaims significantly more pages than the high
> watermark?
Which zones (DMA, DMA32, Normal, etc.) do you have in the system? You can
check under /proc/zoneinfo.
Each zone has its own kswapd thread. Whenever a zone's free memory drops
below the low watermark, that zone's kswapd thread is woken up and tries to
reclaim memory until the zone's high watermark is reached. During reclaim,
pages are swapped out from the zone's LRU lists and various kernel caches are
shrunk. Swapping out from the LRU lists ensures that the released pages
belong to the zone that kswapd is running for.
Cache shrinking, however, doesn't necessarily release pages of a particular
zone: the memory reclaimed from caches can belong to other zones, and kswapd
doesn't go back to sleep based on the total free memory available.
You need to check the free memory available in the specific zone that kswapd
is running for.
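The per-zone check described above can be scripted. The sketch below parses
/proc/zoneinfo-style output (field names assumed from the 2.6.32-era format;
shown here against an embedded sample rather than a live /proc):

```python
import re

# Sample in the shape of 2.6.32-era /proc/zoneinfo output (assumed format)
SAMPLE = """\
Node 0, zone   Normal
  pages free     163947
        min      11275
        low      14093
        high     16912
"""

def zone_watermarks(zoneinfo_text):
    """Extract free pages and min/low/high watermarks per zone."""
    zones = {}
    current = None
    for line in zoneinfo_text.splitlines():
        m = re.match(r"Node (\d+), zone\s+(\w+)", line)
        if m:
            current = f"node{m.group(1)}/{m.group(2)}"
            zones[current] = {}
            continue
        m = re.match(r"\s*(?:pages )?(free|min|low|high)\s+(\d+)", line)
        if m and current:
            zones[current][m.group(1)] = int(m.group(2))
    return zones

# kswapd is woken when free < low and reclaims until free > high,
# so the interesting question is per-zone headroom above "high":
for name, z in zone_watermarks(SAMPLE).items():
    print(name, "free above high?", z["free"] > z["high"])
```

On a live host you would feed it `open("/proc/zoneinfo").read()` instead of
the sample string.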
-Krishna Reddy
* Re: Greedy kswapd reclaim behavior
2015-03-11 0:02 ` Krishna Reddy
@ 2015-03-11 5:39 ` Lock Free
0 siblings, 0 replies; 4+ messages in thread
From: Lock Free @ 2015-03-11 5:39 UTC (permalink / raw)
To: linux-mm
Thanks for the response, Krishna.
>> Caches shrinking doesn't necessarily release the pages of a particular zone.
I thought kswapd would try to reclaim the FS cache first, before paging and
potentially swapping out anonymous pages...
We have 2 NUMA nodes with the zones below. Every two hours the available free
memory reported by /proc/meminfo drops to ~180MB, and we then see the FS
cache flushed, followed by anonymous pages being reclaimed. The total is
~2-3GB, of which the FS cache accounts for ~2GB. My understanding is that
kswapd should stop reclaiming once free pages are above the high watermark,
yet we see excessive swapping that frees pages well beyond the high watermark
and impacts the performance of a memory-latency-sensitive application.
The /proc/zoneinfo below doesn't correspond to the time the issue occurred;
it's just an example of what our host looks like. Unfortunately we don't have
zoneinfo persisted. We do have buddyinfo persisted; not sure if that would
help.
Node 0, zone   Normal
  pages free     163947
        min      11275
        low      14093
        high     16912
        scanned  0
        spanned  3145728
        present  3102720
Node 1, zone      DMA
  pages free     3935
        min      13
        low      16
        high     19
        scanned  0
        spanned  4095
        present  3840
Node 1, zone    DMA32
  pages free     19524
        min      3017
        low      3771
        high     4525
        scanned  0
        spanned  1044480
        present  830385
Node 1, zone   Normal
  pages free     294707
        min      8221
        low      10276
        high     12331
        scanned  0
        spanned  2293760
        present  2262400
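For what it's worth, the snapshot above (a healthy moment, not the incident)
can be cross-checked for per-zone headroom above the high watermark (a small
sketch; 4 KiB pages assumed):

```python
PAGE = 4096  # bytes per page, assumed

# Free pages and high watermarks copied from the zoneinfo listing above
zones = {
    "node0/Normal": {"free": 163947, "high": 16912},
    "node1/DMA":    {"free": 3935,   "high": 19},
    "node1/DMA32":  {"free": 19524,  "high": 4525},
    "node1/Normal": {"free": 294707, "high": 12331},
}

for name, z in zones.items():
    headroom_mb = (z["free"] - z["high"]) * PAGE // (1024 * 1024)
    print(f"{name}: ~{headroom_mb} MB above the high watermark")
```

In this snapshot every zone sits well above its high watermark, which is why
catching zoneinfo at the moment of the reclaim burst would be the useful data
point.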