linux-mm.kvack.org archive mirror
* debug linux kernel memory management / pressure
@ 2019-03-27 10:56 Stefan Priebe - Profihost AG
  2019-03-29  9:41 ` Stefan Priebe - Profihost AG
  0 siblings, 1 reply; 4+ messages in thread
From: Stefan Priebe - Profihost AG @ 2019-03-27 10:56 UTC (permalink / raw)
  To: linux-mm; +Cc: l.roehrs, Daniel Aberger - Profihost AG, n.fahldieck

Hello list,

I hope this is the right place to ask. If not, I would be happy if you
could point me somewhere else.

I'm seeing the following behaviour on some of our hosts running a SLES
15 kernel (v4.12 as its base), but I don't think it is specific to this
kernel.

At some "random" interval - mostly 3-6 weeks of uptime. Suddenly mem
pressure rises and the linux cache (Cached: /proc/meminfo) drops from
12G to 3G. After that io pressure rises most probably due to low cache.
But at the same time i've MemFree und MemAvailable at 19-22G.

Why does this happen? How can I debug this situation? I would expect
that the page / file cache never shrinks like this while there is so
much free memory.
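In case it helps, this is roughly how I intend to watch for the event: a
minimal sketch that samples a few /proc/meminfo fields and warns when
Cached collapses between samples. The 60-second interval and the "drops
by more than half" threshold are just values I picked, nothing official:

#!/usr/bin/env python3
# Sample Cached/MemFree/MemAvailable from /proc/meminfo once a minute
# and warn when Cached shrinks by more than half between samples.
import time

FIELDS = ("Cached", "MemFree", "MemAvailable")

def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            if key in FIELDS:
                info[key] = int(value.strip().split()[0])  # value in kB
    return info

prev = meminfo()
while True:
    time.sleep(60)
    cur = meminfo()
    print(time.strftime("%F %T"), cur)
    if cur["Cached"] < prev["Cached"] // 2:
        print("WARNING: Cached dropped from %d kB to %d kB"
              % (prev["Cached"], cur["Cached"]))
    prev = cur

If there is a better counter or tracepoint to watch instead, please let
me know.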

Thanks a lot for your help.

Greets,
Stefan

Not sure whether this is needed, but these are the vm.* sysctl settings:
vm.admin_reserve_kbytes = 8192
vm.block_dump = 0
vm.compact_unevictable_allowed = 1
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.dirtytime_expire_seconds = 43200
vm.drop_caches = 0
vm.extfrag_threshold = 500
vm.hugepages_treat_as_movable = 0
vm.hugetlb_shm_group = 0
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 256   256     32      1
vm.max_map_count = 65530
vm.memory_failure_early_kill = 0
vm.memory_failure_recovery = 1
vm.min_free_kbytes = 393216
vm.min_slab_ratio = 5
vm.min_unmapped_ratio = 1
vm.mmap_min_addr = 65536
vm.mmap_rnd_bits = 28
vm.mmap_rnd_compat_bits = 8
vm.nr_hugepages = 0
vm.nr_hugepages_mempolicy = 0
vm.nr_overcommit_hugepages = 0
vm.nr_pdflush_threads = 0
vm.numa_zonelist_order = default
vm.oom_dump_tasks = 1
vm.oom_kill_allocating_task = 0
vm.overcommit_kbytes = 0
vm.overcommit_memory = 0
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.panic_on_oom = 0
vm.percpu_pagelist_fraction = 0
vm.stat_interval = 1
vm.swappiness = 50
vm.user_reserve_kbytes = 131072
vm.vfs_cache_pressure = 100
vm.watermark_scale_factor = 10
vm.zone_reclaim_mode = 0
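
I could also log reclaim activity from /proc/vmstat around the event,
along these lines. The counter names below are what I see on this
4.12-based kernel; they may differ on other versions, and the selection
is only my guess at what is relevant:

#!/usr/bin/env python3
# Print per-minute deltas of a few reclaim-related /proc/vmstat counters
# to see whether kswapd or direct reclaim is scanning/stealing pages.
import time

COUNTERS = ("pgscan_kswapd", "pgscan_direct",
            "pgsteal_kswapd", "pgsteal_direct")

def vmstat():
    stats = {}
    with open("/proc/vmstat") as f:
        for line in f:
            name, value = line.split()
            if name in COUNTERS:
                stats[name] = int(value)
    return stats

prev = vmstat()
while True:
    time.sleep(60)
    cur = vmstat()
    deltas = {name: cur[name] - prev.get(name, 0) for name in cur}
    print(time.strftime("%F %T"), deltas)
    prev = cur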




Thread overview: 4+ messages
2019-03-27 10:56 debug linux kernel memory management / pressure Stefan Priebe - Profihost AG
2019-03-29  9:41 ` Stefan Priebe - Profihost AG
2019-04-05 10:37   ` Vlastimil Babka
2019-04-23  6:42     ` Stefan Priebe - Profihost AG
