From: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
To: "linux-mm@kvack.org" <linux-mm@kvack.org>
Cc: l.roehrs@profihost.ag,
	Daniel Aberger - Profihost AG <d.aberger@profihost.ag>,
	"n.fahldieck@profihost.ag" <n.fahldieck@profihost.ag>
Subject: debug linux kernel memory management / pressure
Date: Wed, 27 Mar 2019 11:56:13 +0100
Message-ID: <36329138-4a6f-9560-b36c-02dc528a8e12@profihost.ag>

Hello list,

I hope this is the right place to ask. If not, I would be happy if
someone could point me to a better one.

I'm seeing the following behaviour on some of our hosts running a SLES
15 kernel (based on kernel v4.12), but I don't think it's related to
the kernel itself.

At some "random" interval - mostly 3-6 weeks of uptime. Suddenly mem
pressure rises and the linux cache (Cached: /proc/meminfo) drops from
12G to 3G. After that io pressure rises most probably due to low cache.
But at the same time i've MemFree und MemAvailable at 19-22G.

Why does this happen? How can I debug this situation? I would expect
the page/file cache never to be dropped while there is so much free
memory.
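
In case it helps, below is a minimal sketch of the sampling I could
leave running until the next occurrence. It only assumes the usual
/proc/meminfo and /proc/vmstat layout; the vmstat counter names are
those of a 4.x kernel and may differ on other versions.

#!/usr/bin/env python3
# Minimal sampler sketch: log a few /proc/meminfo and /proc/vmstat
# counters once a minute so the moment Cached drops can be lined up
# with reclaim activity. Missing counters are printed as '?'.
import time

MEMINFO_KEYS = ("MemFree", "MemAvailable", "Cached", "Dirty", "Writeback")
VMSTAT_KEYS = ("nr_free_pages", "pgscan_kswapd", "pgscan_direct",
               "pgsteal_kswapd", "pgsteal_direct")

def read_kv(path):
    # Both files are "name[:] value ..." per line; keep the first value.
    out = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            out[parts[0].rstrip(":")] = parts[1]
    return out

while True:
    meminfo = read_kv("/proc/meminfo")
    vmstat = read_kv("/proc/vmstat")
    row = [time.strftime("%F %T")]
    row += ["%s=%skB" % (k, meminfo.get(k, "?")) for k in MEMINFO_KEYS]
    row += ["%s=%s" % (k, vmstat.get(k, "?")) for k in VMSTAT_KEYS]
    print(" ".join(row), flush=True)
    time.sleep(60)

Logged to a file, this should at least show whether it is kswapd or
direct reclaim shrinking the cache at that moment.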

Thanks a lot for your help.

Greets,
Stefan

Not sure whether it's needed, but these are the vm.* sysctl settings
(a rough watermark check based on them follows below the list):
vm.admin_reserve_kbytes = 8192
vm.block_dump = 0
vm.compact_unevictable_allowed = 1
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.dirtytime_expire_seconds = 43200
vm.drop_caches = 0
vm.extfrag_threshold = 500
vm.hugepages_treat_as_movable = 0
vm.hugetlb_shm_group = 0
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 256   256     32      1
vm.max_map_count = 65530
vm.memory_failure_early_kill = 0
vm.memory_failure_recovery = 1
vm.min_free_kbytes = 393216
vm.min_slab_ratio = 5
vm.min_unmapped_ratio = 1
vm.mmap_min_addr = 65536
vm.mmap_rnd_bits = 28
vm.mmap_rnd_compat_bits = 8
vm.nr_hugepages = 0
vm.nr_hugepages_mempolicy = 0
vm.nr_overcommit_hugepages = 0
vm.nr_pdflush_threads = 0
vm.numa_zonelist_order = default
vm.oom_dump_tasks = 1
vm.oom_kill_allocating_task = 0
vm.overcommit_kbytes = 0
vm.overcommit_memory = 0
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.panic_on_oom = 0
vm.percpu_pagelist_fraction = 0
vm.stat_interval = 1
vm.swappiness = 50
vm.user_reserve_kbytes = 131072
vm.vfs_cache_pressure = 100
vm.watermark_scale_factor = 10
vm.zone_reclaim_mode = 0
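
Since vm.min_free_kbytes and vm.watermark_scale_factor above determine
the per-zone watermarks that reclaim works against, this is a rough
sketch (assuming the usual 4.x /proc/zoneinfo layout) of how I could
check how close each zone's free pages get to them when it happens:

#!/usr/bin/env python3
# Rough sketch: print each zone's free page count next to its
# min/low/high watermarks (all values in pages). Assumes the 4.x
# /proc/zoneinfo layout.

stats = {}
zone = None
with open("/proc/zoneinfo") as f:
    for line in f:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "Node":
            # "Node 0, zone Normal" starts a zone block;
            # "Node 0, per-node stats" is skipped.
            if parts[2] == "zone":
                zone = "%s/%s" % (parts[1].rstrip(","), parts[3])
                stats[zone] = {}
            else:
                zone = None
        elif zone and parts[:2] == ["pages", "free"]:
            stats[zone]["free"] = int(parts[2])
        elif zone and parts[0] in ("min", "low", "high"):
            # keep only the first occurrence (the zone summary block)
            stats[zone].setdefault(parts[0], int(parts[1]))

for zone, w in sorted(stats.items()):
    print("node/zone %-10s free=%-8s min=%-8s low=%-8s high=%s (pages)"
          % (zone, w.get("free"), w.get("min"), w.get("low"), w.get("high")))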

