From: Hugh Dickins <hughd@google.com>
To: Alex Shi <alex.shi@linux.alibaba.com>
Cc: akpm@linux-foundation.org, mgorman@techsingularity.net,
tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru,
daniel.m.jordan@oracle.com, yang.shi@linux.alibaba.com,
willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
cgroups@vger.kernel.org, shakeelb@google.com,
iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com
Subject: Re: [PATCH v12 00/16] per memcg lru lock
Date: Thu, 11 Jun 2020 15:26:14 -0700 (PDT)
Message-ID: <alpine.LSU.2.11.2006111510220.10801@eggly.anvils>
In-Reply-To: <1591856209-166869-1-git-send-email-alex.shi@linux.alibaba.com>
On Thu, 11 Jun 2020, Alex Shi wrote:
> This is a new version which is based on v5.8,
No, not even v5.8-rc1 has come out yet. v12 applied cleanly on
2dca74a40e1e7ff45079d85fc507769383039b9d but I didn't check the build.
> the only change is in mm/compaction.c,
> since mm-compaction-avoid-vm_bug_onpageslab-in-page_mapcount.patch was
> removed.
>
> Johannes Weiner has suggested:
> "So here is a crazy idea that may be worth exploring:
>
> Right now, pgdat->lru_lock protects both PageLRU *and* the lruvec's
> linked list.
>
> Can we make PageLRU atomic and use it to stabilize the lru_lock
> instead, and then use the lru_lock only to serialize list operations?
> ..."
It was well worth exploring, and may help in a few cases;
Johannes's memcg swap simplifications have helped a lot more;
but crashes under rotate_reclaimable_page() show that this series
still does not give enough protection from mem_cgroup_move_account().
I'll send a couple of fixes to compaction bugs in reply to this:
with those in, compaction appears to be solid.
Hugh
>
> With the new memcg charge path and this solution, we can isolate
> LRU pages and visit them exclusively in compaction, page migration, reclaim,
> memcg move_account, huge page split and other scenarios, while keeping pages'
> memcg stable. It then becomes possible to change per-node lru locking to
> per-memcg lru locking. As for the pagevec_lru_move_fn funcs, it is safe to let
> pages remain on the lru list; the lru lock guards them for list integrity.
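
A minimal sketch of that isolation pattern, assuming the TestClearPageLRU and
lock_page_lruvec_irqsave/unlock_page_lruvec_irqrestore helpers this series
introduces (a sketch of the intended pattern, not an actual hunk from the
patches):

static bool isolate_lru_page_sketch(struct page *page, struct list_head *dst)
{
	struct lruvec *lruvec;
	unsigned long flags;

	/* only the one task that clears PageLRU may isolate this page */
	if (!TestClearPageLRU(page))
		return false;

	get_page(page);
	/*
	 * While PageLRU stays cleared by us, the page's memcg (and hence
	 * its lruvec) is intended to remain stable for the list operation.
	 */
	lruvec = lock_page_lruvec_irqsave(page, &flags);
	del_page_from_lru_list(page, lruvec, page_lru(page));
	unlock_page_lruvec_irqrestore(lruvec, flags);

	list_add(&page->lru, dst);
	return true;
}
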
>
> The patchset includes 3 parts:
> 1, some code cleanup and minor optimization as preparation.
> 2, use TestClearPageLRU as page isolation's precondition.
> 3, replace the per node lru_lock with a per memcg per node lru_lock.
>
> The 3rd part moves the per-node lru_lock into the lruvec, thus bringing a
> lru_lock for each memcg on each node. So on a large machine, memcgs don't
> have to suffer from per-node pgdat->lru_lock contention; each can go
> fast with its own lru_lock.
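
And a rough sketch of how a caller walking pages from different memcgs would
use the relock helper from patch 13, switching locks only when the page's
lruvec actually changes (again assuming the interfaces as named in the
series):

static void move_pages_sketch(struct list_head *list)
{
	struct lruvec *lruvec = NULL;
	struct page *page;

	list_for_each_entry(page, list, lru) {
		/* takes, or switches to, the lru_lock of this page's lruvec */
		lruvec = relock_page_lruvec_irq(page, lruvec);

		/* ... lru list operations on the page go here ... */
	}
	if (lruvec)
		unlock_page_lruvec_irq(lruvec);
}
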
>
> Following Daniel Jordan's suggestion, I have run 208 'dd' tasks in 104
> containers on a 2-socket * 26-core * HT box with a modified case:
> https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice
>
> With this patchset, the readtwice performance increased by about 80%
> in concurrent containers.
>
> Thanks to Hugh Dickins and Konstantin Khlebnikov, who both brought up this
> idea 8 years ago, and to others who gave comments as well: Daniel Jordan,
> Mel Gorman, Shakeel Butt, Matthew Wilcox etc.
>
> Thanks for the testing support from Intel 0day and from Rong Chen, Fengguang
> Wu, and Yun Wang. Hugh Dickins also shared his kbuild-swap case. Thanks!
>
>
> Alex Shi (14):
> mm/vmscan: remove unnecessary lruvec adding
> mm/page_idle: no unlikely double check for idle page counting
> mm/compaction: correct the comments of compact_defer_shift
> mm/compaction: rename compact_deferred as compact_should_defer
> mm/thp: move lru_add_page_tail func to huge_memory.c
> mm/thp: clean up lru_add_page_tail
> mm/thp: narrow lru locking
> mm/memcg: add debug checking in lock_page_memcg
> mm/lru: introduce TestClearPageLRU
> mm/compaction: do page isolation first in compaction
> mm/mlock: reorder isolation sequence during munlock
> mm/lru: replace pgdat lru_lock with lruvec lock
> mm/lru: introduce the relock_page_lruvec function
> mm/pgdat: remove pgdat lru_lock
>
> Hugh Dickins (2):
> mm/vmscan: use relock for move_pages_to_lru
> mm/lru: revise the comments of lru_lock
>
> Documentation/admin-guide/cgroup-v1/memcg_test.rst | 15 +-
> Documentation/admin-guide/cgroup-v1/memory.rst | 8 +-
> Documentation/trace/events-kmem.rst | 2 +-
> Documentation/vm/unevictable-lru.rst | 22 +--
> include/linux/compaction.h | 4 +-
> include/linux/memcontrol.h | 92 +++++++++++
> include/linux/mm_types.h | 2 +-
> include/linux/mmzone.h | 6 +-
> include/linux/page-flags.h | 1 +
> include/linux/swap.h | 4 +-
> include/trace/events/compaction.h | 2 +-
> mm/compaction.c | 96 +++++++-----
> mm/filemap.c | 4 +-
> mm/huge_memory.c | 51 +++++--
> mm/memcontrol.c | 87 ++++++++++-
> mm/mlock.c | 93 ++++++------
> mm/mmzone.c | 1 +
> mm/page_alloc.c | 1 -
> mm/page_idle.c | 8 -
> mm/rmap.c | 2 +-
> mm/swap.c | 112 ++++----------
> mm/swap_state.c | 6 +-
> mm/vmscan.c | 168 +++++++++++----------
> mm/workingset.c | 4 +-
> 24 files changed, 481 insertions(+), 310 deletions(-)
>
> --
> 1.8.3.1