From: Alex Shi <alex.shi@linux.alibaba.com>
To: willy@infradead.org
Cc: tim.c.chen@linux.intel.com,
Konstantin Khlebnikov <koct9i@gmail.com>,
Hugh Dickins <hughd@google.com>, Yu Zhao <yuzhao@google.com>,
Michal Hocko <mhocko@suse.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 0/4] pre sort pages on lruvec in pagevec
Date: Fri, 25 Dec 2020 17:59:46 +0800 [thread overview]
Message-ID: <1608890390-64305-1-git-send-email-alex.shi@linux.alibaba.com> (raw)
In-Reply-To: <20201126155553.GT4327@casper.infradead.org>
This idea was tried on top of the per-memcg lru_lock patchset v18, where it
gave a good result: about 5%~20+% performance gain on lru_lock-contended
benchmarks such as case-lru-file-readtwice.
But on the latest kernel I cannot reproduce that result on my box, nor can
I reproduce Tim's performance gain there. So I don't know whether the idea
is still worthwhile in some scenario; I am sending it out in case anyone is
interested...
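The core of the idea, roughly (a minimal sketch only, not the actual patch;
sort_pages_by_lruvec() and page_lruvec() are hypothetical stand-ins for the
real sorting and lookup code): sort the pagevec by lruvec first, so the
per-lruvec lru_lock is switched once per run of pages instead of being
re-checked for every page.

static void pagevec_lru_move_fn_sorted(struct pagevec *pvec,
		void (*move_fn)(struct page *page, struct lruvec *lruvec))
{
	struct lruvec *lruvec = NULL;
	unsigned long flags = 0;
	int i;

	/* group pages belonging to the same lruvec next to each other */
	sort_pages_by_lruvec(pvec);

	for (i = 0; i < pagevec_count(pvec); i++) {
		struct page *page = pvec->pages[i];
		struct lruvec *new_lruvec = page_lruvec(page);

		/* take/drop the lru_lock only at lruvec boundaries */
		if (new_lruvec != lruvec) {
			if (lruvec)
				spin_unlock_irqrestore(&lruvec->lru_lock, flags);
			lruvec = new_lruvec;
			spin_lock_irqsave(&lruvec->lru_lock, flags);
		}
		move_fn(page, lruvec);
	}
	if (lruvec)
		spin_unlock_irqrestore(&lruvec->lru_lock, flags);
	release_pages(pvec->pages, pagevec_count(pvec));
	pagevec_reinit(pvec);
}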
Alex Shi (4):
mm/swap.c: pre-sort pages in pagevec for pagevec_lru_move_fn
mm/swap.c: bail out early for no memcg and no numa
mm/swap.c: extend the usage to pagevec_lru_add
mm/swap.c: no sort if all pages' lruvecs are the same
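Patches 2 and 4 add fast paths that avoid the sort when it cannot help;
roughly (again a sketch, reusing the hypothetical page_lruvec() helper):

/* Sorting only pays off when pages can map to different lruvecs. */
static bool pagevec_needs_sort(struct pagevec *pvec)
{
	struct lruvec *lruvec;
	int i;

	/* patch 2: only one lruvec can exist without memcg and NUMA */
	if (mem_cgroup_disabled() && num_online_nodes() == 1)
		return false;

	/* patch 4: skip the sort if all pages already share one lruvec */
	lruvec = page_lruvec(pvec->pages[0]);
	for (i = 1; i < pagevec_count(pvec); i++)
		if (page_lruvec(pvec->pages[i]) != lruvec)
			return false;

	return true;
}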
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
mm/swap.c | 118 +++++++++++++++++++++++++++++++++++++++++-------------
1 file changed, 91 insertions(+), 27 deletions(-)
--
2.29.GIT
Thread overview: 23+ messages (newest: 2020-12-25 10:02 UTC)
2020-11-20 8:27 [PATCH next] mm/swap.c: reduce lock contention in lru_cache_add Alex Shi
2020-11-20 23:19 ` Andrew Morton
2020-11-23 4:46 ` Alex Shi
2020-11-25 15:38 ` Vlastimil Babka
2020-11-26 3:12 ` Alex Shi
2020-11-26 11:05 ` Vlastimil Babka
2020-11-26 4:52 ` Yu Zhao
2020-11-26 6:39 ` Alex Shi
2020-11-26 7:24 ` Yu Zhao
2020-11-26 8:09 ` Alex Shi
2020-11-26 11:22 ` Vlastimil Babka
2020-11-26 15:44 ` Vlastimil Babka
2020-11-26 15:55 ` Matthew Wilcox
2020-11-27 3:14 ` Alex Shi
2020-12-01 8:02 ` [PATCH 1/3] mm/swap.c: pre-sort pages in pagevec for pagevec_lru_move_fn Alex Shi
2020-12-01 8:02 ` [PATCH 2/3] mm/swap.c: bail out early for no memcg and no numa Alex Shi
2020-12-01 8:02 ` [PATCH 3/3] mm/swap.c: extend the usage to pagevec_lru_add Alex Shi
2020-12-01 8:10 ` [PATCH 1/3] mm/swap.c: pre-sort pages in pagevec for pagevec_lru_move_fn Michal Hocko
2020-12-01 8:20 ` Alex Shi
2020-12-25 9:59 ` Alex Shi [this message]
2020-12-25 9:59 ` [RFC PATCH 1/4] " Alex Shi
2020-12-25 9:59 ` [RFC PATCH 2/4] mm/swap.c: bail out early for no memcg and no numa Alex Shi
2020-12-25 9:59 ` [RFC PATCH 3/4] mm/swap.c: extend the usage to pagevec_lru_add Alex Shi