From: Yang Shi <yang.shi@linux.alibaba.com>
To: shakeelb@google.com, vbabka@suse.cz, akpm@linux-foundation.org
Cc: yang.shi@linux.alibaba.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: [PATCH 2/2] mm: swap: use smp_mb__after_atomic() to order LRU bit set
Date: Sat, 14 Mar 2020 02:34:36 +0800
Message-ID: <1584124476-76534-2-git-send-email-yang.shi@linux.alibaba.com>
In-Reply-To: <1584124476-76534-1-git-send-email-yang.shi@linux.alibaba.com>

A memory barrier is needed after setting the LRU bit, but smp_mb() is
stronger than necessary. Some architectures, e.g. x86, already imply a
full memory barrier in their atomic RMW operations, so replace smp_mb()
with smp_mb__after_atomic(), which is a no-op on strongly ordered
machines and a full memory barrier on the others. With this change the
vm-scalability cases perform better on x86; I saw a total 6% improvement
from this patch together with the previous inline fix.

The test data (lru-file-readtwice throughput) against v5.6-rc4:

	mainline	w/ inline fix	w/ both (adding this)
	150MB		154MB		159MB

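For reference, here is a minimal two-CPU sketch of the pairing this
change relies on (an editorial illustration, not part of the patch: the
page-flag helpers and the barrier are real kernel APIs, while the two
*_side() wrappers and the helpers they call are hypothetical):

	/* CPU #0: the __pagevec_lru_add_fn() side */
	static void lru_add_side(struct page *page)
	{
		SetPageLRU(page);	/* atomic set_bit() on PG_lru */
		/*
		 * Order the PG_lru store before the PG_mlocked load.
		 * On x86 the locked bitop above already serializes, so
		 * this is a no-op; weakly ordered architectures get a
		 * full smp_mb() here.
		 */
		smp_mb__after_atomic();
		if (!PageMlocked(page))
			move_to_evictable_lru(page);	/* hypothetical */
	}

	/* CPU #1: the clear_page_mlock() side */
	static void mlock_clear_side(struct page *page)
	{
		/* TestClearPageMlocked() fully orders when it succeeds */
		if (TestClearPageMlocked(page) && PageLRU(page))
			isolate_and_rescue(page);	/* hypothetical */
	}

Either CPU #1 observes PG_lru and isolates the page, or CPU #0 observes
PG_mlocked already cleared and files the page on an evictable LRU;
without the barrier both loads could be reordered before the other
CPU's store and the page could be stranded on an unevictable LRU.
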
Fixes: 9c4e6b1a7027 ("mm, mlock, vmscan: no more skipping pagevecs")
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
mm/swap.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index cf39d24..118bac4 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -945,20 +945,20 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 	 * #0: __pagevec_lru_add_fn		#1: clear_page_mlock
 	 *
 	 * SetPageLRU()				TestClearPageMlocked()
-	 * smp_mb() // explicit ordering	// above provides strict
+	 * MB() // explicit ordering		// above provides strict
 	 *					// ordering
 	 * PageMlocked()			PageLRU()
 	 *
 	 *
 	 * if '#1' does not observe setting of PG_lru by '#0' and fails
 	 * isolation, the explicit barrier will make sure that page_evictable
-	 * check will put the page in correct LRU. Without smp_mb(), SetPageLRU
+	 * check will put the page in correct LRU. Without MB(), SetPageLRU
 	 * can be reordered after PageMlocked check and can make '#1' to fail
 	 * the isolation of the page whose Mlocked bit is cleared (#0 is also
 	 * looking at the same page) and the evictable page will be stranded
 	 * in an unevictable LRU.
 	 */
-	smp_mb();
+	smp_mb__after_atomic();
 
 	if (page_evictable(page)) {
 		lru = page_lru(page);
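
(Editorial note, not part of the patch.) The per-arch definitions show
why the new call is a no-op on x86 but a real barrier elsewhere; quoted
approximately from a v5.6-era tree:

	/* arch/x86/include/asm/barrier.h -- atomic RMW operations are
	 * already serializing on x86, so only a compiler barrier: */
	#define __smp_mb__before_atomic()	barrier()
	#define __smp_mb__after_atomic()	barrier()

	/* include/asm-generic/barrier.h -- the fallback for weakly
	 * ordered architectures is a full barrier: */
	#ifndef __smp_mb__after_atomic
	#define __smp_mb__after_atomic()	__smp_mb()
	#endif

So on x86 the call reduces to a compiler-only barrier, while
architectures that do not order their atomics still get a full smp_mb().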
--
1.8.3.1
Thread overview: 11+ messages
2020-03-13 18:34 [PATCH 1/2] mm: swap: make page_evictable() inline Yang Shi
2020-03-13 18:34 ` Yang Shi [this message]
2020-03-16 17:40 ` [PATCH 2/2] mm: swap: use smp_mb__after_atomic() to order LRU bit set Vlastimil Babka
2020-03-16 17:49 ` Yang Shi
2020-03-16 22:18 ` Yang Shi
2020-03-13 19:33 ` [PATCH 1/2] mm: swap: make page_evictable() inline Shakeel Butt
2020-03-13 19:46 ` Yang Shi
2020-03-13 19:50 ` Shakeel Butt
2020-03-13 19:54 ` Yang Shi
2020-03-14 16:01 ` Matthew Wilcox
2020-03-16 16:36 ` Yang Shi