linux-mm.kvack.org archive mirror
From: Yang Shi <yang.shi@linux.alibaba.com>
To: Shakeel Butt <shakeelb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linux MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 1/2] mm: swap: make page_evictable() inline
Date: Fri, 13 Mar 2020 12:46:46 -0700	[thread overview]
Message-ID: <520b3295-9fb8-04a7-6215-9bfda4f1a268@linux.alibaba.com> (raw)
In-Reply-To: <CALvZod4W9kkh578Kix7+M9Jkwm1sxx2zvvPG+0Us3R8bEkpEpg@mail.gmail.com>



On 3/13/20 12:33 PM, Shakeel Butt wrote:
> On Fri, Mar 13, 2020 at 11:34 AM Yang Shi <yang.shi@linux.alibaba.com> wrote:
>> When backporting commit 9c4e6b1a7027 ("mm, mlock, vmscan: no more
>> skipping pagevecs") to our 4.9 kernel, our test bench noticed around a
>> 10% drop with a couple of vm-scalability's test cases (lru-file-readonce,
>> lru-file-readtwice and lru-file-mmap-read).  I didn't see that much of a
>> drop on my VM (32c-64g-2nodes).  It might be caused by the test
>> configuration, which is 32c-256g with NUMA disabled, and the tests were
>> run in the root memcg, so they actually stress only one inactive and one
>> active lru.  That doesn't sound very common in modern production
>> environments.
>>
>> That commit did two major changes:
>> 1. Call page_evictable()
>> 2. Use smp_mb to force the PG_lru set visible
>>
>> It looks like they contribute most of the overhead.  page_evictable() is
>> an out-of-line function which pays a full prologue and epilogue, and it
>> used to be called from the page reclaim path only.  However, lru add is
>> a very hot path, so it is better to make it inline.  It also calls
>> page_mapping(), which is not inlined either, but the disassembly shows
>> it doesn't do push and pop operations, and it is not very
>> straightforward to inline it.
>>
>> Other than this, smp_mb() is not necessary for x86 since SetPageLRU is
>> an atomic operation which already enforces a memory barrier; it is
>> replaced with smp_mb__after_atomic() in the following patch.
>>
>> With the two fixes applied, the tests get back around 5% on that test
>> bench and get back to normal on my VM.  Since the test bench
>> configuration is not that common, and I also saw around a 6% improvement
>> on the latest upstream, that sounds good enough IMHO.
>>
>> Below is the test data (lru-file-readtwice throughput) against v5.6-rc4:
>>          mainline        w/ inline fix
>>            150MB            154MB
>>
> What is the test setup for the above experiment? I would like to get a repro.

Just start up a VM with two nodes, then run case-lru-file-readtwice or 
case-lru-file-readonce from vm-scalability in the root memcg or with 
memcg disabled, and take the average throughput (the dd result) from the 
test.  Our test bench uses the script from lkp, but I just ran it 
manually.  The effect should be even more obvious on a single-node VM, 
as my test showed.
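
For what it's worth, the last manual step (averaging the per-dd 
throughput) can be sketched as a small shell helper.  The helper name 
and log layout below are assumptions for illustration, not part of 
vm-scalability itself; it only assumes the dd status lines were saved 
to a log:

```shell
# Average the MB/s figures from dd output lines of the form
# "1048576000 bytes (1.0 GB) copied, 6.9 s, 152 MB/s".
# Hypothetical helper, not part of the vm-scalability suite.
avg_throughput() {
    awk '/MB\/s/ { sum += $(NF-1); n++ } END { if (n) printf "%.1f\n", sum / n }' "$1"
}

# Example: two dd runs saved to a log, averaged to one number.
cat > /tmp/dd.log <<'EOF'
1048576000 bytes (1.0 GB) copied, 6.9 s, 152 MB/s
1048576000 bytes (1.0 GB) copied, 7.1 s, 148 MB/s
EOF
avg_throughput /tmp/dd.log   # prints 150.0
```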

>
>> With this patch the throughput goes up by 2.67%.  The data with
>> smp_mb__after_atomic() is shown in the following patch.
>>
>> Fixes: 9c4e6b1a7027 ("mm, mlock, vmscan: no more skipping pagevecs")
>> Cc: Shakeel Butt <shakeelb@google.com>
>> Cc: Vlastimil Babka <vbabka@suse.cz>
>> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
>> ---
>>   include/linux/swap.h | 24 +++++++++++++++++++++++-
>>   mm/vmscan.c          | 23 -----------------------
>>   2 files changed, 23 insertions(+), 24 deletions(-)
>>
>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>> index 1e99f7a..297eb66 100644
>> --- a/include/linux/swap.h
>> +++ b/include/linux/swap.h
>> @@ -374,7 +374,29 @@ extern unsigned long mem_cgroup_shrink_node(struct mem_cgroup *mem,
>>   #define node_reclaim_mode 0
>>   #endif
>>
>> -extern int page_evictable(struct page *page);
>> +/*
>> + * page_evictable - test whether a page is evictable
>> + * @page: the page to test
>> + *
>> + * Test whether page is evictable--i.e., should be placed on active/inactive
>> + * lists vs unevictable list.
>> + *
>> + * Reasons page might not be evictable:
>> + * (1) page's mapping marked unevictable
>> + * (2) page is part of an mlocked VMA
>> + *
>> + */
>> +static inline int page_evictable(struct page *page)
>> +{
>> +       int ret;
>> +
>> +       /* Prevent address_space of inode and swap cache from being freed */
>> +       rcu_read_lock();
>> +       ret = !mapping_unevictable(page_mapping(page)) && !PageMlocked(page);
>> +       rcu_read_unlock();
>> +       return ret;
>> +}
>> +
>>   extern void check_move_unevictable_pages(struct pagevec *pvec);
>>
>>   extern int kswapd_run(int nid);
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 8763705..855c395 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -4277,29 +4277,6 @@ int node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned int order)
>>   }
>>   #endif
>>
>> -/*
>> - * page_evictable - test whether a page is evictable
>> - * @page: the page to test
>> - *
>> - * Test whether page is evictable--i.e., should be placed on active/inactive
>> - * lists vs unevictable list.
>> - *
>> - * Reasons page might not be evictable:
>> - * (1) page's mapping marked unevictable
>> - * (2) page is part of an mlocked VMA
>> - *
>> - */
>> -int page_evictable(struct page *page)
>> -{
>> -       int ret;
>> -
>> -       /* Prevent address_space of inode and swap cache from being freed */
>> -       rcu_read_lock();
>> -       ret = !mapping_unevictable(page_mapping(page)) && !PageMlocked(page);
>> -       rcu_read_unlock();
>> -       return ret;
>> -}
>> -
>>   /**
>>    * check_move_unevictable_pages - check pages for evictability and move to
>>    * appropriate zone lru list
>> --
>> 1.8.3.1
>>




Thread overview: 11+ messages
2020-03-13 18:34 Yang Shi
2020-03-13 18:34 ` [PATCH 2/2] mm: swap: use smp_mb__after_atomic() to order LRU bit set Yang Shi
2020-03-16 17:40   ` Vlastimil Babka
2020-03-16 17:49     ` Yang Shi
2020-03-16 22:18       ` Yang Shi
2020-03-13 19:33 ` [PATCH 1/2] mm: swap: make page_evictable() inline Shakeel Butt
2020-03-13 19:46   ` Yang Shi [this message]
2020-03-13 19:50     ` Shakeel Butt
2020-03-13 19:54       ` Yang Shi
2020-03-14 16:01 ` Matthew Wilcox
2020-03-16 16:36   ` Yang Shi
