From: Xishi Qiu <qiuxishi@huawei.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: Yisheng Xie <xieyisheng1@huawei.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
zhongjiang <zhongjiang@huawei.com>
Subject: Re: [Question] Mlocked count will not be decreased
Date: Wed, 24 May 2017 18:49:41 +0800
Message-ID: <59256545.9010608@huawei.com>
In-Reply-To: <93f1b063-6288-d109-117d-d3c1cf152a8e@suse.cz>
On 2017/5/24 18:32, Vlastimil Babka wrote:
> On 05/24/2017 10:32 AM, Yisheng Xie wrote:
>> Hi Kefeng,
>> Could you please try this patch?
>>
>> Thanks
>> Yisheng Xie
>> -------------
>> From a70ae975756e8e97a28d49117ab25684da631689 Mon Sep 17 00:00:00 2001
>> From: Yisheng Xie <xieyisheng1@huawei.com>
>> Date: Wed, 24 May 2017 16:01:24 +0800
>> Subject: [PATCH] mlock: fix mlock count not decreasing in race condition
>>
>> Kefeng reported that when running the following test, the Mlocked count
>> in /proc/meminfo is not decreased afterwards:
>> [1] testcase
>> linux:~ # cat test_mlockal
>> grep Mlocked /proc/meminfo
>> for j in `seq 0 10`
>> do
>> 	for i in `seq 4 15`
>> 	do
>> 		./p_mlockall >> log &
>> 	done
>> 	sleep 0.2
>> done
>> sleep 5 # wait some time to let mlock decrease
>> grep Mlocked /proc/meminfo
>>
>> linux:~ # cat p_mlockall.c
>> #include <sys/mman.h>
>> #include <stdlib.h>
>> #include <stdio.h>
>>
>> #define SPACE_LEN 4096
>>
>> int main(int argc, char **argv)
>> {
>> 	int ret;
>> 	void *adr = malloc(SPACE_LEN);
>> 	if (!adr)
>> 		return -1;
>>
>> 	ret = mlockall(MCL_CURRENT | MCL_FUTURE);
>> 	printf("mlockall ret = %d\n", ret);
>>
>> 	ret = munlockall();
>> 	printf("munlockall ret = %d\n", ret);
>>
>> 	free(adr);
>> 	return 0;
>> }
>>
>> In __munlock_pagevec(), a page can have PageMlocked cleared but then fail
>> LRU isolation due to a race condition. Such pages are not counted into
>> delta_munlocked, which leaves the mlock
>
> Race condition with what? Who else would isolate our pages?
>
>> counter incorrect: since PageMlocked has already been cleared, the counter
>> can never be decreased for these pages in the future.
>>
>> Fix it by counting the number of pages whose PageMlocked flag is cleared.
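>>
>> For example (illustrative numbers, not from Kefeng's report): if a pagevec
>> holds nr = 14 pages that all pass TestClearPageMlocked() but 2 of them fail
>> __munlock_isolate_lru_page(), the old code computes
>>
>> 	delta_munlocked = -nr + pagevec_count(&pvec_putback) = -14 + 2 = -12
>>
>> while NR_MLOCK should drop by 14, one per cleared flag. The counter is left
>> 2 pages too high, permanently, because those 2 pages no longer have
>> PageMlocked set and will never be accounted again.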
>>
>> Reported-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
>
> Weird, I can reproduce the issue on my desktop's 4.11 distro kernel, but
> not in qemu with a small kernel build, for some reason. So I couldn't test
> the patch yet. But it's true that before 7225522bb429 ("mm: munlock:
> batch non-THP page isolation and munlock+putback using pagevec") we
> decreased NR_MLOCK for each page that passed TestClearPageMlocked(),
> and that unintentionally changed with my patch. There should be a Fixes:
> tag for that.
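>
> For reference, the per-page accounting back then looked roughly like this
> (a from-memory sketch of the pre-7225522bb429 munlock_vma_page(), not
> verbatim source):
>
> 	if (TestClearPageMlocked(page)) {
> 		unsigned int nr_pages = hpage_nr_pages(page);
>
> 		/* one decrement per cleared flag, regardless of isolation */
> 		mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
> 		if (!isolate_lru_page(page))
> 			__munlock_isolated_page(page);
> 		else
> 			__munlock_isolation_failed(page);
> 	}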
>
Hi Vlastimil,
Why is the page marked Mlocked but not on the LRU list?

	if (TestClearPageMlocked(page)) {
		/*
		 * We already have pin from follow_page_mask()
		 * so we can spare the get_page() here.
		 */
		if (__munlock_isolate_lru_page(page, false))
			continue;
		else
			__munlock_isolation_failed(page);  // How does this happen?
	}
Thanks,
Xishi Qiu
>> ---
>> mm/mlock.c | 7 ++++---
>> 1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/mlock.c b/mm/mlock.c
>> index c483c5c..71ba5cf 100644
>> --- a/mm/mlock.c
>> +++ b/mm/mlock.c
>> @@ -284,7 +284,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
>>  {
>>  	int i;
>>  	int nr = pagevec_count(pvec);
>> -	int delta_munlocked;
>> +	int munlocked = 0;
>>  	struct pagevec pvec_putback;
>>  	int pgrescued = 0;
>>
>> @@ -296,6 +296,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
>>  		struct page *page = pvec->pages[i];
>>
>>  		if (TestClearPageMlocked(page)) {
>> +			munlocked--;
>>  			/*
>>  			 * We already have pin from follow_page_mask()
>>  			 * so we can spare the get_page() here.
>> @@ -315,8 +316,8 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
>>  		pagevec_add(&pvec_putback, pvec->pages[i]);
>>  		pvec->pages[i] = NULL;
>>  	}
>> -	delta_munlocked = -nr + pagevec_count(&pvec_putback);
>> -	__mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
>> +	if (munlocked)
>
> You don't have to if () this, it should be very rare that munlocked will
> be 0, and the code works fine even if it is.
>
>> +		__mod_zone_page_state(zone, NR_MLOCK, munlocked);
>>  	spin_unlock_irq(zone_lru_lock(zone));
>>
>>  	/* Now we can release pins of pages that we are not munlocking */
>>
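Folding the hunks together (and dropping the if () as suggested above), the
accounting in __munlock_pagevec() would end up roughly as below; this is a
sketch against ~4.11 sources, not compile-tested:

	spin_lock_irq(zone_lru_lock(zone));
	for (i = 0; i < nr; i++) {
		struct page *page = pvec->pages[i];

		if (TestClearPageMlocked(page)) {
			/*
			 * Count the page whether or not the isolation
			 * below succeeds: its PG_mlocked flag is gone
			 * either way.
			 */
			munlocked--;
			if (__munlock_isolate_lru_page(page, false))
				continue;
			else
				__munlock_isolation_failed(page);
		}
		pagevec_add(&pvec_putback, pvec->pages[i]);
		pvec->pages[i] = NULL;
	}
	__mod_zone_page_state(zone, NR_MLOCK, munlocked);
	spin_unlock_irq(zone_lru_lock(zone));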