From: Alex Shi <alex.shi@linux.alibaba.com>
To: Alexander Duyck <alexander.duyck@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Mel Gorman <mgorman@techsingularity.net>,
Tejun Heo <tj@kernel.org>, Hugh Dickins <hughd@google.com>,
Konstantin Khlebnikov <khlebnikov@yandex-team.ru>,
Daniel Jordan <daniel.m.jordan@oracle.com>,
Yang Shi <yang.shi@linux.alibaba.com>,
Matthew Wilcox <willy@infradead.org>,
Johannes Weiner <hannes@cmpxchg.org>,
kbuild test robot <lkp@intel.com>, linux-mm <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>,
cgroups@vger.kernel.org, Shakeel Butt <shakeelb@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Wei Yang <richard.weiyang@gmail.com>,
"Kirill A. Shutemov" <kirill@shutemov.name>,
Rong Chen <rong.a.chen@intel.com>
Subject: Re: [PATCH v17 14/21] mm/compaction: do page isolation first in compaction
Date: Thu, 13 Aug 2020 11:52:31 +0800 [thread overview]
Message-ID: <3d224c35-a53d-3daa-4c76-026d1f2b2656@linux.alibaba.com> (raw)
In-Reply-To: <CAKgT0Ud6ZQ4ZTm1cAUKCdb8FMu0fk9vXgf-bnmb0aY5ndDHwyA@mail.gmail.com>
On 2020/8/13 10:17 AM, Alexander Duyck wrote:
>> The zone lock is probably better. You can try and test it.
> So I spent a good chunk of today looking the code over and what I
> realized is that we probably don't even really need to have this code
> protected by the zone lock since the LRU bit in the pageblock should
> do most of the work for us. In addition, we can get rid of the test
> portion and just make it a set-only operation, if I am not mistaken.
>
>>>>> the LRU flag is cleared then you are creating a situation where
>>>>> multiple processes will be stomping all over each other as you can
>>>>> have each thread essentially take a page via the LRU flag, but only
>>>>> one thread will process a page and it could skip over all other pages
>>>>> that preemptively had their LRU flag cleared.
>>>> It increases contention a bit here, but the lru_lock does reduce some of
>>>> it, and the skip bit can stop threads from colliding via an array check
>>>> (bitmap). So compared with the whole-node lru_lock, the net gain is clear
>>>> in patch 17.
>>> My concern is that what you can end up with is multiple threads all
>>> working over the same pageblock for isolation. With the old code the
>>> LRU lock was used to make certain that test_and_set_skip was being
>>> synchronized on the first page in the pageblock so you would only have
>>> one thread going through and working a single pageblock. However after
>>> your changes it doesn't seem like the test_and_set_skip has that
>>> protection since only one thread will ever be able to successfully
>>> call it for the first page in the pageblock assuming that the LRU flag
>>> is set on the first page in the pageblock.
>>>
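
For reference, test_and_set_skip at the time looked roughly like this (a
sketch modeled on mm/compaction.c around v5.8, abbreviated rather than
verbatim): only the pageblock-aligned PFN ever touches the hint.

static bool test_and_set_skip(struct compact_control *cc, struct page *page,
			      unsigned long pfn)
{
	bool skip;

	/* Do not update if the skip hint is being ignored. */
	if (cc->ignore_skip_hint)
		return false;

	/* Only the first PFN of the pageblock acts on the hint. */
	if (!IS_ALIGNED(pfn, pageblock_nr_pages))
		return false;

	skip = get_pageblock_skip(page);
	if (!skip && !cc->no_set_skip_hint)
		set_pageblock_skip(page);

	return skip;
}
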
>>>>> If you take a look at the test_and_set_skip the function only acts on
>>>>> the pageblock-aligned PFN for a given range. With the changes you have
>>>>> in place now that would mean that only one thread would ever actually
>>>>> call this function anyway since the first PFN would take the LRU flag
>>>>> so no other thread could follow through and test or set the bit as
>>>> Is it good that only one process can do test_and_set_skip? Isn't that
>>>> what the 'skip' bit is meant to do?
>>> So only one thread really getting to fully use test_and_set_skip is
>>> good; however, the issue is that there is nothing to synchronize the
>>> testing from the other threads. As a result the other threads could
>>> have isolated other pages within the pageblock before the thread that
>>> is calling test_and_set_skip will get to complete the setting of the
>>> skip bit. This will result in isolation failures for the thread that
>>> set the skip bit which may be undesirable behavior.
>>>
>>> With the old code the threads were all synchronized on testing the
>>> first PFN in the pageblock while holding the LRU lock and that is what
>>> we lost. My concern is the cases where skip_on_failure == true are
>>> going to fail much more often now as the threads can easily interfere
>>> with each other.
>> I have a patch to fix this, which is on
>> https://github.com/alexshi/linux.git lrunext
> I don't think that patch helps to address anything. You are now
> failing to set the bit in the case that something modifies the
> pageblock flags while you are attempting to do so. I think it would be
> better to just leave the cmpxchg loop as it is.
It does improve the case-lru-file-mmap-read case in vm-scalability by about 3%.
Yes, I am glad to see it can be made better.
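
For reference, the retry discipline at stake looks roughly like this (a
userspace C11 sketch of a set-only update; the helper name and types are
illustrative, not the kernel's pageblock-flags code): losing the cmpxchg
means retrying with the fresh value, never dropping the set.

#include <stdatomic.h>

/* Set-only update: if the CAS loses because a concurrent writer
 * changed the word, reload and retry rather than giving up, so the
 * skip bit is never silently dropped. */
static void set_skip_bit(_Atomic unsigned long *word, unsigned long mask)
{
	unsigned long old = atomic_load(word);

	while (!(old & mask)) {
		/* On failure, 'old' is refreshed with the current value. */
		if (atomic_compare_exchange_weak(word, &old, old | mask))
			break;
	}
}
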
>
>>>>> well. The expectation before was that all threads would encounter this
>>>>> test and either proceed after setting the bit for the first PFN or
>>>>> abort after testing the first PFN. With your changes only the first
>>>>> thread actually runs this test and then it and the others will likely
>>>>> encounter multiple failures as they are all clearing LRU bits
>>>>> simultaneously and tripping each other up. That is why the skip bit
>>>>> must have a test and set done before you even get to the point of
>>>>> clearing the LRU flag.
>>>> It makes things worse on my machine; would you like to try it yourself?
>>> I plan to do that. I have already been working on a few things to
>>> clean up and optimize your patch set further. I will try to submit an
>>> RFC this evening so we can discuss.
>>>
>> Glad to see your new code soon. Would you like to base it on
>> https://github.com/alexshi/linux.git lrunext?
> I can rebase off of that tree. It may add another half hour or so. I
> have barely had any time to test my code. When I enabled some of the
> debugging features in the kernel related to using the vm-scalability
> tests the boot time became incredibly slow so I may just make certain
> I can boot and not mess the system up before submitting my patches as
> an RFC. I can probably try testing them more tomorrow.
>
>>>>>>> The point I was getting at with the PageCompound check is that instead
>>>>>>> of needing the LRU lock you should be able to look at PageCompound as
>>>>>>> soon as you call get_page_unless_zero() and preempt the need to set
>>>>>>> the LRU bit again. Instead of trying to rely on the LRU lock to
>>>>>>> guarantee that the page hasn't been merged you could just rely on the
>>>>>>> fact that you are holding a reference to it so it isn't going to
>>>>>>> switch between being compound or order 0 since it cannot be freed. It
>>>>>>> spoils the idea I originally had of combining the logic for
>>>>>>> get_page_unless_zero and TestClearPageLRU into a single function, but
>>>>>>> the advantage is you aren't clearing the LRU flag unless you are
>>>>>>> actually going to pull the page from the LRU list.
>>>>>> Sorry, I still cannot follow you here. The compound-page code is unchanged
>>>>>> and follows the original logic. So would you like to post new code so we
>>>>>> can see if it works?
>>>>> No there are significant changes as you reordered all of the
>>>>> operations. Prior to your change the LRU bit was checked, but not
>>>>> cleared before testing for PageCompound. Now you are clearing it
>>>>> before you are testing if it is a compound page. So if compaction is
>>>>> running we will be seeing the pages in the LRU stay put, but the
>>>>> compound bit flickering off and on if the compound page is encountered
>>>>> with the wrong or NULL lruvec. What I was suggesting is that the
>>>> The lruvec could be wrong or NULL here; that is the cornerstone of the
>>>> whole patchset.
>>> Sorry I had a typo in my comment as well as it is the LRU bit that
>>> will be flickering, not the compound. The goal here is to avoid
>>> clearing the LRU bit unless we are sure we are going to take the
>>> lruvec lock and pull the page from the list.
>>>
>>>>> PageCompound test probably doesn't need to be concerned with the lock
>>>>> after your changes. You could test it after you call
>>>>> get_page_unless_zero() and before you call
>>>>> __isolate_lru_page_prepare(). Instead of relying on the LRU lock to
>>>>> protect us from the page switching between compound and not we would
>>>>> be relying on the fact that we are holding a reference to the page so
>>>>> it should not be freed and transition between compound or not.
>>>>>
>>>> I have tried the patch as you suggested; it gives no clear performance help
>>>> on the above vm-scalability case. Maybe that is because we already check the
>>>> same thing before taking the lock.
>>>>
>>>> diff --git a/mm/compaction.c b/mm/compaction.c
>>>> index b99c96c4862d..cf2ac5148001 100644
>>>> --- a/mm/compaction.c
>>>> +++ b/mm/compaction.c
>>>> @@ -985,6 +985,16 @@ static bool too_many_isolated(pg_data_t *pgdat)
>>>> if (unlikely(!get_page_unless_zero(page)))
>>>> goto isolate_fail;
>>>>
>>>> + /*
>>>> + * Page become compound since the non-locked check,
>>>> + * and it's on LRU. It can only be a THP so the order
>>>> + * is safe to read and it's 0 for tail pages.
>>>> + */
>>>> + if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
>>>> + low_pfn += compound_nr(page) - 1;
>>>> + goto isolate_fail_put;
>>>> + }
>>>> +
>>>> if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
>>>> goto isolate_fail_put;
>>>>
>>>> @@ -1013,16 +1023,6 @@ static bool too_many_isolated(pg_data_t *pgdat)
>>>> goto isolate_abort;
>>>> }
>>>>
>>>> - /*
>>>> - * Page become compound since the non-locked check,
>>>> - * and it's on LRU. It can only be a THP so the order
>>>> - * is safe to read and it's 0 for tail pages.
>>>> - */
>>>> - if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
>>>> - low_pfn += compound_nr(page) - 1;
>>>> - SetPageLRU(page);
>>>> - goto isolate_fail_put;
>>>> - }
>>>> } else
>>>> rcu_read_unlock();
>>>>
>>> So actually there is more we could do than just this. Specifically a
>>> few lines below the rcu_read_lock there is yet another PageCompound
>>> check that sets low_pfn yet again. So in theory we could combine both
>>> of those and modify the code so you end up with something more like:
>>> @@ -968,6 +974,16 @@ isolate_migratepages_block(struct compact_control
>>> *cc, unsigned long low_pfn,
>>> if (unlikely(!get_page_unless_zero(page)))
>>> goto isolate_fail;
>>>
>>> + if (PageCompound(page)) {
>>> + const unsigned int order = compound_order(page);
>>> +
>>> + if (likely(order < MAX_ORDER))
>>> + low_pfn += (1UL << order) - 1;
>>> +
>>> + if (unlikely(!cc->alloc_contig))
>>> + goto isolate_fail_put;
>>>
>> The current code doesn't check this unless 'locked' changed. But checking it
>> for every page anyway may have no performance impact.
> Yes and no. The same code is also run outside the lock and that is why
> I suggested merging the two and creating this block of logic. It will
> be clearer once I have done some initial smoke testing and submitted
> my patch.
>
>>> + }
>>> +
>>> if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
>>> goto isolate_fail_put;
>>>
>>> Doing this you would be more likely to skip over the entire compound
>>> page in a single jump should you either fail to take the LRU
>>> bit or encounter a busy page in __isolate_lru_page_prepare. I had
>>> copied this bit from an earlier check and modified it as I was not
>>> sure I could guarantee that this is a THP since we haven't taken the LRU
>>> lock yet. However I believe the page cannot be split up while we are
>>> holding the extra reference so the PageCompound flag and order should
>>> not change until we call put_page.
>>>
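As a worked example of that single jump (values made up for illustration):
an order-N compound page spans 1UL << N PFNs, so adding (1UL << N) - 1 and
letting the scan loop's own low_pfn++ supply the final step lands on the
first PFN past the page.

#include <stdio.h>

int main(void)
{
	unsigned long low_pfn = 0x1200;	/* head PFN of a THP (illustrative) */
	unsigned int order = 9;		/* 2MB THP on x86-64 */

	low_pfn += (1UL << order) - 1;	/* jump to the tail PFN */
	low_pfn++;			/* the scan loop's own increment */

	printf("next pfn scanned: %#lx\n", low_pfn);	/* prints 0x1400 */
	return 0;
}
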
>> It looks like lock_page protects against this rather than get_page, which
>> only matters after the split function is called.
> So I thought that the call to page_ref_freeze that is used in
> functions like split_huge_page_to_list is meant to address this case.
> What it is essentially doing is setting the reference count to zero if
> the count is at the expected value. So with the get_page_unless_zero
> it would either fail because the value is already zero, or the
> page_ref_freeze would fail because the count would be one higher than
> the expected value. Either that or I am still missing another piece in
> the understanding of this.
Uh, the xa_lock or anon_vma lock taken up front guards the _refcount, so it's a long locking path...
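
For what it's worth, the mutual exclusion described above can be sketched in
userspace C11 (simplified models of page_ref_freeze and get_page_unless_zero,
not the kernel implementations): whichever CAS wins first makes the other fail.

#include <stdatomic.h>
#include <stdbool.h>

/* Succeeds only if the count is exactly 'expected', freezing it to 0.
 * An extra reference taken first makes the count expected + 1, so the
 * CAS fails. */
static bool ref_freeze(_Atomic int *refcount, int expected)
{
	return atomic_compare_exchange_strong(refcount, &expected, 0);
}

/* Takes a reference unless the count is 0. A freeze that ran first
 * left the count at 0, so this fails. */
static bool get_unless_zero(_Atomic int *refcount)
{
	int old = atomic_load(refcount);

	while (old != 0) {
		if (atomic_compare_exchange_weak(refcount, &old, old + 1))
			return true;
	}
	return false;
}
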
Thanks
Alex
Thread overview: 102+ messages
2020-07-25 12:59 [PATCH v17 00/21] per memcg lru lock Alex Shi
2020-07-25 12:59 ` [PATCH v17 01/21] mm/vmscan: remove unnecessary lruvec adding Alex Shi
2020-08-06 3:47 ` Alex Shi
2020-07-25 12:59 ` [PATCH v17 02/21] mm/page_idle: no unlikely double check for idle page counting Alex Shi
2020-07-25 12:59 ` [PATCH v17 03/21] mm/compaction: correct the comments of compact_defer_shift Alex Shi
2020-07-27 17:29 ` Alexander Duyck
2020-07-28 11:59 ` Alex Shi
2020-07-28 14:17 ` Alexander Duyck
2020-07-25 12:59 ` [PATCH v17 04/21] mm/compaction: rename compact_deferred as compact_should_defer Alex Shi
2020-07-25 12:59 ` [PATCH v17 05/21] mm/thp: move lru_add_page_tail func to huge_memory.c Alex Shi
2020-07-25 12:59 ` [PATCH v17 06/21] mm/thp: clean up lru_add_page_tail Alex Shi
2020-07-25 12:59 ` [PATCH v17 07/21] mm/thp: remove code path which never got into Alex Shi
2020-07-25 12:59 ` [PATCH v17 08/21] mm/thp: narrow lru locking Alex Shi
2020-07-25 12:59 ` [PATCH v17 09/21] mm/memcg: add debug checking in lock_page_memcg Alex Shi
2020-07-25 12:59 ` [PATCH v17 10/21] mm/swap: fold vm event PGROTATED into pagevec_move_tail_fn Alex Shi
2020-07-25 12:59 ` [PATCH v17 11/21] mm/lru: move lru_lock holding in func lru_note_cost_page Alex Shi
2020-08-05 21:18 ` Alexander Duyck
2020-07-25 12:59 ` [PATCH v17 12/21] mm/lru: move lock into lru_note_cost Alex Shi
2020-07-25 12:59 ` [PATCH v17 13/21] mm/lru: introduce TestClearPageLRU Alex Shi
2020-07-29 3:53 ` Alex Shi
2020-08-05 22:43 ` Alexander Duyck
2020-08-06 1:54 ` Alex Shi
2020-08-06 14:41 ` Alexander Duyck
2020-07-25 12:59 ` [PATCH v17 14/21] mm/compaction: do page isolation first in compaction Alex Shi
2020-08-04 21:35 ` Alexander Duyck
2020-08-06 18:38 ` Alexander Duyck
2020-08-07 3:24 ` Alex Shi
2020-08-07 14:51 ` Alexander Duyck
2020-08-10 13:10 ` Alex Shi
2020-08-10 14:41 ` Alexander Duyck
2020-08-11 8:22 ` Alex Shi
2020-08-11 14:47 ` Alexander Duyck
2020-08-12 11:43 ` Alex Shi
2020-08-12 12:16 ` Alex Shi
2020-08-12 16:51 ` Alexander Duyck
2020-08-13 1:46 ` Alex Shi
2020-08-13 2:17 ` Alexander Duyck
2020-08-13 3:52 ` Alex Shi [this message]
2020-08-13 4:02 ` [RFC PATCH 0/3] " Alexander Duyck
2020-08-13 4:02 ` [RFC PATCH 1/3] mm: Drop locked from isolate_migratepages_block Alexander Duyck
2020-08-13 6:56 ` Alex Shi
2020-08-13 14:32 ` Alexander Duyck
2020-08-14 7:25 ` Alex Shi
2020-08-13 7:44 ` Alex Shi
2020-08-13 14:26 ` Alexander Duyck
2020-08-13 4:02 ` [RFC PATCH 2/3] mm: Drop use of test_and_set_skip in favor of just setting skip Alexander Duyck
2020-08-14 7:19 ` Alex Shi
2020-08-14 14:24 ` Alexander Duyck
2020-08-14 21:15 ` Alexander Duyck
2020-08-15 9:49 ` Alex Shi
2020-08-17 15:38 ` Alexander Duyck
2020-08-18 6:50 ` Alex Shi
2020-08-13 4:02 ` [RFC PATCH 3/3] mm: Identify compound pages sooner in isolate_migratepages_block Alexander Duyck
2020-08-14 7:20 ` Alex Shi
2020-08-17 22:58 ` [PATCH v17 14/21] mm/compaction: do page isolation first in compaction Alexander Duyck
2020-07-25 12:59 ` [PATCH v17 15/21] mm/thp: add tail pages into lru anyway in split_huge_page() Alex Shi
2020-07-25 12:59 ` [PATCH v17 16/21] mm/swap: serialize memcg changes in pagevec_lru_move_fn Alex Shi
2020-07-25 12:59 ` [PATCH v17 17/21] mm/lru: replace pgdat lru_lock with lruvec lock Alex Shi
2020-07-27 23:34 ` Alexander Duyck
2020-07-28 7:15 ` Alex Shi
2020-07-28 11:19 ` Alex Shi
2020-07-28 14:54 ` Alexander Duyck
2020-07-29 1:00 ` Alex Shi
2020-07-29 1:27 ` Alexander Duyck
2020-07-29 2:27 ` Alex Shi
2020-07-28 15:39 ` Alex Shi
2020-07-28 15:55 ` Alexander Duyck
2020-07-29 0:48 ` Alex Shi
2020-07-29 3:54 ` Alex Shi
2020-08-06 7:41 ` Alex Shi
2020-07-25 12:59 ` [PATCH v17 18/21] mm/lru: introduce the relock_page_lruvec function Alex Shi
2020-07-29 17:52 ` Alexander Duyck
2020-07-30 6:08 ` Alex Shi
2020-07-31 14:20 ` Alexander Duyck
2020-07-31 21:14 ` [PATCH RFC] mm: Add function for testing if the current lruvec lock is valid alexander.h.duyck
2020-07-31 23:54 ` Alex Shi
2020-08-02 18:20 ` Alexander Duyck
2020-08-04 6:13 ` Alex Shi
2020-07-25 12:59 ` [PATCH v17 19/21] mm/vmscan: use relock for move_pages_to_lru Alex Shi
2020-08-03 22:49 ` Alexander Duyck
2020-08-04 6:23 ` Alex Shi
2020-07-25 12:59 ` [PATCH v17 20/21] mm/pgdat: remove pgdat lru_lock Alex Shi
2020-08-03 22:42 ` Alexander Duyck
2020-08-03 22:45 ` Alexander Duyck
2020-08-04 6:22 ` Alex Shi
2020-07-25 12:59 ` [PATCH v17 21/21] mm/lru: revise the comments of lru_lock Alex Shi
2020-08-03 22:37 ` Alexander Duyck
2020-08-04 10:04 ` Alex Shi
2020-08-04 14:29 ` Alexander Duyck
2020-08-06 1:39 ` Alex Shi
2020-08-06 16:27 ` Alexander Duyck
2020-07-27 5:40 ` [PATCH v17 00/21] per memcg lru lock Alex Shi
2020-07-29 14:49 ` Alex Shi
2020-07-29 18:06 ` Hugh Dickins
2020-07-30 2:16 ` Alex Shi
2020-08-03 15:07 ` Michal Hocko
2020-08-04 6:14 ` Alex Shi
2020-07-31 21:31 ` Alexander Duyck
2020-08-04 8:36 ` Alex Shi
2020-08-04 8:36 ` Alex Shi
2020-08-04 8:37 ` Alex Shi
2020-08-04 8:37 ` Alex Shi