From: Dev Jain <dev.jain@arm.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: akpm@linux-foundation.org, shuah@kernel.org, david@redhat.com,
willy@infradead.org, ryan.roberts@arm.com,
anshuman.khandual@arm.com, catalin.marinas@arm.com,
cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com,
apopple@nvidia.com, osalvador@suse.de,
baolin.wang@linux.alibaba.com, dave.hansen@linux.intel.com,
will@kernel.org, baohua@kernel.org, ioworker0@gmail.com,
gshan@redhat.com, mark.rutland@arm.com,
kirill.shutemov@linux.intel.com, hughd@google.com,
aneesh.kumar@kernel.org, yang@os.amperecomputing.com,
peterx@redhat.com, broonie@kernel.org,
mgorman@techsingularity.net,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-kselftest@vger.kernel.org
Subject: Re: [PATCH 1/2] mm: Retry migration earlier upon refcount mismatch
Date: Tue, 20 Aug 2024 12:46:22 +0530
Message-ID: <c2ca1845-7eec-4119-b7b6-f6694e4a7799@arm.com>
In-Reply-To: <87a5h9hud7.fsf@yhuang6-desk2.ccr.corp.intel.com>

On 8/19/24 12:28, Huang, Ying wrote:
> Dev Jain <dev.jain@arm.com> writes:
>
>> On 8/13/24 12:52, Dev Jain wrote:
>>> On 8/13/24 10:30, Dev Jain wrote:
>>>> On 8/12/24 17:38, Dev Jain wrote:
>>>>> On 8/12/24 13:01, Huang, Ying wrote:
>>>>>> Dev Jain <dev.jain@arm.com> writes:
>>>>>>
>>>>>>> On 8/12/24 11:45, Huang, Ying wrote:
>>>>>>>> Dev Jain <dev.jain@arm.com> writes:
>>>>>>>>
>>>>>>>>> On 8/12/24 11:04, Huang, Ying wrote:
>>>>>>>>>> Hi, Dev,
>>>>>>>>>>
>>>>>>>>>> Dev Jain <dev.jain@arm.com> writes:
>>>>>>>>>>
>>>>>>>>>>> As already being done in __migrate_folio(), wherein we back
>>>>>>>>>>> off if the folio refcount is wrong, make this check during
>>>>>>>>>>> the unmapping phase, upon the failure of which the original
>>>>>>>>>>> state of the PTEs will be restored and the folio lock will be
>>>>>>>>>>> dropped via migrate_folio_undo_src(); any racing thread will
>>>>>>>>>>> then make progress and migration will be retried.
>>>>>>>>>>>
>>>>>>>>>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>>>>>>>>>> ---
>>>>>>>>>>> mm/migrate.c | 9 +++++++++
>>>>>>>>>>> 1 file changed, 9 insertions(+)
>>>>>>>>>>>
>>>>>>>>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>>>>>>>>> index e7296c0fb5d5..477acf996951 100644
>>>>>>>>>>> --- a/mm/migrate.c
>>>>>>>>>>> +++ b/mm/migrate.c
>>>>>>>>>>> @@ -1250,6 +1250,15 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>>>>>>>>>>>  	}
>>>>>>>>>>>  	if (!folio_mapped(src)) {
>>>>>>>>>>> +		/*
>>>>>>>>>>> +		 * Someone may have changed the refcount and maybe sleeping
>>>>>>>>>>> +		 * on the folio lock. In case of refcount mismatch, bail out,
>>>>>>>>>>> +		 * let the system make progress and retry.
>>>>>>>>>>> +		 */
>>>>>>>>>>> +		struct address_space *mapping = folio_mapping(src);
>>>>>>>>>>> +
>>>>>>>>>>> +		if (folio_ref_count(src) != folio_expected_refs(mapping, src))
>>>>>>>>>>> +			goto out;
>>>>>>>>>>>  		__migrate_folio_record(dst, old_page_state, anon_vma);
>>>>>>>>>>>  		return MIGRATEPAGE_UNMAP;
>>>>>>>>>>>  	}
>>>>>>>>>> Do you have some test results for this? For example, after
>>>>>>>>>> applying the patch, the migration success rate increased XX%,
>>>>>>>>>> etc.
>>>>>>>>> I'll get back to you on this.
>>>>>>>>>
>>>>>>>>>> My understanding for this issue is that the migration success
>>>>>>>>>> rate can increase if we undo all changes before retrying. This
>>>>>>>>>> is the current behavior for sync migration, but not for async
>>>>>>>>>> migration. If so, we can use migrate_pages_sync() for async
>>>>>>>>>> migration too to increase success rate? Of course, we need to
>>>>>>>>>> change the function name and comments.
>>>>>>>>> As per my understanding, this is not the current behaviour for
>>>>>>>>> sync migration. After successful unmapping, we fail in
>>>>>>>>> migrate_folio_move() with -EAGAIN; we do not undo src+dst
>>>>>>>>> (rendering the loop around migrate_folio_move() futile), and we
>>>>>>>>> do not push the failed folio onto the ret_folios list.
>>>>>>>>> Therefore, in _sync(), _batch() is never tried again.
>>>>>>>> In migrate_pages_sync(), migrate_pages_batch(,MIGRATE_ASYNC) will
>>>>>>>> be called first; if that fails, the folio will be restored to its
>>>>>>>> original state (unlocked). Then migrate_pages_batch(,_SYNC*) is
>>>>>>>> called again. So, we unlock once. If it's necessary, we can
>>>>>>>> unlock more times via another level of loop.
>>>>>>> Yes, that's my point. We need to undo src+dst and retry.
>>>>>> For sync migration, we undo src+dst and retry now, but only once. You
>>>>>> have shown that more retrying increases success rate.
>>>>>>
>>>>>>> We will have to decide where we want this retrying to be; do we
>>>>>>> want to change the return value, end up in the while loop wrapped
>>>>>>> around _sync(), and retry there by adding another level of loop,
>>>>>>> or do we want to make use of the existing retry loops, one of
>>>>>>> which is wrapped around _unmap(); the latter is my approach. The
>>>>>>> utility I see for the former approach is that, in case of a large
>>>>>>> number of page migrations (which should usually be the case), we
>>>>>>> are giving more time for the folio to get retried. The latter does
>>>>>>> not give much time and discards the folio if it does not succeed
>>>>>>> within 7 tries.
>>>>>> Because it's a race, I guess that most folios will be migrated
>>>>>> successfully in the first pass.
>>>>>>
>>>>>> My concern with your method is that it deals with just one case
>>>>>> specially, while retrying after undoing all appears more general.
>>>>>
>>>>> Makes sense. Also, please ignore my "change the return value"
>>>>> thing; I got confused between unmap_folios, ret_folios, etc.
>>>>> Now I think I understand what the lists are doing :)
>>>>>
>>>>>> If it's really important to retry after undoing all, we can either
>>>>>> convert the two retrying loops of migrate_pages_batch() into one
>>>>>> loop, or remove the retry loop in migrate_pages_batch() and retry
>>>>>> in its caller instead.
>>>>> And if I implemented this correctly, the following always makes the
>>>>> test pass:
>>>>> https://www.codedump.xyz/diff/Zrn7EdxzNXmXyNXe
>>>>
>>>> Okay, I did mess up the implementation, leading to a false positive.
>>>> Let me try again :)
>>>
>>> Hopefully this should do the job:
>>> https://www.codedump.xyz/diff/ZrsIV8JSOPYx5V_u
>>>
>>> But the result is worse than with the proposed patch; I rarely hit a
>>> 3-digit number of move_pages() successes. But on a base kernel
>>> without any changes, when I apply David's suggestion to change the
>>> test and choose 7 as the number of retries
>>> (= NR_MAX_MIGRATE_SYNC_RETRY) in the test, I can even touch 4 digits.
>>> I am puzzled.
>>> We can also try merging the for loops of unmap and move...
>>
>> If people are okay with this change, I guess I can send it as
>> a v2? I concur with your assessment that my initial approach
>> is solving a specific case; the above approach does give me
>> a slight improvement on arm64 and should be an improvement
>> in general, since it makes sense to defer retrying the failed folio
>> as much as we can.
> We need to deal with a few more things before a formal v2:
>
> - stats need to be fixed; please check the result processing for the
> first loop of migrate_pages_sync().
Sorry, can you point out exactly where they need to be fixed?
The change I did is inside the while(!list_empty(from)) block, and
there is currently no stat computation being done there.
>
> - Do we need something similar for async migration?
>
> - Can we add another level of explicit loop for the second loop of
> migrate_pages_sync()? That is to improve code readability. Or, add a
> function to do that?
>
> - Is it good to remove the retry loop in migrate_pages_batch() and do
> the retry in the caller instead?
I am personally in favour of leaving the retry loop, and async
migration, as they are. Since the async version is basically
minimal-effort migration, it won't make sense to "optimize" it, given
the code churn it would create, including the change we would then have
to make to the "if (mode == MIGRATE_ASYNC) => migrate_pages_batch(ASYNC)"
path inside migrate_pages().

Sorry, what do you mean by "another level of explicit loop"?
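Do you mean wrapping the second migrate_pages_batch() call (the one
inside the while (!list_empty(from)) block) in an explicit outer pass
loop, roughly like the sketch below? This is just an untested sketch of
what I understood, paraphrased from memory; NR_EXTRA_SYNC_PASSES is a
made-up placeholder, and the elided arguments to migrate_pages_batch()
are meant to stay as they are today.

	while (!list_empty(from)) {
		list_move(from->next, &folios);

		/*
		 * Hypothetical extra pass loop around the sync batch call.
		 * Failed folios would presumably need to be moved back onto
		 * &folios between passes for the retry to actually happen.
		 */
		for (pass = 0; pass < NR_EXTRA_SYNC_PASSES; pass++) {
			rc = migrate_pages_batch(&folios, /* ... */ mode,
						 /* ... */
						 NR_MAX_MIGRATE_SYNC_RETRY);
			if (rc <= 0)	/* hard error, or nothing failed */
				break;
		}
		/* existing stats/result handling stays as it is */
		...
	}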
>
> --
> Best Regards,
> Huang, Ying