From: David Hildenbrand <david@redhat.com>
To: Dev Jain <dev.jain@arm.com>, Will Deacon <will@kernel.org>
Cc: akpm@linux-foundation.org, willy@infradead.org,
ryan.roberts@arm.com, anshuman.khandual@arm.com,
catalin.marinas@arm.com, cl@gentwo.org, vbabka@suse.cz,
mhocko@suse.com, apopple@nvidia.com, osalvador@suse.de,
baolin.wang@linux.alibaba.com, dave.hansen@linux.intel.com,
baohua@kernel.org, ioworker0@gmail.com, gshan@redhat.com,
mark.rutland@arm.com, kirill.shutemov@linux.intel.com,
hughd@google.com, aneesh.kumar@kernel.org,
yang@os.amperecomputing.com, peterx@redhat.com,
broonie@kernel.org, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
mgorman@techsingularity.net
Subject: Re: Race condition observed between page migration and page fault handling on arm64 machines
Date: Fri, 9 Aug 2024 15:26:16 +0200
Message-ID: <d89ed326-2512-4011-b586-cad886087478@redhat.com>
In-Reply-To: <79924d5b-b807-440a-a6e8-8f0ae13aa04b@redhat.com>
On 09.08.24 15:23, David Hildenbrand wrote:
> On 07.08.24 14:58, Dev Jain wrote:
>>
>> On 8/7/24 17:09, David Hildenbrand wrote:
>>> On 05.08.24 16:14, Dev Jain wrote:
>>>>
>>>> On 8/5/24 16:16, David Hildenbrand wrote:
>>>>> On 05.08.24 11:51, Dev Jain wrote:
>>>>>>
>>>>>> On 8/1/24 19:18, David Hildenbrand wrote:
>>>>>>> On 01.08.24 15:43, Will Deacon wrote:
>>>>>>>> On Thu, Aug 01, 2024 at 03:26:57PM +0200, David Hildenbrand wrote:
>>>>>>>>> On 01.08.24 15:13, David Hildenbrand wrote:
>>>>>>>>>>>>> To dampen the tradeoff, we could do this in shmem_fault() instead?
>>>>>>>>>>>>> But then, this would mean that we do this in all kinds of
>>>>>>>>>>>>> vma->vm_ops->fault, only when we discover another reference count
>>>>>>>>>>>>> race condition :) Doing this in do_fault() should solve this once
>>>>>>>>>>>>> and for all. In fact, do_pte_missing() may call do_anonymous_page()
>>>>>>>>>>>>> or do_fault(), and I just noticed that the former already checks
>>>>>>>>>>>>> this using vmf_pte_changed().
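>>>>>>>>>>>>>
>>>>>>>>>>>>> Roughly the kind of recheck I mean (an untested sketch that just
>>>>>>>>>>>>> mirrors what do_anonymous_page() already does under the PTL, not a
>>>>>>>>>>>>> tested patch):
>>>>>>>>>>>>>
>>>>>>>>>>>>> 	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>>>>>>>>>>> 				       vmf->address, &vmf->ptl);
>>>>>>>>>>>>> 	if (!vmf->pte)
>>>>>>>>>>>>> 		return 0;	/* PTE table went away, just retry the fault */
>>>>>>>>>>>>> 	if (vmf_pte_changed(vmf)) {
>>>>>>>>>>>>> 		/* e.g. a migration entry got installed concurrently */
>>>>>>>>>>>>> 		update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
>>>>>>>>>>>>> 		pte_unmap_unlock(vmf->pte, vmf->ptl);
>>>>>>>>>>>>> 		return 0;
>>>>>>>>>>>>> 	}
>>>>>>>>>>>>> 	pte_unmap_unlock(vmf->pte, vmf->ptl);
>>>>>>>>>>>>> 	/* ... only then call vma->vm_ops->fault ... */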
>>>>>>>>>>>>
>>>>>>>>>>>> What I am still missing is why this is (a) arm64 only; and (b)
>>>>>>>>>>>> whether this is something we should really worry about. There are
>>>>>>>>>>>> other reasons (e.g., speculative references) why migration could
>>>>>>>>>>>> temporarily fail; does it happen so often that it is really
>>>>>>>>>>>> something we have to worry about?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> (a) See discussion at [1]; I guess it passes on x86, which is quite
>>>>>>>>>>> strange since the race is clearly arch-independent.
>>>>>>>>>>
>>>>>>>>>> Yes, I think this is what we have to understand. Is the race simply
>>>>>>>>>> less likely to trigger on x86?
>>>>>>>>>>
>>>>>>>>>> I would assume that it would trigger on any arch.
>>>>>>>>>>
>>>>>>>>>> I just ran it on an x86 VM with 2 NUMA nodes and it also seems to
>>>>>>>>>> work here.
>>>>>>>>>>
>>>>>>>>>> Is this maybe related to deferred flushing? Such that the other CPU
>>>>>>>>>> is just a little less likely to observe the !pte_none PTE by accident?
>>>>>>>>>>
>>>>>>>>>> But arm64 also usually defers flushes, right? At least unless
>>>>>>>>>> ARM64_WORKAROUND_REPEAT_TLBI is around; with that we never do
>>>>>>>>>> deferred flushes.
>>>>>>>>>
>>>>>>>>> Bingo!
>>>>>>>>>
>>>>>>>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>>>>>>>> index e51ed44f8b53..ce94b810586b 100644
>>>>>>>>> --- a/mm/rmap.c
>>>>>>>>> +++ b/mm/rmap.c
>>>>>>>>> @@ -718,10 +718,7 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
>>>>>>>>>   */
>>>>>>>>>  static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
>>>>>>>>>  {
>>>>>>>>> -	if (!(flags & TTU_BATCH_FLUSH))
>>>>>>>>> -		return false;
>>>>>>>>> -
>>>>>>>>> -	return arch_tlbbatch_should_defer(mm);
>>>>>>>>> +	return false;
>>>>>>>>>  }
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On x86:
>>>>>>>>>
>>>>>>>>> # ./migration
>>>>>>>>> TAP version 13
>>>>>>>>> 1..1
>>>>>>>>> # Starting 1 tests from 1 test cases.
>>>>>>>>> # RUN migration.shared_anon ...
>>>>>>>>> Didn't migrate 1 pages
>>>>>>>>> # migration.c:170:shared_anon:Expected migrate(ptr, self->n1, self->n2) (-2) == 0 (0)
>>>>>>>>> # shared_anon: Test terminated by assertion
>>>>>>>>> # FAIL migration.shared_anon
>>>>>>>>> not ok 1 migration.shared_anon
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> It fails all of the time!
>>>>>>>>
>>>>>>>> Nice work! I suppose that makes sense as, with the eager TLB
>>>>>>>> invalidation, the window between the other CPU faulting and the
>>>>>>>> migration entry being written is fairly wide.
>>>>>>>>
>>>>>>>> Not sure about a fix though :/ It feels a bit overkill to add a new
>>>>>>>> invalid pte encoding just for this.
>>>>>>>
>>>>>>> Something like that might make the test happy in most cases:
>>>>>>>
>>>>>>> diff --git a/tools/testing/selftests/mm/migration.c b/tools/testing/selftests/mm/migration.c
>>>>>>> index 6908569ef406..4c18bfc13b94 100644
>>>>>>> --- a/tools/testing/selftests/mm/migration.c
>>>>>>> +++ b/tools/testing/selftests/mm/migration.c
>>>>>>> @@ -65,6 +65,7 @@ int migrate(uint64_t *ptr, int n1, int n2)
>>>>>>>  	int ret, tmp;
>>>>>>>  	int status = 0;
>>>>>>>  	struct timespec ts1, ts2;
>>>>>>> +	int errors = 0;
>>>>>>>
>>>>>>>  	if (clock_gettime(CLOCK_MONOTONIC, &ts1))
>>>>>>>  		return -1;
>>>>>>> @@ -79,12 +80,17 @@ int migrate(uint64_t *ptr, int n1, int n2)
>>>>>>>  		ret = move_pages(0, 1, (void **) &ptr, &n2, &status,
>>>>>>>  				 MPOL_MF_MOVE_ALL);
>>>>>>>  		if (ret) {
>>>>>>> -			if (ret > 0)
>>>>>>> +			if (ret > 0) {
>>>>>>> +				if (++errors < 100)
>>>>>>> +					continue;
>>>>>>>  				printf("Didn't migrate %d pages\n", ret);
>>>>>>> -			else
>>>>>>> +			} else {
>>>>>>>  				perror("Couldn't migrate pages");
>>>>>>> +			}
>>>>>>>  			return -2;
>>>>>>>  		}
>>>>>>> +		/* Progress! */
>>>>>>> +		errors = 0;
>>>>>>>
>>>>>>>  		tmp = n2;
>>>>>>>  		n2 = n1;
>>>>>>>
>>>>>>>
>>>>>>> [root@localhost mm]# ./migration
>>>>>>> TAP version 13
>>>>>>> 1..1
>>>>>>> # Starting 1 tests from 1 test cases.
>>>>>>> # RUN migration.shared_anon ...
>>>>>>> # OK migration.shared_anon
>>>>>>> ok 1 migration.shared_anon
>>>>>>> # PASSED: 1 / 1 tests passed.
>>>>>>> # Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
>>>>>>
>>>>>>
>>>>>> This does make the test pass, to my surprise, since what you are doing
>>>>>> from userspace should have been done by the kernel, because it retries
>>>>>> folio unmapping and moving NR_MAX_MIGRATE_(A)SYNC_RETRY times; I had
>>>>>> already tested bumping up these macros and the original test was still
>>>>>> failing. Now, I dug in more, and, if the following assertion is correct:
>>>>>>
>>>>>> Any thread having a reference on a folio will end up calling
>>>>>> folio_lock()
>>>>>>
>>>>>
>>>>> Good point. I suspect concurrent things like read/write would also be
>>>>> able to trigger this (did not check, though).
>>>>>
>>>>>>
>>>>>> then it seems to me that the retry for loop wrapped around
>>>>>> migrate_folio_move(), inside migrate_pages_batch(), is useless: if
>>>>>> migrate_folio_move() fails on the first iteration, it is going to fail
>>>>>> for all iterations since, if I am reading the code path correctly, the
>>>>>> only way it fails is when the actual refcount is not equal to the
>>>>>> expected refcount (in folio_migrate_mapping()), and there is no way that
>>>>>> the extra refcount is going to get released, since the migration path
>>>>>> has the folio lock.
>>>>>>
>>>>>> And therefore, this begs the question: isn't it logical to assert the
>>>>>> actual refcount against the expected refcount just after we have changed
>>>>>> the PTEs, so that if this assertion fails, we can go to the next
>>>>>> iteration of the for loop for migrate_folio_unmap() inside
>>>>>> migrate_pages_batch(), by calling migrate_folio_undo_src()/dst() to
>>>>>> restore the old state? I am trying to implement this, but it is not as
>>>>>> straightforward as it seemed to me this morning.
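>>>>>>
>>>>>> In pseudo-code, the check I have in mind inside migrate_pages_batch()
>>>>>> (very rough, argument lists omitted, not compile-tested):
>>>>>>
>>>>>> 	/* after try_to_migrate() has replaced the PTEs by migration entries */
>>>>>> 	if (folio_ref_count(src) != expected_refs /* folio_expected_refs() */) {
>>>>>> 		/*
>>>>>> 		 * If the extra reference is held by someone sleeping in
>>>>>> 		 * folio_lock(), it will not go away while we hold the folio
>>>>>> 		 * lock, so retrying the move phase cannot help; undo and go
>>>>>> 		 * back to the unmap retry loop instead.
>>>>>> 		 */
>>>>>> 		migrate_folio_undo_src(...);
>>>>>> 		migrate_folio_undo_dst(...);
>>>>>> 		continue;
>>>>>> 	}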
>>>>>
>>>>> I agree with your assessment that migration code currently doesn't
>>>>> handle the case well when some other thread does an unconditional
>>>>> folio_lock(). folio_trylock() users would be handled, but that's not
>>>>> what we want with FGP_LOCK I assume.
>>>>>
>>>>> So IIUC, your idea would be to unlock the folio in migration code and
>>>>> try again there. Sounds reasonable, without looking into the details :)
>>>>
>>>>
>>>
>>> BTW, I was trying to find the spot that would do the folio_lock(), but
>>> filemap_fault() does the lock_folio_maybe_drop_mmap() where we do a
>>> folio_trylock().
>>>
>>> Where exactly is the folio_lock() on the fault path that would
>>> prohibit us from making progress?
>>
>> Not filemap_fault(); it enters shmem_fault(), which eventually calls
>> shmem_get_folio_gfp(); that retrieves the folio from the pagecache and
>> calls folio_lock().
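>>
>> The relevant bit of shmem_get_folio_gfp() is roughly this (paraphrased
>> from mm/shmem.c, details trimmed):
>>
>> 	folio = filemap_get_entry(inode->i_mapping, index);
>> 	...
>> 	if (folio) {
>> 		folio_lock(folio);	/* unconditional, may sleep */
>> 		/* recheck mapping/truncation, then return the folio */
>> 	}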
>
> Ah, thanks!
>
> ... which raises the question of whether we should handle it similarly to
> filemap_fault(): essentially drop the reference and retry using
> VM_FAULT_RETRY. Hmmmmm
... just had another look at lock_folio_maybe_drop_mmap(), and we also
usually end up in __folio_lock() / __folio_lock_killable();
folio_trylock() is only used for the fast path. So no, that wouldn't
change that much.
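
From memory, the structure is roughly the following (simplified, see
mm/filemap.c for the real thing):

	static int lock_folio_maybe_drop_mmap(struct vm_fault *vmf,
			struct folio *folio, struct file **fpin)
	{
		if (folio_trylock(folio))
			return 1;		/* fast path */
		if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT)
			return 0;		/* caller returns VM_FAULT_RETRY */

		*fpin = maybe_unlock_mmap_for_io(vmf, *fpin);
		if (vmf->flags & FAULT_FLAG_KILLABLE) {
			if (__folio_lock_killable(folio))
				return 0;	/* interrupted by a fatal signal */
		} else {
			__folio_lock(folio);	/* sleeps, just like the shmem path */
		}
		return 1;
	}

So unless FAULT_FLAG_RETRY_NOWAIT is set (or the trylock happens to win),
we still sleep on the folio lock, merely after dropping the mmap lock.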
--
Cheers,
David / dhildenb