linux-mm.kvack.org archive mirror
From: yangge1116 <yangge1116@126.com>
To: David Hildenbrand <david@redhat.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	akpm@linux-foundation.org, Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, liuzixing@hygon.cn
Subject: Re: [PATCH] mm/gup: don't check page lru flag before draining it
Date: Sat, 8 Jun 2024 12:38:49 +0800	[thread overview]
Message-ID: <48fb0e58-16d1-7956-cf35-74741826617a@126.com> (raw)
In-Reply-To: <3a368e38-a4cb-413e-a6d9-41c6b3dbd5ae@redhat.com>



On 2024/6/6 at 3:56 PM, David Hildenbrand wrote:
>>> Some random thoughts about some folio_test_lru() users:
>>>
>>> mm/khugepaged.c: skips pages if !folio_test_lru(), but would fail/skip
>>> it either way if there is an unexpected reference from the LRU batch!
>>>
>>> mm/compaction.c: skips pages if !folio_test_lru(), but would fail/skip
>>> it either way if there is an unexpected reference from the LRU batch!
>>>
>>> mm/memory.c: would love to identify this case and do a lru_add_drain()
>>> to free up that reference.
>>>
>>> mm/huge_memory.c: splitting with the additional reference will fail
>>> already. Maybe we'd want to drain the LRU batch.
>>
>> Agree.
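
[ As an illustration of why those paths fail anyway: they compare the
folio's reference count against the references they can account for, and
a reference held by a per-CPU LRU batch is not one of them. A rough
sketch, not actual kernel code: ]

/*
 * Rough illustration only: khugepaged and the migration core both bail
 * out when the refcount does not match what they can account for
 * (mappings, page cache, their own temporary reference).  A reference
 * held by a per-CPU LRU batch is invisible here, so the folio is
 * rejected even without the folio_test_lru() check.
 */
static bool folio_refs_all_accounted(struct folio *folio, int expected_refs)
{
        return folio_ref_count(folio) == expected_refs;
}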
>>
>>>
>>> mm/madvise.c: skips pages if !folio_test_lru(). I wonder what happens if
>>> we have the same page twice in an LRU batch with different target 
>>> goals ...
>>
>> IIUC, the LRU batch can ignore this folio since its LRU flag is cleared by
>> folio_isolate_lru(), and will then call folios_put() to drop the reference.
>>
> 
> I think what's interesting to highlight in the current design is that a 
> folio might end up in multiple LRU batches, and whatever the result will 
> be is determined by the sequence of them getting flushed. Doesn't sound 
> quite right but maybe there was a reason for it (which could just have 
> been "simpler implementation").
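
[ For reference, the flush path in mm/swap.c looks roughly like the sketch
below (paraphrased from memory, not verbatim; details vary by kernel
version). The !folio_test_clear_lru() check is what lets a batch skip a
folio that was isolated in the meantime, and folios_put() at the end is
where the batch's extra reference is dropped: ]

/* Paraphrased sketch of folio_batch_move_lru() in mm/swap.c, not verbatim. */
static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
{
        struct lruvec *lruvec = NULL;
        unsigned long flags = 0;
        int i;

        for (i = 0; i < folio_batch_count(fbatch); i++) {
                struct folio *folio = fbatch->folios[i];

                /* Skip folios whose LRU flag was already cleared,
                 * e.g. by folio_isolate_lru(). */
                if (move_fn != lru_add_fn && !folio_test_clear_lru(folio))
                        continue;

                lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
                move_fn(lruvec, folio);
                folio_set_lru(folio);
        }

        if (lruvec)
                unlock_page_lruvec_irqrestore(lruvec, flags);
        /* Drop the reference each batched folio was holding. */
        folios_put(fbatch);
}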
> 
>>
>>> Some other users (there are not that many that don't use it for sanity
>>> checks though) will likely be a bit different.
> 
> There are also some PageLRU checks, but not that many.
> 
>>
>> mm/page_isolation.c: fails to set the pageblock migratetype to isolate if
>> !folio_test_lru(), so alloc_contig_range_noprof() can fail. But the
>> original code could set the pageblock migratetype to isolate and then
>> call drain_all_pages() in alloc_contig_range_noprof() to drop the
>> reference held by the LRU batch.
>>
>> mm/vmscan.c: will call lru_add_drain() before calling
>> isolate_lru_folios(), so there seems to be no impact.
> 
> lru_add_drain() will only drain the local CPU. So if the folio were 
> stuck in another CPU's LRU batch, right now we could still isolate it. 
> When that LRU batch is processed while the folio is still isolated, it 
> would currently simply skip the operation.
> 
> So right now we can call isolate_lru_folios() even if the folio is stuck 
> on another CPU's LRU batch.
> 
> We cannot really reclaim the folio as long as it is in another CPU's LRU 
> batch, though (unexpected reference).
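
[ In other words, only the global drain helps with a remote CPU's batch.
A hedged illustration of the two helpers; only the two drain function
names are from mm/swap.c, the wrapper below is made up: ]

/*
 * Illustration only: lru_add_drain() flushes the per-CPU folio batches of
 * the calling CPU, while lru_add_drain_all() queues drain work on every
 * CPU that has pending batches.  Only the latter helps when the folio
 * sits in another CPU's batch.
 */
static void drain_for_isolation(bool folio_may_be_in_remote_batch)
{
        if (folio_may_be_in_remote_batch)
                lru_add_drain_all();    /* expensive: work on all CPUs */
        else
                lru_add_drain();        /* cheap: current CPU only */
}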
> 
>>
>> BTW, we also need to look at the usage of folio_isolate_lru().
> 
> Yes.
> 
>>
>> It doesn’t seem to have major obstacles, but there are many details to
>> analyze :)
> 
> Yes, we're only scratching the surface.
> 
> Having a way to identify "this folio is very likely in some CPU's LRU 
> batch" could end up being quite valuable, because we likely don't want 
> to blindly drain the LRU simply because there is some unexpected 
> reference on a folio [as we would in this patch].
> 
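
[ For context, the hunk under discussion is roughly the following, in the
long-term pinning migration path in mm/gup.c; paraphrased, the helper
wrapping is mine and the exact shape differs between kernel versions: ]

/*
 * Paraphrased sketch of the change being discussed, not a verbatim hunk.
 * Before the patch, the drain was skipped whenever folio_test_lru() was
 * still set:
 *
 *      if (!folio_test_lru(folio) && drain_allow) { ... }
 *
 * but a folio can sit in a per-CPU LRU batch (e.g. a deactivate batch)
 * with its LRU flag still set, and the batch's extra reference later
 * makes the migration of that folio fail.  The patch drains without
 * looking at the flag:
 */
static void maybe_drain_before_isolation(bool *drain_allow)
{
        if (*drain_allow) {
                lru_add_drain_all();
                *drain_allow = false;
        }
}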

Can we add a PG_lru_batch flag to determine whether a page is in an LRU 
batch? If we can, it seems this problem will be easier to solve.
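
A purely hypothetical sketch of the idea (PG_lru_batch and the helpers
below do not exist in the kernel, they only illustrate the proposal):

/*
 * Hypothetical only: mark a folio when it is placed on a per-CPU LRU
 * batch and clear the mark when the batch is flushed, so callers could
 * tell an "unexpected reference from an LRU batch" apart from other
 * references before deciding whether an expensive lru_add_drain_all()
 * is worthwhile.
 */
static void lru_batch_add(struct folio_batch *fbatch, struct folio *folio)
{
        folio_get(folio);               /* reference held by the batch */
        folio_set_lru_batch(folio);     /* hypothetical flag setter */
        folio_batch_add(fbatch, folio);
}

static bool folio_maybe_in_lru_batch(struct folio *folio)
{
        return folio_test_lru_batch(folio);    /* hypothetical flag test */
}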




Thread overview: 25+ messages
2024-06-04 10:48 yangge1116
2024-06-04 13:47 ` David Hildenbrand
2024-06-05  1:18   ` yangge1116
2024-06-05  9:41     ` David Hildenbrand
2024-06-05  9:53       ` David Hildenbrand
2024-06-05 11:37         ` Baolin Wang
2024-06-05 11:41           ` David Hildenbrand
2024-06-05 12:20             ` David Hildenbrand
2024-06-06  1:57               ` Baolin Wang
2024-06-06  7:56                 ` David Hildenbrand
2024-06-08  4:38                   ` yangge1116 [this message]
2024-06-08 15:15                     ` Matthew Wilcox
2024-06-08 16:03                       ` David Hildenbrand
2024-06-11 11:20                         ` yangge1116
2024-06-12  7:32                           ` David Hildenbrand
2024-06-15 11:44                             ` yangge1116
2024-06-17  9:50                             ` yangge1116
2024-06-17  9:52                               ` David Hildenbrand
2024-06-17 11:22                                 ` yangge1116
2024-06-06  1:35         ` yangge1116
2024-06-06  7:39           ` David Hildenbrand
2024-06-06  8:50             ` yangge1116
2024-06-04  8:09 yangge1116
2024-06-04  8:56 ` Baolin Wang
2024-06-04  9:18   ` yangge1116
