From: Michal Hocko <mhocko@suse.com>
To: Minchan Kim <minchan@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-mm <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	joaodias@google.com, surenb@google.com, cgoldswo@codeaurora.org,
	willy@infradead.org, david@redhat.com, vbabka@suse.cz,
	linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH 1/2] mm: disable LRU pagevec during the migration temporarily
Date: Wed, 3 Mar 2021 13:49:36 +0100
Message-ID: <YD+F4LgPH0zMBDGW@dhcp22.suse.cz>
In-Reply-To: <20210302210949.2440120-1-minchan@kernel.org>

On Tue 02-03-21 13:09:48, Minchan Kim wrote:
> The LRU pagevec holds a refcount on its pages until the pagevec is
> drained. That can prevent migration because the page's refcount is
> then greater than what the migration logic expects. To mitigate the
> issue, callers of migrate_pages drain the LRU pagevec via
> migrate_prep or lru_add_drain_all before calling migrate_pages.
> 
> However, that is not enough, because pages that enter a pagevec after
> the draining call can still sit there and keep preventing page
> migration. Since some callers of migrate_pages have retry logic with
> LRU draining, the page would migrate on the next attempt, but this is
> still fragile: it does not close the fundamental race between pages
> entering a pagevec and migration, so the migration failure could
> ultimately cause a contiguous memory allocation failure.
> 
> To close the race, this patch disables the LRU caches (i.e., the
> pagevecs) while migration is ongoing, until the migration is done.
> 
> Since the race is really hard to reproduce, I measured how many times
> migrate_pages retried with force mode using the debug code below.
> 
> int migrate_pages(struct list_head *from, new_page_t get_new_page,
> 			..
> 			..
> 
> if (rc && reason == MR_CONTIG_RANGE && pass > 2) {
>        printk(KERN_ERR "pfn 0x%lx reason %d\n", page_to_pfn(page), rc);
>        dump_page(page, "fail to migrate");
> }
> 
> The test repeatedly launched Android apps, with CMA allocations
> running in the background every five seconds. The total CMA
> allocation count was about 500 during the test. With this patch, the
> dump_page count was reduced from 400 to 30.

Have you seen any improvement on the CMA allocation success rate?

> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
> * from RFC - http://lore.kernel.org/linux-mm/20210216170348.1513483-1-minchan@kernel.org
>   * use atomic and lru_add_drain_all for strict ordering - mhocko
>   * lru_cache_disable/enable - mhocko
> 
>  fs/block_dev.c          |  2 +-
>  include/linux/migrate.h |  6 +++--
>  include/linux/swap.h    |  4 ++-
>  mm/compaction.c         |  4 +--
>  mm/fadvise.c            |  2 +-
>  mm/gup.c                |  2 +-
>  mm/khugepaged.c         |  2 +-
>  mm/ksm.c                |  2 +-
>  mm/memcontrol.c         |  4 +--
>  mm/memfd.c              |  2 +-
>  mm/memory-failure.c     |  2 +-
>  mm/memory_hotplug.c     |  2 +-
>  mm/mempolicy.c          |  6 +++++
>  mm/migrate.c            | 15 ++++++-----
>  mm/page_alloc.c         |  5 +++-
>  mm/swap.c               | 55 +++++++++++++++++++++++++++++++++++------
>  16 files changed, 85 insertions(+), 30 deletions(-)

The churn seems quite big for something that should have been a very
small change. Have you considered not changing lru_add_drain_all but
rather introducing __lru_add_drain_all to implement the enforced
flushing?
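
Something along these lines is what I have in mind - only a sketch of
the call structure, the helper name and the force parameter are mine
rather than taken from your patch:

	/*
	 * Keep the exported interface untouched and move the body into
	 * an internal variant that can force scheduling the drain work
	 * on every online CPU even if its pagevecs look empty.
	 */
	static void __lru_add_drain_all(bool force_all_cpus)
	{
		/*
		 * Same body as today's lru_add_drain_all(), except the
		 * per-CPU "is there anything to drain?" test becomes
		 *	force_all_cpus || pagevec_count(...) || ...
		 */
	}

	void lru_add_drain_all(void)
	{
		__lru_add_drain_all(false);
	}

	void lru_cache_disable(void)
	{
		atomic_inc(&lru_disable_count);
		__lru_add_drain_all(true);
	}

That would leave all the existing lru_add_drain_all callers untouched.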

[...]
> +static atomic_t lru_disable_count = ATOMIC_INIT(0);
> +
> +bool lru_cache_disabled(void)
> +{
> +	return atomic_read(&lru_disable_count);
> +}
> +
> +void lru_cache_disable(void)
> +{
> +	/*
> +	 * lru_add_drain_all's IPI will make sure no new pages are added
> +	 * to the pcp lists and drain them all.
> +	 */
> +	atomic_inc(&lru_disable_count);

As already mentioned in the last review, the IPI reference is more
cryptic than useful. I would go with something like this instead:

	/*
	 * lru_add_drain_all in the force mode will schedule draining on
	 * all online CPUs so any calls of lru_cache_disabled wrapped by
	 * local_lock or preemption disabled would be ordered by that.
	 * The atomic operation doesn't need to have stronger ordering
	 * requirements because that is enforced by the scheduling
	 * guarantees.
	 */
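
To make the ordering part concrete, the consumer I have in mind is the
pagevec-add path, which already runs under the per-CPU local_lock.
>From memory - so the exact helper and field names might differ from
your patch - roughly:

	void lru_cache_add(struct page *page)
	{
		struct pagevec *pvec;

		get_page(page);
		local_lock(&lru_pvecs.lock);
		pvec = this_cpu_ptr(&lru_pvecs.lru_add);
		/*
		 * Either this section runs before the drain work
		 * scheduled by lru_cache_disable() - then that work will
		 * flush the page added here - or after it, in which case
		 * the elevated lru_disable_count is visible and the
		 * pagevec is flushed immediately instead of caching the
		 * page.
		 */
		if (!pagevec_add(pvec, page) || PageCompound(page) ||
				lru_cache_disabled())
			__pagevec_lru_add(pvec);
		local_unlock(&lru_pvecs.lock);
	}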
> +
> +	/*
> +	 * Clear the LRU lists so pages can be isolated.
> +	 */
> +	lru_add_drain_all(true);
> +}
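
For completeness, I assume the enable side is nothing more than the
matching decrement, and whoever drives the migration brackets the whole
operation. A sketch - the caller below is only illustrative, it is not
lifted from the patch:

	void lru_cache_enable(void)
	{
		/* New pages may be cached on the per-cpu pagevecs again. */
		atomic_dec(&lru_disable_count);
	}

	/* e.g. in a migrate_pages() user */
	lru_cache_disable();
	/* isolate pages, call migrate_pages(), retry as needed */
	lru_cache_enable();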
-- 
Michal Hocko
SUSE Labs

