From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Shakeel Butt <shakeelb@google.com>
Cc: David Hildenbrand <david@redhat.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	Yang Shi <shy828301@gmail.com>, Zi Yan <ziy@nvidia.com>,
	Matthew Wilcox <willy@infradead.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: split thp synchronously on MADV_DONTNEED
Date: Mon, 22 Nov 2021 03:50:47 +0300
Message-ID: <20211122005047.ufnyvqlqu55c5trt@box>
In-Reply-To: <20211120201230.920082-1-shakeelb@google.com>

On Sat, Nov 20, 2021 at 12:12:30PM -0800, Shakeel Butt wrote:
> Many applications do sophisticated management of their heap memory for
> better performance at low cost. We have a number of such applications
> running in production; examples include caching and data storage
> services. These applications keep their hot data on THPs for better
> performance and release the cold data through MADV_DONTNEED to keep the
> memory cost low.
> 
> The kernel defers the split and release of THPs until there is memory
> pressure. This complicates the memory management of these sophisticated
> applications, which then need to look into the low-level kernel
> handling of THPs to better gauge their headroom for expansion. In
> addition, these applications are very latency sensitive and would
> prefer not to face memory reclaim, due to the non-deterministic nature
> of reclaim.
> 
> This patch lets such applications avoid worrying about the low-level
> handling of THPs in the kernel by splitting the THPs synchronously on
> MADV_DONTNEED.

Have you considered the impact on short-lived tasks, where paying the
splitting tax would hurt performance without any benefit? Maybe a
separate madvise operation is needed? I dunno.
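
For reference, the usage pattern the changelog describes looks roughly
like this from userspace (the sizes and offsets below are made up, just
to illustrate):

#include <string.h>
#include <sys/mman.h>

#define RANGE_SZ	(16UL << 20)	/* whole heap arena */
#define COLD_OFF	(4UL << 20)	/* start of the cold part */
#define COLD_SZ		(4UL << 20)	/* size of the cold part */

int main(void)
{
	/* Anonymous mapping; with THP enabled it may be backed by huge pages. */
	char *buf = mmap(NULL, RANGE_SZ, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	memset(buf, 0xaa, RANGE_SZ);	/* populate: the "hot" phase */

	/*
	 * Drop the cold part.  Today the backing THPs only get queued for
	 * deferred split; with this patch they would be split and freed
	 * right here.
	 */
	if (madvise(buf + COLD_OFF, COLD_SZ, MADV_DONTNEED))
		return 1;

	return 0;
}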

> Signed-off-by: Shakeel Butt <shakeelb@google.com>
> ---
>  include/linux/mmzone.h   |  5 ++++
>  include/linux/sched.h    |  4 ++++
>  include/linux/sched/mm.h | 11 +++++++++
>  kernel/fork.c            |  3 +++
>  mm/huge_memory.c         | 50 ++++++++++++++++++++++++++++++++++++++++
>  mm/madvise.c             |  8 +++++++
>  6 files changed, 81 insertions(+)
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 58e744b78c2c..7fa0035128b9 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -795,6 +795,11 @@ struct deferred_split {
>  	struct list_head split_queue;
>  	unsigned long split_queue_len;
>  };
> +void split_local_deferred_list(struct list_head *defer_list);
> +#else
> +static inline void split_local_deferred_list(struct list_head *defer_list)
> +{
> +}
>  #endif
>  
>  /*
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 9d27fd0ce5df..a984bb6509d9 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1412,6 +1412,10 @@ struct task_struct {
>  	struct mem_cgroup		*active_memcg;
>  #endif
>  
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +	struct list_head		*deferred_split_list;
> +#endif
> +
>  #ifdef CONFIG_BLK_CGROUP
>  	struct request_queue		*throttle_queue;
>  #endif

It looks dirty. Do we really have no way to pass it down?

Maybe pass the list down via zap_details and call a new rmap remove
helper if the list is present?
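
Something along these lines, maybe (the field and helper names below
are invented, and locking/refcounting details are omitted; it's only to
show the shape of it):

/* Sketch only: names invented, not a tested patch. */
struct zap_details {
	/* ... existing fields ... */
	struct list_head *split_list;	/* THPs the caller wants split */
};

/* Called from the pmd-level zap path when a mapped THP goes away. */
static void zap_collect_thp_for_split(struct page *page,
				      struct zap_details *details)
{
	if (!details || !details->split_list)
		return;

	/* The caller drops this reference after splitting the page. */
	get_page(page);
	list_move(page_deferred_list(page), details->split_list);
}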

>  
> +void split_local_deferred_list(struct list_head *defer_list)
> +{
> +	struct list_head *pos, *next;
> +	struct page *page;
> +
> +	/* First iteration for split. */
> +	list_for_each_safe(pos, next, defer_list) {
> +		page = list_entry((void *)pos, struct page, deferred_list);
> +		page = compound_head(page);
> +
> +		if (!trylock_page(page))
> +			continue;
> +
> +		if (split_huge_page(page)) {
> +			unlock_page(page);
> +			continue;
> +		}
> +		/* split_huge_page() removes page from list on success */
> +		unlock_page(page);
> +
> +		/* corresponding get in deferred_split_huge_page. */
> +		put_page(page);
> +	}
> +
> +	/* Second iteration to putback failed pages. */
> +	list_for_each_safe(pos, next, defer_list) {
> +		struct deferred_split *ds_queue;
> +		unsigned long flags;
> +
> +		page = list_entry((void *)pos, struct page, deferred_list);
> +		page = compound_head(page);
> +		ds_queue = get_deferred_split_queue(page);
> +
> +		spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
> +		list_move(page_deferred_list(page), &ds_queue->split_queue);
> +		spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
> +
> +		/* corresponding get in deferred_split_huge_page. */
> +		put_page(page);
> +	}
> +}

Looks like a lot of copy-paste from deferred_split_scan(). Can we get them
consolidated?
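
E.g. factor the trylock + split loop into one helper that both
deferred_split_scan() and the new code call; roughly (helper name made
up):

/*
 * Hypothetical common helper: try to split every page on @list.
 * split_huge_page() takes successfully split pages off the list, so
 * whatever is left afterwards is for the caller to requeue; dropping
 * references also stays with the callers, since they differ.
 */
static void try_to_split_list(struct list_head *list)
{
	struct list_head *pos, *next;
	struct page *page;

	list_for_each_safe(pos, next, list) {
		page = list_entry((void *)pos, struct page, deferred_list);
		page = compound_head(page);

		if (!trylock_page(page))
			continue;

		/* split_huge_page() removes page from list on success */
		split_huge_page(page);
		unlock_page(page);
	}
}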

-- 
 Kirill A. Shutemov

