From: Barry Song <21cnbao@gmail.com>
To: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: yangge1116 <yangge1116@126.com>,
	akpm@linux-foundation.org, linux-mm@kvack.org,
	 linux-kernel@vger.kernel.org, liuzixing@hygon.cn,
	 Johannes Weiner <hannes@cmpxchg.org>,
	Vlastimil Babka <vbabka@suse.cz>, Zi Yan <ziy@nvidia.com>
Subject: Re: [PATCH] mm/page_alloc: skip THP-sized PCP list when allocating non-CMA THP-sized page
Date: Mon, 17 Jun 2024 19:55:21 +0800
Message-ID: <CAGsJ_4zOOK0-AiLsN0Sw_q3ikPNuk8ofZHsYfV1WkK_6-QsmVw@mail.gmail.com>
In-Reply-To: <6dc8df31-eb01-4382-8467-c5510f75531e@linux.alibaba.com>

On Mon, Jun 17, 2024 at 7:36 PM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
>
>
>
> On 2024/6/17 18:43, Barry Song wrote:
> > On Thu, Jun 6, 2024 at 3:07 PM Baolin Wang
> > <baolin.wang@linux.alibaba.com> wrote:
> >>
> >>
> >>
> >> On 2024/6/4 20:36, yangge1116 wrote:
> >>>
> >>>
> >>> 在 2024/6/4 下午8:01, Baolin Wang 写道:
> >>>> Cc Johannes, Zi and Vlastimil.
> >>>>
> >>>> On 2024/6/4 17:14, yangge1116@126.com wrote:
> >>>>> From: yangge <yangge1116@126.com>
> >>>>>
> >>>>> Since commit 5d0a661d808f ("mm/page_alloc: use only one PCP list for
> >>>>> THP-sized allocations"), the THP-sized PCP list no longer
> >>>>> differentiates the migration type of its pages, so it is possible to
> >>>>> get a CMA page from the list. In some cases this is not acceptable;
> >>>>> for example, an allocation with the PF_MEMALLOC_PIN flag must not
> >>>>> return a CMA page, but currently it can.
> >>>>>
> >>>>> This patch forbids allocating a non-CMA THP-sized page from the
> >>>>> THP-sized PCP list to avoid the issue above.
> >>>>>
> >>>>> Fixes: 5d0a661d808f ("mm/page_alloc: use only one PCP list for THP-sized allocations")
> >>>>> Signed-off-by: yangge <yangge1116@126.com>
> >>>>> ---
> >>>>>    mm/page_alloc.c | 10 ++++++++++
> >>>>>    1 file changed, 10 insertions(+)
> >>>>>
> >>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >>>>> index 2e22ce5..0bdf471 100644
> >>>>> --- a/mm/page_alloc.c
> >>>>> +++ b/mm/page_alloc.c
> >>>>> @@ -2987,10 +2987,20 @@ struct page *rmqueue(struct zone *preferred_zone,
> >>>>>        WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
> >>>>>        if (likely(pcp_allowed_order(order))) {
> >>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> >>>>> +        if (!IS_ENABLED(CONFIG_CMA) || alloc_flags & ALLOC_CMA ||
> >>>>> +                        order != HPAGE_PMD_ORDER) {
> >>>>
> >>>> It seems you will also miss non-CMA THPs from the PCP, so I wonder if
> >>>> we can add a migratetype comparison in __rmqueue_pcplist(), and if the
> >>>> page is not suitable, fall back to the buddy allocator?
> >>>
> >>> Yes, we may miss some non-CMA THPs in the PCP. But if we add a
> >>> migratetype comparison in __rmqueue_pcplist(), we may need to compare
> >>> many times because of the pcp batch.
> >>
> >> I mean we only need to compare once, focusing on CMA pages.
> >>
> >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >> index 3734fe7e67c0..960a3b5744d8 100644
> >> --- a/mm/page_alloc.c
> >> +++ b/mm/page_alloc.c
> >> @@ -2973,6 +2973,11 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
> >>                   }
> >>
> >>                   page = list_first_entry(list, struct page, pcp_list);
> >> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> >> +               if (order == HPAGE_PMD_ORDER && !is_migrate_movable(migratetype) &&
> >> +                   is_migrate_cma(get_pageblock_migratetype(page)))
> >> +                       return NULL;
> >> +#endif
> >
> > This doesn't seem ideal either. It's possible that the PCP still has many
> > non-CMA folios but, due to bad luck, the first entry is "always" CMA. In
> > that case, allocations with is_migrate_movable(migratetype) == false will
> > always lose the chance to use the PCP. It also appears to incur a PCP
> > spin lock/unlock.
>
> Yes, just some ideas to mitigate the issue...
>
> >
> > I don't see an ideal solution unless we bring back the CMA PCP :-)
>
> Tend to agree, and the overhead of adding a CMA PCP list seems acceptable?

Yes, probably. Hi Ge,

Could we printk the size of struct per_cpu_pages before and after adding 1
to NR_PCP_LISTS? Does it grow by one cacheline?
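A throwaway debug line somewhere early in an init path would be enough for
that (illustrative only, not something to merge):

	pr_info("per_cpu_pages: %zu bytes, NR_PCP_LISTS: %d\n",
		sizeof(struct per_cpu_pages), NR_PCP_LISTS);

For reference, the struct currently looks like this: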

struct per_cpu_pages {
	spinlock_t lock;		/* Protects lists field */
	int count;			/* number of pages in the list */
	int high;			/* high watermark, emptying needed */
	int high_min;			/* min high watermark */
	int high_max;			/* max high watermark */
	int batch;			/* chunk size for buddy add/remove */
	u8 flags;			/* protected by pcp->lock */
	u8 alloc_factor;		/* batch scaling factor during allocate */
#ifdef CONFIG_NUMA
	u8 expire;			/* When 0, remote pagesets are drained */
#endif
	short free_count;		/* consecutive free count */

	/* Lists of pages, one per migrate type stored on the pcp-lists */
	struct list_head lists[NR_PCP_LISTS];
} ____cacheline_aligned_in_smp;
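
And in case it helps, what I have in mind by "bringing back a separate THP
PCP list" is mechanically something like the below. This is an untested
sketch, not a proper patch; the context lines are from memory, and the
routing policy in order_to_pindex() is only illustrative:

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define NR_PCP_THP 1
+#define NR_PCP_THP 2
 #else
 #define NR_PCP_THP 0
 #endif

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
 static inline unsigned int order_to_pindex(int migratetype, int order)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	if (order > PAGE_ALLOC_COSTLY_ORDER) {
 		VM_BUG_ON(order != HPAGE_PMD_ORDER);
-		return NR_LOWORDER_PCP_LISTS;
+		/*
+		 * Illustrative policy: movable THPs (CMA pages are freed
+		 * to the pcp as MIGRATE_MOVABLE) get the second THP list,
+		 * everything else keeps the first one.
+		 */
+		return NR_LOWORDER_PCP_LISTS +
+		       (migratetype == MIGRATE_MOVABLE);
 	}

pindex_to_order() would need the matching "pindex >= NR_LOWORDER_PCP_LISTS"
check, and whether this actually closes the PF_MEMALLOC_PIN hole depends on
the migratetype the pinned allocation asks for, so take it only as a
starting point for discussion.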



Thread overview: 23+ messages
2024-06-04  9:14 yangge1116
2024-06-04 12:01 ` Baolin Wang
2024-06-04 12:36   ` yangge1116
2024-06-06  3:06     ` Baolin Wang
2024-06-06  9:10       ` yangge1116
2024-06-17 10:43       ` Barry Song
2024-06-17 11:36         ` Baolin Wang
2024-06-17 11:55           ` Barry Song [this message]
2024-06-18  3:31             ` yangge1116
2024-06-17 10:26 ` Barry Song
2024-06-17 12:47   ` yangge1116
2024-06-18  1:34     ` yangge1116
2024-06-18  1:55       ` Barry Song
2024-06-18  3:31         ` yangge1116
2024-06-18  4:10           ` Barry Song
2024-06-18  5:49             ` yangge1116
2024-06-18  6:55             ` yangge1116
2024-06-18  6:58               ` Barry Song
2024-06-18  7:51                 ` yangge1116
2024-06-19  5:34                   ` Ge Yang
2024-06-19  8:20                     ` Barry Song
2024-06-19  8:35                       ` Ge Yang
2024-06-18  3:40         ` yangge1116
