From: Muchun Song <muchun.song@linux.dev>
To: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>,
	linux-kernel@vger.kernel.org,
	Linux Memory Management List <linux-mm@kvack.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Muchun Song <songmuchun@bytedance.com>,
	Matthew Wilcox <willy@infradead.org>,
	Mina Almasry <almasrymina@google.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	hughd@google.com, tsahu@linux.ibm.com, jhubbard@nvidia.com,
	David Hildenbrand <david@redhat.com>
Subject: Re: [PATCH mm-unstable v5 01/10] mm: add folio dtor and order setter functions
Date: Wed, 7 Dec 2022 12:11:56 +0800	[thread overview]
Message-ID: <4161AF1A-9508-4DF8-B756-FEB476EB32B5@linux.dev> (raw)
In-Reply-To: <Y5ALigw0kUO/B3z2@monkey>



> On Dec 7, 2022, at 11:42, Mike Kravetz <mike.kravetz@oracle.com> wrote:
> 
> On 12/07/22 11:34, Muchun Song wrote:
>> 
>> 
>>> On Nov 30, 2022, at 06:50, Sidhartha Kumar <sidhartha.kumar@oracle.com> wrote:
>>> 
>>> Add folio equivalents for set_compound_order() and set_compound_page_dtor().
>>> 
>>> Also remove extra newlines introduced by "mm/hugetlb: convert
>>> move_hugetlb_state() to folios" and "mm/hugetlb_cgroup: convert
>>> hugetlb_cgroup_uncharge_page() to folios".
>>> 
>>> Suggested-by: Mike Kravetz <mike.kravetz@oracle.com>
>>> Suggested-by: Muchun Song <songmuchun@bytedance.com>
>>> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
>>> ---
>>> include/linux/mm.h | 16 ++++++++++++++++
>>> mm/hugetlb.c       |  4 +---
>>> 2 files changed, 17 insertions(+), 3 deletions(-)
>>> 
>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index a48c5ad16a5e..2bdef8a5298a 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -972,6 +972,13 @@ static inline void set_compound_page_dtor(struct page *page,
>>>  	page[1].compound_dtor = compound_dtor;
>>>  }
>>> 
>>> +static inline void folio_set_compound_dtor(struct folio *folio,
>>> +					enum compound_dtor_id compound_dtor)
>>> +{
>>> +	VM_BUG_ON_FOLIO(compound_dtor >= NR_COMPOUND_DTORS, folio);
>>> +	folio->_folio_dtor = compound_dtor;
>>> +}
>>> +
>>>  void destroy_large_folio(struct folio *folio);
>>> 
>>>  static inline int head_compound_pincount(struct page *head)
>>> @@ -987,6 +994,15 @@ static inline void set_compound_order(struct page *page, unsigned int order)
>>>  #endif
>>>  }
>>> 
>>> +static inline void folio_set_compound_order(struct folio *folio,
>>> +					unsigned int order)
>>> +{
>>> +	folio->_folio_order = order;
>>> +#ifdef CONFIG_64BIT
>>> +	folio->_folio_nr_pages = order ? 1U << order : 0;
>> 
>> It seems that you expect the caller to pass 0 as the order. However, the
>> ->_folio_nr_pages and ->_folio_order fields are invalid for order-0 pages,
>> so they should not be touched. This should instead be:
>> 
>> static inline void folio_set_compound_order(struct folio *folio,
>>     unsigned int order)
>> {
>> 	if (!folio_test_large(folio))
>> 		return;
>> 
>> 	folio->_folio_order = order;
>> #ifdef CONFIG_64BIT
>> 	folio->_folio_nr_pages = 1U << order;
>> #endif
>> }
> 
> I believe this was changed to accommodate the code in
> __destroy_compound_gigantic_page().  It is used in a subsequent patch.
> Here is the v6.0 version of the routine.

Thanks for your clarification.

> 
> static void __destroy_compound_gigantic_page(struct page *page,
> 					unsigned int order, bool demote)
> {
> 	int i;
> 	int nr_pages = 1 << order;
> 	struct page *p = page + 1;
> 
> 	atomic_set(compound_mapcount_ptr(page), 0);
> 	atomic_set(compound_pincount_ptr(page), 0);
> 
> 	for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
> 		p->mapping = NULL;
> 		clear_compound_head(p);
> 		if (!demote)
> 			set_page_refcounted(p);
> 	}
> 
> 	set_compound_order(page, 0);
> #ifdef CONFIG_64BIT
> 	page[1].compound_nr = 0;
> #endif
> 	__ClearPageHead(page);
> }
> 
> 
> It might have been better to change this set_compound_order() call to
> folio_set_compound_order() in this patch.
> 

Agreed. It has confused me a lot. I suggest changing the code to the
following. The folio_test_large() check is kept to stop unexpected
callers from writing out of bounds on a non-large folio.

static inline void folio_set_compound_order(struct folio *folio,
					    unsigned int order)
{
	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
	// or
	// if (!folio_test_large(folio))
	// 	return;

	folio->_folio_order = order;
#ifdef CONFIG_64BIT
	folio->_folio_nr_pages = order ? 1U << order : 0;
#endif
}
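
As an illustration only (a rough sketch, not code from this series; the
details below are hypothetical), the gigantic-page destroy path could
then be converted along these lines, and the order-0 call at the end is
exactly why the helper keeps the "order ? 1U << order : 0" branch:

static void __destroy_compound_gigantic_folio(struct folio *folio,
					unsigned int order, bool demote)
{
	int i;
	int nr_pages = 1 << order;
	struct page *p;

	/* Mapcount/pincount resets are omitted here for brevity. */

	/* Give each tail page back its own identity. */
	for (i = 1; i < nr_pages; i++) {
		p = folio_page(folio, i);
		p->mapping = NULL;
		clear_compound_head(p);
		if (!demote)
			set_page_refcounted(p);
	}

	/*
	 * Reset the order while the folio still tests as large, i.e.
	 * before the head flag is cleared, so the VM_BUG_ON_FOLIO()
	 * above is not triggered.
	 */
	folio_set_compound_order(folio, 0);
	__folio_clear_head(folio);
}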

Thanks.





Thread overview: 33+ messages
2022-11-29 22:50 [PATCH mm-unstable v5 00/10] convert core hugetlb functions to folios Sidhartha Kumar
2022-11-29 22:50 ` [PATCH mm-unstable v5 01/10] mm: add folio dtor and order setter functions Sidhartha Kumar
2022-12-07  0:18   ` Mike Kravetz
2022-12-07  3:34   ` Muchun Song
2022-12-07  3:42     ` Mike Kravetz
2022-12-07  4:11       ` Muchun Song [this message]
2022-12-07 18:12         ` Mike Kravetz
2022-12-07 18:49           ` Sidhartha Kumar
2022-12-07 19:05             ` Sidhartha Kumar
2022-12-07 19:25               ` Mike Kravetz
2022-12-08  2:19                 ` Muchun Song
2022-12-08  2:31                   ` John Hubbard
2022-12-08  4:44                     ` Muchun Song
2022-12-12 18:34         ` David Hildenbrand
2022-12-12 18:50           ` Sidhartha Kumar
2022-11-29 22:50 ` [PATCH mm-unstable v5 02/10] mm/hugetlb: convert destroy_compound_gigantic_page() to folios Sidhartha Kumar
2022-12-07  0:32   ` Mike Kravetz
2022-11-29 22:50 ` [PATCH mm-unstable v5 03/10] mm/hugetlb: convert dissolve_free_huge_page() " Sidhartha Kumar
2022-12-07  0:52   ` Mike Kravetz
2022-11-29 22:50 ` [PATCH mm-unstable v5 04/10] mm/hugetlb: convert remove_hugetlb_page() " Sidhartha Kumar
2022-12-07  1:43   ` Mike Kravetz
2022-11-29 22:50 ` [PATCH mm-unstable v5 05/10] mm/hugetlb: convert update_and_free_page() " Sidhartha Kumar
2022-12-07  2:02   ` Mike Kravetz
2022-11-29 22:50 ` [PATCH mm-unstable v5 06/10] mm/hugetlb: convert add_hugetlb_page() to folios and add hugetlb_cma_folio() Sidhartha Kumar
2022-12-07 18:38   ` Mike Kravetz
2022-11-29 22:50 ` [PATCH mm-unstable v5 07/10] mm/hugetlb: convert enqueue_huge_page() to folios Sidhartha Kumar
2022-12-07 18:46   ` Mike Kravetz
2022-11-29 22:50 ` [PATCH mm-unstable v5 08/10] mm/hugetlb: convert free_gigantic_page() " Sidhartha Kumar
2022-12-07 19:04   ` Mike Kravetz
2022-11-29 22:50 ` [PATCH mm-unstable v5 09/10] mm/hugetlb: convert hugetlb prep functions " Sidhartha Kumar
2022-12-07 19:35   ` Mike Kravetz
2022-11-29 22:50 ` [PATCH mm-unstable v5 10/10] mm/hugetlb: change hugetlb allocation functions to return a folio Sidhartha Kumar
2022-12-07 22:01   ` Mike Kravetz
