From: Yunsheng Lin <linyunsheng@huawei.com>
To: Alexander H Duyck <alexander.duyck@gmail.com>,
	<davem@davemloft.net>, <kuba@kernel.org>, <pabeni@redhat.com>
Cc: <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>, <linux-mm@kvack.org>
Subject: Re: [PATCH net-next v2 09/15] mm: page_frag: reuse MSB of 'size' field for pfmemalloc
Date: Fri, 26 Apr 2024 17:38:04 +0800
Message-ID: <efd21f1d-8c67-b060-5ad2-0d500fac2ba6@huawei.com>
In-Reply-To: <c45fdd75-44be-82a6-8e47-42bbc5ee4795@huawei.com>

On 2024/4/18 17:39, Yunsheng Lin wrote:

...

> 
>> combining the pagecnt_bias with the va. I'm wondering if it wouldn't
>> make more sense to look at putting together the structure something
>> like:
>>
>> #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
>> typedef u16 page_frag_bias_t;
>> #else
>> typedef u32 page_frag_bias_t;
>> #endif
>>
>> struct page_frag_cache {
>> 	/* page address and offset */
>> 	void *va;
> 
> Generally I agree with combining the virtual address with the
> offset, for the reasons you mentioned below.
> 
>> 	page_frag_bias_t pagecnt_bias;
>> 	u8 pfmemalloc;
>> 	u8 page_frag_order;
>> };
> 
> The issue I see with 'page_frag_order' is that we might need to do
> 'PAGE_SIZE << page_frag_order' to get the actual size, and we might
> also need to do 'size - 1' to get the size_mask if we want to mask
> out the offset from the 'va'.
> 
> For page_frag_order, we need two operations:
> size = PAGE_SIZE << page_frag_order
> size_mask = size - 1
> 
> For size_mask, it seems we only need one:
> size = size_mask + 1
> 
> And as PAGE_FRAG_CACHE_MAX_SIZE = 32K, the value fits into 15 bits
> if we use size_mask instead of size.
> 
> Does it make sense to use the layout below, so that we only need a
> bitfield when PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE on a 32-bit
> system? 'struct page_frag' already uses a similar
> '(BITS_PER_LONG > 32)' checking trick.
> 
> struct page_frag_cache {
> 	/* page address and offset */
> 	void *va;
> 
> #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32)
> 	u16 pagecnt_bias;
> 	u16 size_mask:15;
> 	u16 pfmemalloc:1;
> #else
> 	u32 pagecnt_bias;
> 	u16 size_mask;
> 	u16 pfmemalloc;
> #endif
> };
> 

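To make the size_mask arithmetic quoted above concrete, a small worked
sketch with illustrative values (assuming 4K pages; not actual kernel
code):

static void size_mask_example(void)
{
	unsigned long size, size_mask;

	/* order-based: two operations to get the mask */
	size = 4096UL << 3;	/* PAGE_SIZE << order, i.e. 32K */
	size_mask = size - 1;	/* 32767 = 0x7fff, fits in 15 bits */

	/* mask-based: one operation to get back the size */
	size = size_mask + 1;	/* 32768, i.e. 32K again */
}
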
After considering a few different layouts for 'struct page_frag_cache',
the one below seems more optimal:

struct page_frag_cache {
	/* page address & pfmemalloc & order */
	void *va;

#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32)
	u16 pagecnt_bias;
	u16 size;
#else
	u32 pagecnt_bias;
	u32 size;
#endif
};

The lower bits of 'va' are OR'ed with the page order and the pfmemalloc
bit instead of with the offset or pagecnt_bias, so that we don't have to
add extra checking to handle running out of room for the offset or
pagecnt_bias when PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE on a 32-bit
system. And since the page address, pfmemalloc bit and order stay
unchanged for the same page in the same 'page_frag_cache' instance, it
makes sense to pack them together.

Also, it seems better to replace 'offset' with 'size', which indicates
the remaining size of the cache in a 'page_frag_cache' instance. That
way we might be able to do a single 'size >= fragsz' check for the case
of the cache being sufficient, which should be the fast path, provided
we ensure size is zero when 'va' == NULL.

Something like below:

#define PAGE_FRAG_CACHE_ORDER_MASK	GENMASK(1, 0)
#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT	BIT(2)

struct page_frag_cache {
	/* page address & pfmemalloc & order */
	void *va;

#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32)
	u16 pagecnt_bias;
	u16 size;
#else
	u32 pagecnt_bias;
	u32 size;
#endif
};


static void *__page_frag_cache_refill(struct page_frag_cache *nc,
				      unsigned int fragsz, gfp_t gfp_mask,
				      unsigned int align_mask)
{
	gfp_t gfp = gfp_mask;
	struct page *page;
	void *va;

#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
	/* Ensure free_unref_page() can be used to free the page fragment */
	BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_ALLOC_COSTLY_ORDER);

	gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
		   __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
				PAGE_FRAG_CACHE_MAX_ORDER);
	if (likely(page)) {
		nc->size = PAGE_FRAG_CACHE_MAX_SIZE - fragsz;
		va = page_address(page);
		nc->va = (void *)((unsigned long)va |
				  PAGE_FRAG_CACHE_MAX_ORDER |
				  (page_is_pfmemalloc(page) ?
				   PAGE_FRAG_CACHE_PFMEMALLOC_BIT : 0));
		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE;
		return va;
	}
#endif
	page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
	if (likely(page)) {
		nc->size = PAGE_SIZE - fragsz;
		va = page_address(page);
		nc->va = (void *)((unsigned long)va |
				  (page_is_pfmemalloc(page) ?
				   PAGE_FRAG_CACHE_PFMEMALLOC_BIT : 0));
		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE;
		return va;
	}

	nc->va = NULL;
	nc->size = 0;
	return NULL;
}

void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
				 unsigned int fragsz, gfp_t gfp_mask,
				 unsigned int align_mask)
{
#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
	unsigned long page_order;
#endif
	unsigned long page_size;
	unsigned long size;
	struct page *page;
	void *va;

	size = nc->size & align_mask;
	va = nc->va;
#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
	page_order = (unsigned long)va & PAGE_FRAG_CACHE_ORDER_MASK;
	page_size = PAGE_SIZE << page_order;
#else
	page_size = PAGE_SIZE;
#endif

	if (unlikely(fragsz > size)) {
		if (unlikely(!va))
			return __page_frag_cache_refill(nc, fragsz, gfp_mask,
							align_mask);

		/* fragsz is not supposed to be bigger than PAGE_SIZE as we are
		 * allowing order 3 page allocation to fail easily under low
		 * memory conditions.
		 */
		if (WARN_ON_ONCE(fragsz > PAGE_SIZE))
			return NULL;

		page = virt_to_page(va);
		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
			return __page_frag_cache_refill(nc, fragsz, gfp_mask,
							align_mask);

		if (unlikely((unsigned long)va &
			     PAGE_FRAG_CACHE_PFMEMALLOC_BIT)) {
			free_unref_page(page, compound_order(page));
			return __page_frag_cache_refill(nc, fragsz, gfp_mask,
							align_mask);
		}

		/* OK, page count is 0, we can safely set it */
		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);

		/* reset page count bias and offset to start of new frag */
		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
		size = page_size;
	}

	va = (void *)((unsigned long)va & PAGE_MASK);
	va = va + (page_size - size);
	nc->size = size - fragsz;
	nc->pagecnt_bias--;

	return va;
}
EXPORT_SYMBOL(__page_frag_alloc_va_align);
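
For completeness, a hypothetical caller might look like below, assuming
align_mask follows the existing '-align' convention for power-of-two
alignments:

static void *example_frag_alloc(struct page_frag_cache *nc)
{
	/* grab a 256-byte, cacheline-aligned fragment; illustrative
	 * usage only, not part of the patch
	 */
	return __page_frag_alloc_va_align(nc, 256, GFP_ATOMIC,
					  -SMP_CACHE_BYTES);
}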

