From: Lorenzo Stoakes <lstoakes@gmail.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-mm@kvack.org, linux-perf-users@vger.kernel.org,
	mingo@redhat.com, acme@kernel.org, urezki@gmail.com,
	hch@infradead.org
Subject: Re: [RFC PATCH 1/4] perf: Convert perf_mmap_(alloc,free)_page to folios
Date: Fri, 22 Sep 2023 21:19:43 +0100
Message-ID: <0d8c508d-91c5-435e-9880-7eedebce2219@lucifer.local>
In-Reply-To: <20230913133105.GF22758@noisy.programming.kicks-ass.net>

On Wed, Sep 13, 2023 at 03:31:05PM +0200, Peter Zijlstra wrote:
> On Sun, Aug 27, 2023 at 08:33:39AM +0100, Lorenzo Stoakes wrote:
>
> > > @@ -785,25 +785,23 @@ __perf_mmap_to_page(struct perf_buffer *rb, unsigned long pgoff)
> > >  	return virt_to_page(rb->data_pages[pgoff - 1]);
> > >  }
> > >
> > > -static void *perf_mmap_alloc_page(int cpu)
> > > +static void *perf_mmap_alloc_page(int node)
> >
> > Nitty point but since we're dealing with folios here maybe rename to
> > perf_mmap_alloc_folio()?
>
> Since it's an explicit order-0 allocation, does that really make sense?
>

True, since it's an explicit order-0 allocation it does ultimately yield a
single page and never touches the folio metadata, so it's not the end of
the world to keep the name as-is (it was a very nitty point, after all!)
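
For what it's worth, a minimal sketch of that point (purely illustrative,
not part of the patch): an order-0 folio is exactly one page, so
folio_address() on it returns the same address page_address() would for
its single constituent page:

	/*
	 * Illustrative only: for an order-0 allocation the folio *is*
	 * the page, so the two addresses below are always identical.
	 */
	struct folio *folio = __folio_alloc_node(GFP_KERNEL | __GFP_ZERO,
						 0, node);

	if (folio)
		WARN_ON(folio_address(folio) != page_address(&folio->page));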

> > >  {
> > > -	struct page *page;
> > > -	int node;
> > > +	struct folio *folio;
> > >
> > > -	node = (cpu == -1) ? cpu : cpu_to_node(cpu);
> > > -	page = alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, 0);
> > > -	if (!page)
> > > +	folio = __folio_alloc_node(GFP_KERNEL | __GFP_ZERO, 0, node);
> > > +	if (!folio)
> > >  		return NULL;
> > >
> > > -	return page_address(page);
> > > +	return folio_address(folio);
> > >  }
>
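
For completeness, a hedged sketch of the caller side (this hunk doesn't
show it, so the exact shape is an assumption on my part): with the
parameter changed from cpu to node, the cpu_to_node() translation
presumably moves up into the caller, something like:

	/*
	 * Hypothetical caller-side change, not taken from this hunk:
	 * resolve the NUMA node once, then reuse it for each order-0
	 * allocation.
	 */
	int node = (cpu == -1) ? cpu : cpu_to_node(cpu);

	rb->user_page = perf_mmap_alloc_page(node);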



Thread overview: 19+ messages
2023-08-21 20:20 [RFC PATCH 0/4] Convert perf ringbuffer " Matthew Wilcox (Oracle)
2023-08-21 20:20 ` [RFC PATCH 1/4] perf: Convert perf_mmap_(alloc,free)_page " Matthew Wilcox (Oracle)
2023-08-23  7:28   ` Yin Fengwei
2023-08-27  7:33   ` Lorenzo Stoakes
2023-09-13 13:31     ` Peter Zijlstra
2023-09-22 20:19       ` Lorenzo Stoakes [this message]
2023-08-21 20:20 ` [RFC PATCH 2/4] mm: Add vmalloc_user_node() Matthew Wilcox (Oracle)
2023-09-13 13:32   ` Peter Zijlstra
2023-08-21 20:20 ` [RFC PATCH 3/4] perf: Use vmalloc_to_folio() Matthew Wilcox (Oracle)
2023-08-22  7:54   ` Zhu Yanjun
2023-08-23  7:30   ` Yin Fengwei
2023-08-23 12:16     ` Matthew Wilcox
2023-08-27  7:22   ` Lorenzo Stoakes
2023-08-21 20:20 ` [RFC PATCH 4/4] perf: Use folios for the aux ringbuffer & pagefault path Matthew Wilcox (Oracle)
2023-08-23  7:38   ` Yin Fengwei
2023-08-23 12:23     ` Matthew Wilcox
2023-08-23 12:45       ` Yin, Fengwei
2023-08-27  8:01   ` Lorenzo Stoakes
2023-08-27 11:50     ` Matthew Wilcox
