From: David Hildenbrand <david@redhat.com>
To: David Howells <dhowells@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>, John Hubbard <jhubbard@nvidia.com>,
Al Viro <viro@zeniv.linux.org.uk>,
Christoph Hellwig <hch@infradead.org>,
Matthew Wilcox <willy@infradead.org>, Jan Kara <jack@suse.cz>,
Jason Gunthorpe <jgg@nvidia.com>,
Logan Gunthorpe <logang@deltatee.com>,
Jeff Layton <jlayton@kernel.org>,
linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [GIT PULL] iov_iter: Improve page extraction (pin or just list)
Date: Tue, 31 Jan 2023 14:48:47 +0100
Message-ID: <88d50843-9aa6-7930-433d-9b488857dc14@redhat.com>
In-Reply-To: <3791872.1675172490@warthog.procyon.org.uk>
On 31.01.23 14:41, David Howells wrote:
> David Hildenbrand <david@redhat.com> wrote:
>
>>>> percpu counters maybe - add them up at the point of viewing?
>>> They are percpu, see my last email. But for every 108 changes (on
>>> my system), they will do two atomic_long_adds(). So not very
>>> useful for anything but low frequency modifications.
>>>
>>
>> Can we just treat the whole acquired/released accounting as a debug mechanism
>> to detect missing releases and do it only for debug kernels?
>>
>>
>> The pcpu counter is an s8, so we have to flush on a regular basis and cannot
>> really defer it any longer ... but I'm curious if it would be of any help to
>> only have a single PINNED counter that goes into both directions (inc/dec on
>> pin/release), to reduce the flushing.
>>
>> Of course, once we pin/release more than ~108 pages in one go or we switch
>> CPUs frequently it won't be that much of a help ...
>
> What are the stats actually used for? Is it just debugging, or do we actually
> have users for them (control groups spring to mind)?
As the counters really only track "how many pinning events" vs. "how many
unpinning events", I assume they are only meant for debugging. For example,
pinning the same page twice is accounted as two events, not as "a single
page is pinned". A rough sketch of what I mean is below.
--
Thanks,
David / dhildenb