linux-mm.kvack.org archive mirror
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Alexander Atanasov <alexander.atanasov@virtuozzo.com>
Cc: Jonathan Corbet <corbet@lwn.net>,
	Andrew Morton <akpm@linux-foundation.org>,
	kernel@openvz.org, David Hildenbrand <david@redhat.com>,
	Wei Liu <wei.liu@kernel.org>, Nadav Amit <namit@vmware.com>,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v1 1/2] Enable balloon drivers to report inflated memory
Date: Wed, 10 Aug 2022 05:16:30 -0400
Message-ID: <20220810051331-mutt-send-email-mst@kernel.org>
In-Reply-To: <b43de131-24dc-192c-f5f6-09bacee52a00@virtuozzo.com>

On Wed, Aug 10, 2022 at 10:50:10AM +0300, Alexander Atanasov wrote:
> Hello,
> 
> On 10.08.22 9:05, Michael S. Tsirkin wrote:
> > On Wed, Aug 10, 2022 at 08:54:52AM +0300, Alexander Atanasov wrote:
> > > On 9.08.22 13:32, Michael S. Tsirkin wrote:
> > > > On Tue, Aug 09, 2022 at 12:49:32PM +0300, Alexander Atanasov wrote:
> > > > > @@ -153,6 +156,14 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > > >    		    global_zone_page_state(NR_FREE_CMA_PAGES));
> > > > >    #endif
> > > > > +#ifdef CONFIG_MEMORY_BALLOON
> > > > > +	inflated_kb = atomic_long_read(&mem_balloon_inflated_kb);
> > > > > +	if (inflated_kb >= 0)
> > > > > +		seq_printf(m,  "Inflated(total): %8ld kB\n", inflated_kb);
> > > > > +	else
> > > > > +		seq_printf(m,  "Inflated(free): %8ld kB\n", -inflated_kb);
> > > > > +#endif
> > > > > +
> > > > >    	hugetlb_report_meminfo(m);
> > > > >    	arch_report_meminfo(m);
> > > > 
> > > > 
> > > > This seems too baroque for my taste.
> > > > Why not just have two counters for the two purposes?
> > > 
> > > I agree it is not good but it reflects the current situation.
> > > Drivers account in only one way - either used or total - which I don't like.
> > > So, to save space and to avoid the possibility that some driver starts to use
> > > both at the same time, I suggest there be only one value.
> > 
> > I don't see what would be wrong if some driver used both
> > at some point.
> 
> If you don't see what's wrong with using both, I might as well add
> Cached and Buffers - the next hypervisor might want to use them, or any other
> field at its discretion, leaving userspace the fun of figuring it out?

Assuming you document what these mean, sure.

> A single definitive value is much better and clearer from the user's perspective,
> and meminfo is exactly for the users.

Not really, the negative value trick is anything but clear.
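
Spelled out, the convention in the quoted hunk amounts to something like the
sketch below; the helper and its caller are hypothetical, not part of the
patch. A positive counter value is printed as "Inflated(total)", a negative
one as "Inflated(free)", so a single atomic has to carry both the amount and
the accounting mode in its sign:

	/* mem_balloon_inflated_kb is the atomic_long_t added by the patch */
	static void balloon_account_inflated(long kb, bool report_as_free)
	{
		/* "free"-accounting drivers subtract, "total"-accounting ones add */
		atomic_long_add(report_as_free ? -kb : kb,
				&mem_balloon_inflated_kb);
	}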

> If a driver for some weird reason needs to do both, that is a whole new topic
> that I don't want to go into. The good news is that currently no such driver
> exists.
>
>
> > 
> > > 
> > > > And is there any value in having this atomic?
> > > > We want a consistent value but just READ_ONCE seems sufficient ...
> > > 
> > > I do not see this as only a value that is going to be displayed.
> > > I tried to be defensive here and to avoid premature optimization.
> > > One possible scenario that will need it is the OOM killer (using the value)
> > > vs. balloon deflate-on-oom, and any other user of that value will likely
> > > need it atomic too. Drivers use spinlocks for their calculations; they might
> > > find a way to reduce the spinlock usage and rely on the atomic instead.
> > > Making it a plain long could only bring bugs without benefits,
> > > and it is not on a fast path, so I prefer to be safe.
> > 
> > Well, we do not normally spread atomics around just because we
> > can; it does not magically make the code safe.
> > If this needs atomics, we need to document why.
> 
> Of course it does not. In one of your comments on my other patches you said
> you do not like patches that add a line only to remove it in the next patch.
> To avoid that I used an atomic - if at some point it is clear it is not
> required, I would be happy to change it, but it is more likely to be needed
> than not. So I will probably have to document it instead.
> 
> At this point the decision whether it should or should not be in meminfo
> is more important - if the general opinion is positive, I will address the
> technical details.

Not up to me, you need ack from linux-mm guys for that.
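
On the atomic question above, the two read styles under discussion would look
roughly like this on the meminfo side; the plain-long variant is only a sketch
of the READ_ONCE alternative, not what the patch does:

	static atomic_long_t inflated_kb_atomic;	/* as posted */
	static long inflated_kb_plain;			/* hypothetical alternative */

	static long read_inflated_kb(bool use_atomic)
	{
		/* both give a consistent snapshot of a display-only value;
		 * the atomic only pays off once other users need
		 * read-modify-write under concurrency
		 */
		return use_atomic ? atomic_long_read(&inflated_kb_atomic)
				  : READ_ONCE(inflated_kb_plain);
	}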

> -- 
> Regards,
> Alexander Atanasov




Thread overview: 8+ messages
     [not found] <7bfac48d-2e50-641b-6523-662ea4df0240@virtuozzo.com>
2022-08-09  9:49 ` Alexander Atanasov
2022-08-09 10:32   ` Michael S. Tsirkin
2022-08-10  5:54     ` Alexander Atanasov
2022-08-10  6:05       ` Michael S. Tsirkin
2022-08-10  7:50         ` Alexander Atanasov
2022-08-10  9:16           ` Michael S. Tsirkin [this message]
2022-08-10  3:05   ` Muchun Song
2022-08-10  5:14     ` Alexander Atanasov
