From: Tyler Sanderson
Date: Wed, 5 Feb 2020 13:44:49 -0800
Subject: Re: Balloon pressuring page cache
To: Alexander Duyck
Cc: "Michael S. Tsirkin", David Hildenbrand, "Wang, Wei W",
 virtualization@lists.linux-foundation.org, David Rientjes,
 linux-mm@kvack.org, Michal Hocko, namit@vmware.com

On Wed, Feb 5, 2020 at 11:22 AM Alexander Duyck
<alexander.h.duyck@linux.intel.com> wrote:

> On Wed, 2020-02-05 at 11:01 -0800, Tyler Sanderson wrote:
> >
> > On Tue, Feb 4, 2020 at 10:57 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > On Tue, Feb 04, 2020 at 03:58:51PM -0800, Tyler Sanderson wrote:
> > > >
> > > >  1. It is last-resort, which means the system has already gone through
> > > >     heroics to prevent OOM. Those heroic reclaim efforts are expensive
> > > >     and impact application performance.
> > >
> > > That's *exactly* what "deflate on OOM" suggests.
> > >
> > >
> > > It seems there are some use cases where "deflate on OOM" is desired and
> > > others where "deflate on pressure" is desired.
> > > This suggests adding a new feature bit "DEFLATE_ON_PRESSURE" that
> > > registers the shrinker, and reverting DEFLATE_ON_OOM to use the OOM
> > > notifier callback.
> > >
> > > This lets users configure the balloon for their use case.
> >
> > You want the old behavior back, so why should we introduce a new one? Or
> > am I missing something? (you did want us to revert to old handling, no?)
> >
> > Reverting actually doesn't help me because this has been the behavior
> > since Linux 4.19 which is already widely in use. So my device
> > implementation needs to handle the shrinker behavior anyways. I started
> > this conversation to ask what the intended device implementation was.
> >
> > I think there are reasonable device implementations that would prefer
> > the shrinker behavior (it turns out that mine doesn't).
> > For example, an implementation that slowly inflates the balloon for the
> > purpose of memory overcommit. It might leave the balloon inflated and
> > expect any memory pressure (including page cache usage) to deflate the
> > balloon as a way to dynamically right-size the balloon.
>
> So just to make sure we understand, what exactly does your
> implementation do?

My implementation is for the purposes of opportunistic memory overcommit.
We always want to give balloon memory back to the guest rather than causing
an OOM, so we use DEFLATE_ON_OOM. We leave the balloon at size 0 while
monitoring memory statistics reported on the stats queue. When we see there
is an opportunity for significant savings then we inflate the balloon to a
desired size (possibly including pressuring the page cache), and then
immediately deflate back to size 0. The host pages backing the guest pages
are unbacked during the inflation process, so the memory footprint of the
guest is smaller after this inflate/deflate cycle.
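To make that concrete, here is a rough sketch of what such a host-side
policy cycle could look like. This is not our actual implementation; the
thresholds, the domain name, and the use of libvirt's stats/balloon-target
calls are only illustrative, and it assumes the balloon stats period has
been enabled (e.g. "virsh dommemstat <domain> --period 5") and that the
guest reports the disk-cache stat:

    /* Illustrative host-side policy: read the guest's balloon stats, and if
     * enough memory looks reclaimable, inflate the balloon to claim it and
     * then immediately deflate back to restore DEFLATE_ON_OOM headroom. */
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        virDomainPtr dom = conn ? virDomainLookupByName(conn, "guest-vm") : NULL;
        virDomainMemoryStatStruct stats[VIR_DOMAIN_MEMORY_STAT_NR];
        unsigned long long unused_kib = 0, caches_kib = 0, actual_kib = 0;

        if (!dom)
            return 1;

        /* These values are what the guest balloon driver reports on the
         * stats queue, surfaced by libvirt as memory stats. */
        int n = virDomainMemoryStats(dom, stats, VIR_DOMAIN_MEMORY_STAT_NR, 0);
        for (int i = 0; i < n; i++) {
            if (stats[i].tag == VIR_DOMAIN_MEMORY_STAT_UNUSED)
                unused_kib = stats[i].val;          /* guest free memory   */
            else if (stats[i].tag == VIR_DOMAIN_MEMORY_STAT_DISK_CACHES)
                caches_kib = stats[i].val;          /* guest page cache    */
            else if (stats[i].tag == VIR_DOMAIN_MEMORY_STAT_ACTUAL_BALLOON)
                actual_kib = stats[i].val;          /* current memory size */
        }

        unsigned long long reclaimable_kib = unused_kib + caches_kib;
        if (actual_kib && reclaimable_kib > 512 * 1024 &&   /* arbitrary 512 MiB */
            reclaimable_kib < actual_kib) {
            /* Inflate: lowering the target makes the guest fill the balloon
             * and lets the host unback those pages. */
            virDomainSetMemory(dom, actual_kib - reclaimable_kib);
            /* ... (wait for the balloon to actually reach the target) ... */
            /* Deflate: restore the original size so the guest is not left
             * relying on DEFLATE_ON_OOM for ordinary allocations. */
            virDomainSetMemory(dom, actual_kib);
        }

        virDomainFree(dom);
        virConnectClose(conn);
        return 0;
    }

A real policy would of course rate-limit this and wait for the balloon to
reach its target before deflating, but that is the shape of the cycle.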
> This sounds a lot like free page reporting, except I haven't decided on
> the best way to exert the pressure yet.

As you mention below, the advantage of free page reporting is that it
doesn't trigger the OOM path. So I'd strongly advocate that the
corresponding mechanism to shrink page cache should also not trigger the
OOM path. That suggests something like the drop_caches API we talked about
earlier in the thread.

> You might want to take a look at my patch set here:
> https://lore.kernel.org/lkml/20200122173040.6142.39116.stgit@localhost.localdomain/

Yes, I'm strongly in favor of your patch set's goals.

> Instead of inflating a balloon all it is doing is identifying what pages
> are currently free and have not already been reported to the host and
> reports those via the balloon driver. The advantage is that we can do the
> reporting without causing any sort of OOM errors in most cases since we
> are just pulling and reporting a small set of pages at a time.
>
> > Two reasons I didn't go with the above implementation:
> > 1. I need to support guests before Linux 4.19 which don't have the
> >    shrinker behavior.
> > 2. Memory in the balloon does not appear as "available" in /proc/meminfo
> >    even though it is freeable. This is confusing to users, but isn't a
> >    deal breaker.
> >
> > If we added a DEFLATE_ON_PRESSURE feature bit that indicated shrinker
> > API support then that would resolve reason #1 (ideally we would backport
> > the bit to 4.19).
>
> We could declare lack of pagecache pressure with DEFLATE_ON_OOM a
> regression and backport the revert but not I think the new
> DEFLATE_ON_PRESSURE.

To be clear, the page cache can still be pressured. When the balloon driver
allocates memory and causes reclaim, some of that memory comes from the
balloon (bad) but some of that comes from the page cache (good).

> I think the issue is that you aren't able to maintain the page cache
> pressure

Right. My implementation can shrink the page cache to whatever size is
desired. It just takes a lot more (10x) time and CPU on guests using the
shrinker API because of this back and forth.

> because your balloon is deflating as well which in turn is relieving the
> pressure. Ideally we would want to have some way of putting the pressure
> on the page cache without having to put enough stress on the memory though
> to get to the point of encountering OOM which is one of the reasons why I
> suspect the balloon driver does the allocation with things in place so
> that it will stop when it cannot fulfill the allocation and is willing to
> wait on other threads to trigger the reclaim.
>
> > In any case, the shrinker behavior when pressuring page cache is more of
> > an inefficiency than a bug. It's not clear to me that it necessitates
> > reverting. If there were/are reasons to be on the shrinker interface
> > then I think those carry similar weight as the problem itself.
> >
> > I consider virtio-balloon to this very day a big hack. And I don't see
> > it getting better with new config knobs. Having that said, the
> > technologies that are candidates to replace it (free page reporting,
> > taming the guest page cache, etc.) are still not ready - so we'll have
> > to stick with it for now :( .
> >
> > > I'm actually not sure how you would safely do memory overcommit without
> > > DEFLATE_ON_OOM. So I think it unlocks a huge use case.
> >
> > Using better suited technologies that are not ready yet (well, some form
> > of free page reporting is available under IBM z already but in a
> > proprietary form) ;) Anyhow, I remember that DEFLATE_ON_OOM only makes
> > it less likely to crash your guest, but not that you are safe to squeeze
> > the last bit out of your guest VM.
> >
> > Can you elaborate on the danger of DEFLATE_ON_OOM? I haven't seen any
> > problems in testing but I'd really like to know about the dangers.
> > Is there a difference in safety between the OOM notifier callback and
> > the shrinker API?
>
> It's not about dangers as such. It's just that when Linux hits OOM
> all kinds of error paths are being hit, latent bugs start triggering,
> latency goes up drastically.

Doesn't this suggest that the shrinker is preferable to the OOM notifier in
the case that we're actually OOMing (with DEFLATE_ON_OOM)?

> I think it all depends on the use case. For the use case you describe
> going to the shrinker might be preferable as you are wanting to exert just
> a light bit of pressure to start some page cache reclaim. However if you
> are wanting to make the deflation a last resort sort of thing then I would
> think the OOM would make more sense.

I agree that the desired behavior depends on the use case. But even for the
case that deflation is a last resort, it seems like we'd like to use the
shrinker API rather than the OOM notifier since the OOM notifier is more
likely to have bugs/errors. The shrinker API doesn't support this
functionality yet, but you could imagine configuring the API so that the
balloon is reclaimed from less frequently or only when shrinking other
sources is becoming difficult. That way we're not actually in the
error-prone OOM path.
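To make the two hook points concrete, here is a simplified sketch (not the
in-tree virtio_balloon code) of what gating them on feature bits could look
like, using the shrinker registration API as it exists in the 5.x kernels
being discussed. VIRTIO_BALLOON_F_DEFLATE_ON_PRESSURE is the hypothetical
new bit from above, and leak_balloon() stands in for the driver's existing
deflate helper:

    /* Sketch only: register the shrinker when the (hypothetical)
     * DEFLATE_ON_PRESSURE bit is negotiated, and fall back to the OOM
     * notifier for DEFLATE_ON_OOM, as proposed earlier in the thread. */
    #include <linux/kernel.h>
    #include <linux/notifier.h>
    #include <linux/oom.h>
    #include <linux/shrinker.h>
    #include <linux/virtio.h>
    #include <linux/virtio_config.h>
    #include <linux/virtio_balloon.h>

    #define VIRTIO_BALLOON_F_DEFLATE_ON_PRESSURE 7   /* hypothetical bit */
    #define OOM_NR_PAGES 256             /* pages to deflate per OOM event */

    struct virtio_balloon {
        struct virtio_device *vdev;
        struct shrinker shrinker;
        struct notifier_block oom_nb;
        unsigned int num_pages;          /* pages currently in the balloon */
        /* ... inflate/deflate state ... */
    };

    /* Stand-in for the driver's existing deflate path; returns pages freed. */
    unsigned long leak_balloon(struct virtio_balloon *vb, unsigned long num);

    static unsigned long balloon_shrink_count(struct shrinker *s,
                                              struct shrink_control *sc)
    {
        struct virtio_balloon *vb = container_of(s, struct virtio_balloon,
                                                 shrinker);
        return vb->num_pages;            /* how much we could give back */
    }

    static unsigned long balloon_shrink_scan(struct shrinker *s,
                                             struct shrink_control *sc)
    {
        struct virtio_balloon *vb = container_of(s, struct virtio_balloon,
                                                 shrinker);
        return leak_balloon(vb, sc->nr_to_scan);  /* deflate under pressure */
    }

    static int balloon_oom_notify(struct notifier_block *nb,
                                  unsigned long dummy, void *freed)
    {
        struct virtio_balloon *vb = container_of(nb, struct virtio_balloon,
                                                 oom_nb);

        *(unsigned long *)freed += leak_balloon(vb, OOM_NR_PAGES);
        return NOTIFY_OK;
    }

    static int balloon_register_reclaim(struct virtio_balloon *vb)
    {
        if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_PRESSURE)) {
            vb->shrinker.count_objects = balloon_shrink_count;
            vb->shrinker.scan_objects = balloon_shrink_scan;
            vb->shrinker.seeks = DEFAULT_SEEKS;
            return register_shrinker(&vb->shrinker);  /* deflate on pressure */
        }
        if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
            vb->oom_nb.notifier_call = balloon_oom_notify;
            return register_oom_notifier(&vb->oom_nb); /* deflate only on OOM */
        }
        return 0;   /* neither bit: the balloon never deflates on its own */
    }

A guest that only knows DEFLATE_ON_OOM would keep the notifier path, while
one that negotiates the new bit would opt into the shrinker, which is where
the backport question above comes from.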
> At a minimum I would think that the code needs to be reworked so that you
> either have the balloon inflating or deflating, not both at the same time.

DEFLATE_ON_OOM necessarily causes deflate activity regardless of whether
the device wants to continue inflating the balloon. Blocking the deflate
activity would cause an OOM in the guest.

> I think that is really what is at the heart of the issue for the current
> shrinker based approach since you can end up with the balloon driver
> essentially cycling pages as it is allocating them and freeing them at the
> same time.
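P.S. For reference, the drop_caches mechanism mentioned earlier in the
thread is just a procfs knob, so a guest agent acting on a host request
could shrink the page cache without involving the balloon or the OOM path
at all. Roughly like the helper below (needs CAP_SYS_ADMIN in the guest,
and only clean page cache is dropped, so dirty data has to be written back
first):

    /* Minimal guest-side helper: drop clean page cache on request. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        sync();    /* write back dirty pages so more of the cache is clean */

        int fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
        if (fd < 0) {
            perror("open /proc/sys/vm/drop_caches");
            return 1;
        }
        /* "1" drops page cache, "2" drops slab objects, "3" drops both. */
        if (write(fd, "1", 1) != 1) {
            perror("write");
            close(fd);
            return 1;
        }
        close(fd);
        return 0;
    }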