linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@suse.com>
To: 贺中坤 <hezhongkun.hzk@bytedance.com>
Cc: minchan@kernel.org, senozhatsky@chromium.org, david@redhat.com,
	yosryahmed@google.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [External] Re: [RFC PATCH 0/2] zram: objects charge to mem_cgroup
Date: Mon, 10 Jul 2023 12:41:17 +0200	[thread overview]
Message-ID: <ZKvgTTqwTnUXiY3m@dhcp22.suse.cz> (raw)
In-Reply-To: <CACSyD1NX7sfPu2Wi1ep0gJ-wt1O8-+++321Uhw4YK1Uz4rxj-g@mail.gmail.com>

On Mon 10-07-23 17:35:07, 贺中坤 wrote:
> On Fri, Jul 7, 2023 at 10:44 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > Why do we want/need that?
> 
> Applications can currently escape their cgroup memory limits when zram
> is enabled, whether zram is used indirectly (as a swap device) or
> directly (as a block device). This patch adds memcg accounting to fix
> that.
> 
> Zram and zswap have the same problem; please refer to the
> corresponding zswap patch [1].
> 
> [1] https://lore.kernel.org/all/20220510152847.230957-7-hannes@cmpxchg.org/
> 
> >
> > > summarize the previous discussion:
> > > [1] As far as I can see, Michal's concern is that the charges are
> > > going to fail and swapout would fail.
> > >
> > > The indirect use of zram happens in a PF_MEMALLOC context, so the
> > > charge will always succeed.
> >
> > No, this was not my concern. Please read through that more carefully. My
> > concern was that the hard limit reclaim would fail. PF_MEMALLOC will not
> > help in that case as this is not a global reclaim path.
> >
> 
> Sorry, I expressed that poorly. I meant that the hard limit reclaim
> would fail. As far as I can see, PF_MEMALLOC is used not only in the
> global reclaim path but also in mem_cgroup reclaim:
> 
> try_charge_memcg
>   try_to_free_mem_cgroup_pages
>      noreclaim_flag = memalloc_noreclaim_save();
>      nr_reclaimed = do_try_to_free_pages(zonelist, &sc);
>      memalloc_noreclaim_restore(noreclaim_flag);

My bad, I had overlooked this. I forgot about commit 89a2848381b5f.

> > Also let's assume you allow swapout charges to succeed similar to
> > PF_MEMALLOC. That would mean breaching the limit in an unbounded way,
> > no?
> > --
> 
> Charging a compressed page means one full page will be freed, and the
> compressed page is less than or equal in size to the page being freed.
> So the breach is not unbounded.

OK, this is an important detail to mention. Also, have you tried to get
some numbers on how much excess happens for a mixed bag of compressible
memory?
-- 
Michal Hocko
SUSE Labs


Thread overview: 9+ messages
2023-07-07  4:46 Zhongkun He
2023-07-07  7:57 ` Michal Hocko
2023-07-07 14:25   ` [External] " 贺中坤
2023-07-07 14:44     ` Michal Hocko
2023-07-10  9:35       ` 贺中坤
2023-07-10 10:41         ` Michal Hocko [this message]
2023-07-10 13:16           ` 贺中坤
2023-07-10 13:34             ` Michal Hocko
2023-07-10 15:02               ` 贺中坤
