From: Yosry Ahmed <yosryahmed@google.com>
To: Zhongkun He <hezhongkun.hzk@bytedance.com>
Cc: Yu Zhao <yuzhao@google.com>, minchan@kernel.org,
	senozhatsky@chromium.org, mhocko@suse.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Andrea Arcangeli <aarcange@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	Fabian Deutsch <fdeutsch@redhat.com>
Subject: Re: [RFC PATCH 1/3] zram: charge the compressed RAM to the page's memcgroup
Date: Thu, 15 Jun 2023 18:39:28 -0700	[thread overview]
Message-ID: <CAJD7tka-=ppaheVxn2-f6u0egBp8kOHYAK0bBudC62SKkZPk5w@mail.gmail.com> (raw)
In-Reply-To: <CA+PVUaTqNTSYkTy9yCFF=Y+xkimgM+3YQRF7EYr1UruesQnJrg@mail.gmail.com>

On Thu, Jun 15, 2023 at 1:57 AM Fabian Deutsch <fdeutsch@redhat.com> wrote:
>
>
> On Thu, Jun 15, 2023 at 6:59 AM Yu Zhao <yuzhao@google.com> wrote:
>>
>> On Wed, Jun 14, 2023 at 9:48 PM Zhongkun He
>> <hezhongkun.hzk@bytedance.com> wrote:
>> >
>> > The compressed RAM is currently charged to the kernel, not to
>> > any memory cgroup, which does not satisfy our usage scenario.
>> > If the memory of a task is limited by a memcg, the task will
>> > swap out memory to the zram swap device when memory is
>> > insufficient. In that case, the memory limit has no effect.
>> >
>> > So, it makes sense to charge the compressed RAM to the
>> > page's memory cgroup.
>
>
> While looking at this in the past weeks, I believe that there are two distinct problems:
> 1. Direct zram usage by a process within a cg, i.e. a process writing to a zram device
> 2. Indirect zram usage by a process within a cg via swap (described above)
>
> Both of them probably require different solutions.
> In order to fix #1, a zram device's memory usage should be accounted towards a cgroup. IMHO this is something that should be fixed.
>
> Yu Zhao and Yosry are probably much more familiar with the solution to #2.
> WRT per-cgroup swapfiles, to me this addresses #2, but I agree with Yu Zhao that there are probably better solutions to this.

Thanks Fabian for tagging me.

I am not familiar with #1, so I will speak to #2. Zhongkun, there are
a few parts that I do not understand -- hopefully you can help me out
here:

(1) If I understand correctly, this patch sets the active memcg so
that any pages allocated for a zspage are charged to the current
memcg. Yet that zspage will contain multiple compressed object slots,
not just the one used by this memcg. Aren't we overcharging the
memcg? Basically, the first memcg that happens to allocate the zspage
pays for all the objects in it, even after it stops using the zspage
completely?
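
For concreteness, here is roughly the pattern I understand the patch
to be going for (my own sketch, not the actual diff; the gfp flags
and surrounding code are illustrative):

    /*
     * Hypothetical sketch: make the owning page's memcg active around
     * the zsmalloc allocation, so any zspage backing pages allocated
     * here are charged to that memcg instead of to the kernel.
     */
    struct mem_cgroup *old_memcg;

    old_memcg = set_active_memcg(page_memcg(page));
    /* The charge only lands if the allocation passes __GFP_ACCOUNT;
     * per (2) below, the patch as posted apparently does not. */
    handle = zs_malloc(pool, comp_len, GFP_NOIO | __GFP_ACCOUNT);
    set_active_memcg(old_memcg);

The trouble is that a zspage outlives any single slot in it, so
whichever memcg is active when the zspage is allocated keeps paying
for all of its slots.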

(2) Patch 3 seems to charge the compressed slots to the memcgs, yet
this patch tries to charge the entire zspage. Aren't we double
charging the zspage? I am guessing this isn't happening because (as
Michal pointed out) we are not passing __GFP_ACCOUNT here anyway, so
this patch may be a NOP, and the actual charging comes from patch 3
only.
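
(For anyone following along: __GFP_ACCOUNT is what routes a kernel
allocation to a memcg at all; set_active_memcg() only selects which
memcg gets charged once that flag is passed. A contrived sketch:)

    a = kmalloc(sz, GFP_KERNEL);                 /* never charged */
    b = kmalloc(sz, GFP_KERNEL | __GFP_ACCOUNT); /* charged to the
                                                  * active memcg if one
                                                  * is set, otherwise
                                                  * to current's memcg */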

(3) Zswap recently implemented per-memcg charging of compressed
objects in a much simpler way. If your main interest is #2 (which is
what I understand from the commit log), it seems like zswap might
provide this already. Why can't you use zswap? Is it that zswap
requires a backing swapfile?
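
(For reference, a simplified view of zswap's store path, paraphrasing
mm/zswap.c from memory, so details may be off:)

    /* Each compressed object is charged, at its compressed size, to
     * the obj_cgroup of the page being stored, so memcgs never pay
     * for each other's objects. */
    objcg = get_obj_cgroup_from_page(page);
    if (objcg && !obj_cgroup_may_zswap(objcg))
            goto reject;    /* this memcg is over its zswap limit */
    ...
    obj_cgroup_charge_zswap(objcg, entry->length);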

Thanks!

>
> Lastly, while this patchset possibly does not address the swap issue (#2) completely, does it satisfy the needs of #1?
>
> - fabian
>
>>
>> We used to do this a long time ago, but we had per-memcg swapfiles [1]
>> to prevent compressed pages from different memcgs from sharing the
>> same zspage.
>>
>> Does this patchset alone suffer from the same problem, i.e., memcgs
>> sharing zspages?
>>
>> [1] https://lwn.net/Articles/592923/
>>

