From: David Hildenbrand <david@redhat.com>
To: "Yosry Ahmed" <yosryahmed@google.com>,
	贺中坤 <hezhongkun.hzk@bytedance.com>
Cc: Yu Zhao <yuzhao@google.com>,
	minchan@kernel.org, senozhatsky@chromium.org, mhocko@suse.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Andrea Arcangeli <aarcange@redhat.com>,
	Fabian Deutsch <fdeutsch@redhat.com>
Subject: Re: [External] Re: [RFC PATCH 1/3] zram: charge the compressed RAM to the page's memcgroup
Date: Fri, 16 Jun 2023 09:57:19 +0200	[thread overview]
Message-ID: <576b7ba6-4dcd-48c9-3917-4e2a25aaa823@redhat.com> (raw)
In-Reply-To: <CAJD7tka4Uc1DhNzKbrj71vGyVVA12bJivPUQU7P0DOinunLgGg@mail.gmail.com>

On 16.06.23 09:37, Yosry Ahmed wrote:
> On Thu, Jun 15, 2023 at 9:41 PM 贺中坤 <hezhongkun.hzk@bytedance.com> wrote:
>>
>>> Thanks Fabian for tagging me.
>>>
>>> I am not familiar with #1, so I will speak to #2. Zhongkun, there are
>>> a few parts that I do not understand -- hopefully you can help me out
>>> here:
>>>
>>> (1) If I understand correctly, in this patch we set the active memcg,
>>> trying to charge any pages allocated in a zspage to the current memcg,
>>> yet that zspage will contain multiple compressed object slots, not
>>> just the one used by this memcg. Aren't we overcharging the memcg?
>>> Basically the first memcg that happens to allocate the zspage will pay
>>> for all the objects in this zspage, even after it stops using the
>>> zspage completely?
>>
>> It will not overcharge. As you said below, we are not using
>> __GFP_ACCOUNT here; the compressed slots are charged to the memcgs
>> in patch 3.
>>
>>>
>>> (2) Patch 3 seems to be charging the compressed slots to the memcgs,
>>> yet this patch is trying to charge the entire zspage. Aren't we double
>>> charging the zspage? I am guessing this isn't happening because (as
>>> Michal pointed out) we are not using __GFP_ACCOUNT here anyway, so
>>> this patch may be NOP, and the actual charging is coming from patch 3
>>> only.
>>
>> Yes, the actual charging comes from patch 3. This patch just passes
>> the BIO page's memcg to the current task, since the current task is
>> not the actual consumer.
>>
>>>
>>> (3) Zswap recently implemented per-memcg charging of compressed
>>> objects in a much simpler way. If your main interest is #2 (which is
>>> what I understand from the commit log), it seems like zswap might be
>>> providing this already? Why can't you use zswap? Is it the fact that
>>> zswap requires a backing swapfile?
>>
>> Thanks for your reply and review. Yes, zswap requires a backing
>> swapfile. The I/O path is very complex and can sometimes throttle the
>> whole system when resources are short, so we hope to use zram.
> 
> Is the only problem with zswap for you the requirement of a backing swapfile?
> 
> If yes, I am in the early stages of developing a solution to make
> zswap work without a backing swapfile. This was discussed in LSF/MM
> [1]. Would this make zswap usable for your use case?
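
For reference, the pattern being discussed above -- switching the active
memcg to the memcg of the page being compressed, so that a later
__GFP_ACCOUNT allocation is charged to that memcg instead of to the task
submitting the BIO (e.g. kswapd) -- looks roughly like the sketch below.
The helper name and the folio_memcg() lookup are illustrative only, not
the actual RFC patch, and assume CONFIG_MEMCG=y:

#include <linux/memcontrol.h>
#include <linux/mm.h>
#include <linux/sched/mm.h>
#include <linux/slab.h>

/*
 * Illustrative only: charge a zram-side allocation to the memcg that
 * owns the page being written, instead of to whoever submitted the BIO.
 */
static void *zram_alloc_charged(struct page *bio_page, size_t len)
{
        /* memcg owning the page under compression (simplified lookup) */
        struct mem_cgroup *memcg = folio_memcg(page_folio(bio_page));
        struct mem_cgroup *old;
        void *buf;

        /* Patch 1's idea: redirect charging away from the current task. */
        old = set_active_memcg(memcg);

        /* Patch 3's idea: __GFP_ACCOUNT charges this allocation to memcg. */
        buf = kmalloc(len, GFP_KERNEL | __GFP_ACCOUNT);

        set_active_memcg(old);
        return buf;
}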

Out of curiosity, are there any other known pros/cons when using 
zswap-without-swap instead of zram?

I know that zram requires sizing (the size of the virtual block device)
and consumes metadata, whereas zswap doesn't.

-- 
Cheers,

David / dhildenb



Thread overview: 31+ messages
2023-06-15  3:48 Zhongkun He
2023-06-15  4:59 ` Yu Zhao
2023-06-15  8:57   ` Fabian Deutsch
2023-06-15 10:00     ` [External] " 贺中坤
2023-06-15 12:14       ` Fabian Deutsch
2023-06-16  1:39     ` Yosry Ahmed
2023-06-16  4:40       ` [External] " 贺中坤
2023-06-16  7:37         ` Yosry Ahmed
2023-06-16  7:57           ` David Hildenbrand [this message]
2023-06-16  8:04             ` Yosry Ahmed
2023-06-16  8:37               ` David Hildenbrand
2023-06-16  8:39                 ` Yosry Ahmed
2023-06-15  9:32   ` Fabian Deutsch
2023-06-15  9:41   ` [External] " 贺中坤
2023-06-15  9:27 ` David Hildenbrand
2023-06-15 11:15   ` [External] " 贺中坤
2023-06-15 11:19     ` David Hildenbrand
2023-06-15 12:19       ` 贺中坤
2023-06-15 12:56         ` David Hildenbrand
2023-06-15 13:40           ` 贺中坤
2023-06-15 14:46             ` David Hildenbrand
2023-06-16  3:44               ` 贺中坤
2023-06-15  9:35 ` Michal Hocko
2023-06-15 11:58   ` [External] " 贺中坤
2023-06-15 12:16     ` Michal Hocko
2023-06-15 13:09       ` 贺中坤
2023-06-15 13:27         ` Michal Hocko
2023-06-15 14:13           ` 贺中坤
2023-06-15 14:20             ` Michal Hocko
2023-06-16  3:31               ` 贺中坤
2023-06-16  6:40                 ` Michal Hocko
