From: Nitin Gupta <ngupta@vflare.org>
To: Dan Magenheimer <dan.magenheimer@oracle.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>,
Hugh Dickins <hugh.dickins@tiscali.co.uk>,
Andrew Morton <akpm@linux-foundation.org>,
Greg KH <greg@kroah.com>, Rik van Riel <riel@redhat.com>,
Avi Kivity <avi@redhat.com>,
Christoph Hellwig <hch@infradead.org>,
Minchan Kim <minchan.kim@gmail.com>,
Konrad Wilk <konrad.wilk@oracle.com>,
linux-mm <linux-mm@kvack.org>,
linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 0/8] zcache: page cache compression support
Date: Wed, 21 Jul 2010 09:57:26 +0530 [thread overview]
Message-ID: <4C46772E.3000500@vflare.org> (raw)
In-Reply-To: <9e4cae1f-c102-43ea-9ba0-611c8ad68c9b@default>
On 07/20/2010 07:58 PM, Dan Magenheimer wrote:
>> On 07/20/2010 01:27 AM, Dan Magenheimer wrote:
>>>> We only keep pages that compress to PAGE_SIZE/2 or less. Compressed
>>>> chunks are stored using the xvmalloc memory allocator, which is
>>>> already used by the zram driver for the same purpose. Zero-filled
>>>> pages are detected and no memory is allocated for them.
>>>
>>> I'm curious about this policy choice. I can see why one
>>> would want to ensure that the average page is compressed
>>> to less than PAGE_SIZE/2, and preferably PAGE_SIZE/2
>>> minus the overhead of the data structures necessary to
>>> track the page. And I see that this makes no difference
>>> when the reclamation algorithm is random (as it is for
>>> now). But once there is some better reclamation logic,
>>> I'd hope that this compression factor restriction would
>>> be lifted and replaced with something much higher. IIRC,
>>> compression is much more expensive than decompression
>>> so there's no CPU-overhead argument here either,
>>> correct?
>>
>> It's true that we waste CPU cycles on every incompressible page
>> encountered, but we still can't keep such pages in RAM: these are
>> the pages the host wanted reclaimed, and since compression failed
>> we have nothing to offer. Compressed caching makes sense only when
>> we keep highly compressible pages in RAM, regardless of the reclaim
>> scheme.
>>
>> Keeping (nearly) incompressible pages in RAM probably makes sense
>> for Xen's case, where the cleancache provider runs *inside* a VM,
>> sending pages to the host. So, if a VM is limited to, say, 512M while
>> the host has 64G of RAM, caching guest pages, with or without
>> compression, will help.
>
> I agree that the use model is a bit different, but PAGE_SIZE/2
> still seems like an unnecessarily strict threshold. For
> example, saving 3000 clean pages in 2000*PAGE_SIZE of RAM
> still seems like a considerable space savings. And as
> long as the _average_ is less than some threshold, saving
> a few slightly-less-than-ideally-compressible pages doesn't
> seem like it would be a problem. For example, IMHO, saving two
> pages when one compresses to 2047 bytes and the other compresses
> to 2049 bytes seems just as reasonable as saving two pages that
> both compress to 2048 bytes.
>
> Maybe the best solution is to make the threshold a sysfs
> settable? Or maybe BOTH the single-page threshold and
> the average threshold as two different sysfs settables?
> E.g. throw away a put page if either it compresses poorly
> or adding it to the pool would push the average over.
>
Considering the overall compression average instead of each individual
page's compressibility is a good point. Still, I think storing completely
incompressible pages is undesirable. So, I agree with the idea of separate
sysfs tunables for the average and single-page compression thresholds,
with defaults conservatively set to 50% and PAGE_SIZE/2 respectively.
I will include these in the v2 patches.
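For clarity, the accept/reject policy we are converging on could be
sketched roughly as below. This is only an illustrative userspace sketch,
not the actual zcache code: the names (should_store, max_page_clen,
max_avg_clen, the pool counters) are hypothetical stand-ins for the
proposed sysfs tunables and per-pool statistics.

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096

/* Hypothetical tunables mirroring the proposed sysfs knobs:
 * a per-page ceiling and a pool-wide average ceiling.
 * Defaults: both PAGE_SIZE/2 (i.e. 50%).
 */
static size_t max_page_clen = PAGE_SIZE / 2;  /* single-page threshold */
static size_t max_avg_clen  = PAGE_SIZE / 2;  /* average threshold */

/* Running totals for the compressed pool. */
static size_t pool_pages;
static size_t pool_bytes;

/*
 * Decide whether a page that compressed to 'clen' bytes should be
 * kept. Reject it if it compresses poorly on its own, or if storing
 * it would push the pool's average compressed size over the limit
 * (checked as a multiply to avoid integer division of the average).
 */
static int should_store(size_t clen)
{
    if (clen > max_page_clen)
        return 0;
    if (pool_bytes + clen > max_avg_clen * (pool_pages + 1))
        return 0;
    pool_bytes += clen;
    pool_pages++;
    return 1;
}
```

With both defaults at PAGE_SIZE/2 the average check is initially as
strict as the per-page check, but raising max_page_clen via sysfs would
let slightly-less-compressible pages in so long as the pool average
stays under max_avg_clen.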
Thanks,
Nitin
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
Thread overview: 45+ messages
2010-07-16 12:37 Nitin Gupta
2010-07-16 12:37 ` [PATCH 1/8] Allow sharing xvmalloc for zram and zcache Nitin Gupta
2010-07-17 18:10 ` Rik van Riel
2010-07-16 12:37 ` [PATCH 2/8] Basic zcache functionality Nitin Gupta
2010-07-18 8:14 ` Pekka Enberg
2010-07-18 9:45 ` Nitin Gupta
2010-07-18 8:27 ` Pekka Enberg
2010-07-18 8:44 ` Eric Dumazet
2010-07-18 9:51 ` Nitin Gupta
2010-07-16 12:37 ` [PATCH 3/8] Create sysfs nodes and export basic statistics Nitin Gupta
2010-07-16 12:37 ` [PATCH 4/8] Shrink zcache based on memlimit Nitin Gupta
2010-07-20 23:03 ` Minchan Kim
2010-07-21 4:52 ` Nitin Gupta
2010-07-21 11:32 ` Ed Tomlinson
2010-07-23 19:23 ` Nitin Gupta
2010-07-16 12:37 ` [PATCH 5/8] Eliminate zero-filled pages Nitin Gupta
2010-07-16 12:37 ` [PATCH 6/8] Compress pages using LZO Nitin Gupta
2010-07-16 12:37 ` [PATCH 7/8] Use xvmalloc to store compressed chunks Nitin Gupta
2010-07-18 7:53 ` Pekka Enberg
2010-07-18 8:21 ` Nitin Gupta
2010-07-19 4:36 ` Minchan Kim
2010-07-19 6:48 ` Nitin Gupta
2010-07-16 12:37 ` [PATCH 8/8] Document sysfs entries Nitin Gupta
2010-07-17 21:13 ` [PATCH 0/8] zcache: page cache compression support Ed Tomlinson
2010-07-18 2:23 ` Nitin Gupta
2010-07-18 7:50 ` Pekka Enberg
2010-07-18 8:12 ` Nitin Gupta
2010-07-19 19:57 ` Dan Magenheimer
2010-07-20 13:50 ` Nitin Gupta
2010-07-20 14:28 ` Dan Magenheimer
2010-07-21 4:27 ` Nitin Gupta [this message]
2010-07-21 17:37 ` Dan Magenheimer
2010-07-22 19:14 ` Greg KH
2010-07-22 19:54 ` Dan Magenheimer
2010-07-22 21:00 ` Greg KH
2011-01-10 13:16 ` Kirill A. Shutemov
2011-01-18 17:53 ` Dan Magenheimer
2011-01-20 12:33 ` Nitin Gupta
2011-01-20 12:47 ` Christoph Hellwig
2011-01-20 13:16 ` Pekka Enberg
2011-01-20 13:58 ` Nitin Gupta
[not found] <575348163.1113381279906498028.JavaMail.root@zmail06.collab.prod.int.phx2.redhat.com>
2010-07-23 17:36 ` caiqian
2010-07-23 17:41 ` CAI Qian
2010-07-23 18:02 ` CAI Qian
2010-07-24 14:41 ` Valdis.Kletnieks