From: Dan Streetman <ddstreet@ieee.org>
To: Vitaly Wool <vitalywool@gmail.com>
Cc: Linux-MM <linux-mm@kvack.org>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	Seth Jennings <sjenning@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>
Subject: Re: [PATCH v4] z3fold: the 3-fold allocator for compressed pages
Date: Fri, 27 May 2016 07:10:20 -0400
Message-ID: <CALZtONBtW0k5v-bJkZ2T=GuACYLvuHn0jzxRxTRSammw4QxJJQ@mail.gmail.com>
In-Reply-To: <CAMJBoFNdpJ0Ra-93rAkH8sjAO1nYkbHUdaZx09juZKRck4o-bg@mail.gmail.com>

On Fri, May 27, 2016 at 6:55 AM, Vitaly Wool <vitalywool@gmail.com> wrote:
> Hello Dan,
>
> On Fri, May 20, 2016 at 2:39 PM, Dan Streetman <ddstreet@ieee.org> wrote:
>
> <snip>
>>> +static int z3fold_compact_page(struct z3fold_header *zhdr)
>>> +{
>>> +       struct page *page = virt_to_page(zhdr);
>>> +       void *beg = zhdr;
>>> +
>>> +
>>> +       if (!test_bit(MIDDLE_CHUNK_MAPPED, &page->private) &&
>>> +           zhdr->middle_chunks != 0 &&
>>> +           zhdr->first_chunks == 0 && zhdr->last_chunks == 0) {
>>> +               memmove(beg + ZHDR_SIZE_ALIGNED,
>>> +                       beg + (zhdr->start_middle << CHUNK_SHIFT),
>>> +                       zhdr->middle_chunks << CHUNK_SHIFT);
>>> +               zhdr->first_chunks = zhdr->middle_chunks;
>>> +               zhdr->middle_chunks = 0;
>>> +               zhdr->start_middle = 0;
>>> +               zhdr->first_num++;
>>> +               return 1;
>>> +       }
>>
>> what about the case of only first and middle, or only middle and last?
>>  you can still optimize space in those cases.
>
> That's right, but I ran some performance tests, too, and with those
> extra optimizations z3fold is on par with zsmalloc, while the existing
> implementation gives 10-30% better numbers. OTOH, the gain in space
> according to my measurements is 5-10%, so I am not that eager to trade
> the performance gain for it. I was thinking of putting those extra
> optimizations under a module parameter/flag; I'll come up with a patch
> soon, but I'm going to have that disabled by default anyway.

sounds good.
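
Just for reference, handling those two extra layouts could look roughly
like the sketch below - not tested, the function name is made up, and it
reuses the field/macro names from the patch (assuming ZHDR_SIZE_ALIGNED
is a multiple of CHUNK_SIZE and that the last buddy sits flush against
the end of the page):

static int z3fold_compact_partial(struct z3fold_header *zhdr)
{
	struct page *page = virt_to_page(zhdr);
	void *beg = zhdr;
	unsigned short new_start;

	if (test_bit(MIDDLE_CHUNK_MAPPED, &page->private) ||
	    zhdr->middle_chunks == 0)
		return 0;

	if (zhdr->first_chunks != 0 && zhdr->last_chunks == 0) {
		/* first + middle: pull the middle chunk down next to first */
		new_start = (ZHDR_SIZE_ALIGNED >> CHUNK_SHIFT) +
			    zhdr->first_chunks;
		if (new_start >= zhdr->start_middle)
			return 0;	/* nothing to gain */
		memmove(beg + (new_start << CHUNK_SHIFT),
			beg + (zhdr->start_middle << CHUNK_SHIFT),
			zhdr->middle_chunks << CHUNK_SHIFT);
		zhdr->start_middle = new_start;
		return 1;
	}

	if (zhdr->first_chunks == 0 && zhdr->last_chunks != 0) {
		/* middle + last: push the middle chunk up against last */
		new_start = (PAGE_SIZE >> CHUNK_SHIFT) -
			    zhdr->last_chunks - zhdr->middle_chunks;
		if (new_start <= zhdr->start_middle)
			return 0;	/* nothing to gain */
		memmove(beg + (new_start << CHUNK_SHIFT),
			beg + (zhdr->start_middle << CHUNK_SHIFT),
			zhdr->middle_chunks << CHUNK_SHIFT);
		zhdr->start_middle = new_start;
		return 1;
	}

	return 0;
}

Either case leaves the largest possible contiguous gap for a new first
or last buddy, which is where your 5-10% space gain would come from.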

>
> <snip>
>>> +static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
>>> +{
>>> +       struct z3fold_header *zhdr;
>>> +       int freechunks;
>>> +       struct page *page;
>>> +       enum buddy bud;
>>> +
>>> +       spin_lock(&pool->lock);
>>> +       zhdr = handle_to_z3fold_header(handle);
>>> +       page = virt_to_page(zhdr);
>>> +
>>> +       if (test_bit(PAGE_HEADLESS, &page->private)) {
>>> +               /* HEADLESS page stored */
>>> +               bud = HEADLESS;
>>> +       } else {
>>> +               bud = (handle - zhdr->first_num) & BUDDY_MASK;
>>
>> this should use handle_to_buddy()
>
> Thanks, will fix that.
>
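
(By handle_to_buddy() I mean the helper that z3fold_unmap() below already
uses; assuming it just wraps the same calculation, i.e. something like

static enum buddy handle_to_buddy(unsigned long handle)
{
	struct z3fold_header *zhdr = handle_to_z3fold_header(handle);

	return (handle - zhdr->first_num) & BUDDY_MASK;
}

then the open-coded "(handle - zhdr->first_num) & BUDDY_MASK" above
simply becomes "bud = handle_to_buddy(handle);".)
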
> <snip>
>>> +static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
>>> +{
>>> +       int i, ret = 0, freechunks;
>>> +       struct z3fold_header *zhdr;
>>> +       struct page *page;
>>> +       unsigned long first_handle = 0, middle_handle = 0, last_handle = 0;
>>> +
>>> +       spin_lock(&pool->lock);
>>> +       if (!pool->ops || !pool->ops->evict || list_empty(&pool->lru) ||
>>> +                       retries == 0) {
>>> +               spin_unlock(&pool->lock);
>>> +               return -EINVAL;
>>> +       }
>>> +       for (i = 0; i < retries; i++) {
>>> +               page = list_last_entry(&pool->lru, struct page, lru);
>>> +               list_del(&page->lru);
>>> +
>>> +               /* Protect z3fold page against free */
>>> +               set_bit(UNDER_RECLAIM, &page->private);
>>> +               zhdr = page_address(page);
>>> +               if (!test_bit(PAGE_HEADLESS, &page->private)) {
>>> +                       list_del(&zhdr->buddy);
>>> +                       /*
>>> +                        * We need encode the handles before unlocking, since
>>> +                        * we can race with free that will set
>>> +                        * (first|last)_chunks to 0
>>> +                        */
>>> +                       first_handle = 0;
>>> +                       last_handle = 0;
>>> +                       middle_handle = 0;
>>> +                       if (zhdr->first_chunks)
>>> +                               first_handle = encode_handle(zhdr, FIRST);
>>> +                       if (zhdr->middle_chunks)
>>> +                               middle_handle = encode_handle(zhdr, MIDDLE);
>>> +                       if (zhdr->last_chunks)
>>> +                               last_handle = encode_handle(zhdr, LAST);
>>> +               } else {
>>> +                       first_handle = encode_handle(zhdr, HEADLESS);
>>> +                       last_handle = middle_handle = 0;
>>> +               }
>>> +
>>> +               spin_unlock(&pool->lock);
>>> +
>>> +               /* Issue the eviction callback(s) */
>>> +               if (middle_handle) {
>>> +                       ret = pool->ops->evict(pool, middle_handle);
>>> +                       if (ret)
>>> +                               goto next;
>>> +               }
>>> +               if (first_handle) {
>>> +                       ret = pool->ops->evict(pool, first_handle);
>>> +                       if (ret)
>>> +                               goto next;
>>> +               }
>>> +               if (last_handle) {
>>> +                       ret = pool->ops->evict(pool, last_handle);
>>> +                       if (ret)
>>> +                               goto next;
>>> +               }
>>> +next:
>>> +               spin_lock(&pool->lock);
>>> +               clear_bit(UNDER_RECLAIM, &page->private);
>>> +               if ((test_bit(PAGE_HEADLESS, &page->private) && ret == 0) ||
>>> +                   (zhdr->first_chunks == 0 && zhdr->last_chunks == 0 &&
>>> +                    zhdr->middle_chunks == 0)) {
>>> +                       /*
>>> +                        * All buddies are now free, free the z3fold page and
>>> +                        * return success.
>>> +                        */
>>> +                       clear_bit(PAGE_HEADLESS, &page->private);
>>> +                       free_z3fold_page(zhdr);
>>> +                       pool->pages_nr--;
>>> +                       spin_unlock(&pool->lock);
>>> +                       return 0;
>>> +               } else if (zhdr->first_chunks != 0 &&
>>> +                          zhdr->last_chunks != 0 && zhdr->middle_chunks != 0) {
>>
>> if this is a HEADLESS page and the reclaim failed, this else-if will
>> be checked which isn't good, since the zhdr data doesn't exist for
>> headless pages.
>
> Thanks, and that too.
>
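
(A minimal way to handle that - just a sketch - is to gate the zhdr
checks on the headless bit, e.g.:

		} else if (!test_bit(PAGE_HEADLESS, &page->private) &&
			   zhdr->first_chunks != 0 &&
			   zhdr->last_chunks != 0 &&
			   zhdr->middle_chunks != 0) {

and likewise for any other branch on that path that dereferences zhdr.)
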
> <snip>
>>> +static void z3fold_unmap(struct z3fold_pool *pool, unsigned long handle)
>>> +{
>>> +       struct z3fold_header *zhdr;
>>> +       struct page *page;
>>> +       enum buddy buddy;
>>> +
>>> +       spin_lock(&pool->lock);
>>> +       zhdr = handle_to_z3fold_header(handle);
>>> +       page = virt_to_page(zhdr);
>>> +
>>> +       if (test_bit(PAGE_HEADLESS, &page->private)) {
>>> +               spin_unlock(&pool->lock);
>>> +               return;
>>> +       }
>>> +
>>> +       buddy = handle_to_buddy(handle);
>>> +       if (buddy == MIDDLE)
>>> +               clear_bit(MIDDLE_CHUNK_MAPPED, &page->private);
>>
>> maybe it should be compacted here, in case a compaction was missed
>> while the middle chunk was mapped?
>
> I was thinking about that but decided not to go for it, at least right
> now, because I want the map() and unmap() functions to be as fast as
> possible. I am considering adding a worker thread that would run
> compaction asynchronously, and this would be one of the places to
> trigger it. Does that sound reasonable to you?

Well, you could register as a shrinker instead of managing your own
worker, but either way sounds ok.  That said, a shrinker isn't an exact
match right now, since compaction doesn't actually free any pages; it
just makes a bit more space within the compacted pages.  Although you
could think about how to actually migrate chunks between pages, which
would let a shrinker free whole pages up.
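
If you do go the worker route, something along these lines might be
enough - just a sketch, assuming the pool keeps zbud-style unbuddied[]
lists; the names compact_work, z3fold_compact_work_fn and
num_free_chunks() are illustrative:

/* a work item embedded in the pool, set up with INIT_WORK() in
 * z3fold_create_pool() */
struct z3fold_pool {
	/* ... existing fields ... */
	struct work_struct compact_work;
};

static void z3fold_compact_work_fn(struct work_struct *w)
{
	struct z3fold_pool *pool =
		container_of(w, struct z3fold_pool, compact_work);
	struct z3fold_header *zhdr, *tmp;
	int i, freechunks;

	spin_lock(&pool->lock);
	for (i = 0; i < NCHUNKS; i++) {
		list_for_each_entry_safe(zhdr, tmp,
					 &pool->unbuddied[i], buddy) {
			if (z3fold_compact_page(zhdr)) {
				/* free space changed, re-bucket the page */
				freechunks = num_free_chunks(zhdr);
				list_del(&zhdr->buddy);
				list_add(&zhdr->buddy,
					 &pool->unbuddied[freechunks]);
			}
		}
	}
	spin_unlock(&pool->lock);
}

Then z3fold_unmap() would only kick the worker instead of compacting
synchronously:

	if (buddy == MIDDLE) {
		clear_bit(MIDDLE_CHUNK_MAPPED, &page->private);
		schedule_work(&pool->compact_work);
	}

(plus a cancel_work_sync(&pool->compact_work) in z3fold_destroy_pool()
so the work can't run against a freed pool).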



>
> ~vitaly

