From: Tim Chen <tim.c.chen@linux.intel.com>
To: Hillf Danton <hillf.zj@alibaba-inc.com>,
	'Andrew Morton' <akpm@linux-foundation.org>
Cc: dave.hansen@intel.com, andi.kleen@intel.com, aaron.lu@intel.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	'Huang Ying' <ying.huang@intel.com>,
	'Hugh Dickins' <hughd@google.com>, 'Shaohua Li' <shli@kernel.org>,
	'Minchan Kim' <minchan@kernel.org>,
	'Rik van Riel' <riel@redhat.com>,
	'Andrea Arcangeli' <aarcange@redhat.com>,
	"'Kirill A . Shutemov'" <kirill.shutemov@linux.intel.com>,
	'Vladimir Davydov' <vdavydov@virtuozzo.com>,
	'Johannes Weiner' <hannes@cmpxchg.org>,
	'Michal Hocko' <mhocko@kernel.org>
Subject: Re: [PATCH 7/8] mm/swap: Add cache for swap slots allocation
Date: Thu, 29 Sep 2016 09:50:36 -0700	[thread overview]
Message-ID: <1475167836.3916.270.camel@linux.intel.com> (raw)
In-Reply-To: <008401d21a1f$aa29a510$fe7cef30$@alibaba-inc.com>

On Thu, 2016-09-29 at 15:04 +0800, Hillf Danton wrote:
> On Wednesday, September 28, 2016 1:19 AM Tim Chen wrote
> [...]
> > 
> > +
> > +static int alloc_swap_slot_cache(int cpu)
> > +{
> > +	struct swap_slots_cache *cache;
> > +
> > +	cache = &per_cpu(swp_slots, cpu);
> > +	mutex_init(&cache->alloc_lock);
> > +	spin_lock_init(&cache->free_lock);
> > +	cache->nr = 0;
> > +	cache->cur = 0;
> > +	cache->n_ret = 0;
> > +	cache->slots = vzalloc(sizeof(swp_entry_t) * SWAP_SLOTS_CACHE_SIZE);
> > +	if (!cache->slots) {
> > +		swap_slot_cache_enabled = false;
> > +		return -ENOMEM;
> > +	}
> > +	cache->slots_ret = vzalloc(sizeof(swp_entry_t) * SWAP_SLOTS_CACHE_SIZE);
> > +	if (!cache->slots_ret) {
> > +		vfree(cache->slots);
> > +		swap_slot_cache_enabled = false;
> > +		return -ENOMEM;
> > +	}
> > +	return 0;
> > +}
> > +
> [...]
> > 
> > +
> > +static void free_slot_cache(int cpu)
> > +{
> > +	struct swap_slots_cache *cache;
> > +
> > +	mutex_lock(&swap_slots_cache_mutex);
> > +	drain_slots_cache_cpu(cpu, SLOTS_CACHE | SLOTS_CACHE_RET);
> > +	cache = &per_cpu(swp_slots, cpu);
> > +	cache->nr = 0;
> > +	cache->cur = 0;
> > +	cache->n_ret = 0;
> > +	vfree(cache->slots);
> Also free cache->slots_ret?

Good point. Should free cache->slots_ret here.
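
For illustration, a minimal sketch of free_slot_cache() releasing both arrays
(the NULL resets and the trailing mutex_unlock() are assumptions, since the
quoted hunk above is truncated; the eventual fix may differ):

static void free_slot_cache(int cpu)
{
	struct swap_slots_cache *cache;

	mutex_lock(&swap_slots_cache_mutex);
	drain_slots_cache_cpu(cpu, SLOTS_CACHE | SLOTS_CACHE_RET);
	cache = &per_cpu(swp_slots, cpu);
	cache->nr = 0;
	cache->cur = 0;
	cache->n_ret = 0;
	vfree(cache->slots);
	cache->slots = NULL;		/* assumed reset; not shown in the quoted hunk */
	vfree(cache->slots_ret);	/* free the return-path array as well */
	cache->slots_ret = NULL;	/* assumed reset; not shown in the quoted hunk */
	mutex_unlock(&swap_slots_cache_mutex);	/* assumed; the quoted hunk ends early */
}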

Tim

> 
> 

Thread overview: 3+ messages
2016-09-27 17:18 Tim Chen
2016-09-29  7:04 ` Hillf Danton
2016-09-29 16:50   ` Tim Chen [this message]
