From: zhong jiang <zhongjiang@huawei.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	"mgorman@techsingularity.net" <mgorman@techsingularity.net>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Laura Abbott <labbott@redhat.com>,
	Hugh Dickins <hughd@google.com>, Oleg Nesterov <oleg@redhat.com>,
	Linux Memory Management List <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [Question] A novel case happened when using mempool allocate memory.
Date: Thu, 2 Aug 2018 14:22:03 +0800	[thread overview]
Message-ID: <5B62A30B.9000008@huawei.com> (raw)
In-Reply-To: <20180801153713.GA4039@bombadil.infradead.org>

On 2018/8/1 23:37, Matthew Wilcox wrote:
> On Wed, Aug 01, 2018 at 11:31:15PM +0800, zhong jiang wrote:
>> Hi,  Everyone
>>
>>  I ran across the following novel case, which looks like a memory leak, in the linux-4.1 stable kernel
>>  when allocating memory objects with kmem_cache_alloc. It can only rarely be reproduced.
>>
>> I create a specific mempool with a 24k object size, backed by a slab cache that cannot be merged with
>> other kmem caches. I track allocations and frees with atomic_add/sub. After a while, I watch the
>> specific slab consume most of the total memory. After halting execution, the allocation and free
>> counters are equal, so I am sure the module has released all of its memory. Yet the usage of the
>> specific slab stays very high, although stable, according to /proc/slabinfo.
> Please post the code.
>
> .
>

When the module is loaded, we create the specific mempool. The code flow is as follows.

mem_pool_create() {

	slab_cache = kmem_cache_create(name, item_size, 0, 0, NULL);

	pool = mempool_create(min_pool_size, mempool_alloc_slab, mempool_free_slab, slab_cache);   // min_pool_size is assigned to 1024
	atomic_set(&pool->statistics, 0);
}
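
For completeness, a minimal self-contained sketch of what this creation path could look like. The wrapper
struct my_pool below is hypothetical, introduced only to make the snippet compile; the real module presumably
keeps the cache pointer, the mempool pointer and the counter in its own context structure.

#include <linux/slab.h>
#include <linux/mempool.h>
#include <linux/atomic.h>

struct my_pool {
	struct kmem_cache *slab_cache;	/* dedicated cache for the 24k objects */
	mempool_t *pool;		/* mempool backed by that cache */
	atomic_t statistics;		/* alloc/free balance counter */
};

static int mem_pool_create(struct my_pool *p, const char *name, size_t item_size)
{
	p->slab_cache = kmem_cache_create(name, item_size, 0, 0, NULL);
	if (!p->slab_cache)
		return -ENOMEM;

	/* min_pool_size is 1024 in the setup described above. */
	p->pool = mempool_create(1024, mempool_alloc_slab, mempool_free_slab,
				 p->slab_cache);
	if (!p->pool) {
		kmem_cache_destroy(p->slab_cache);
		return -ENOMEM;
	}

	atomic_set(&p->statistics, 0);
	return 0;
}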

We allocate memory from the specific mempool. The code flow is as follows.

mem_alloc() {
	object_ptr = mempool_alloc(pool, gfp_flags);

	atomic_inc(&pool->statistics);
}
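
A minimal sketch of the same allocation path, assuming the hypothetical my_pool wrapper from the creation
sketch above; the counter is only incremented when mempool_alloc() actually returns an object.

static void *mem_alloc(struct my_pool *p, gfp_t gfp_flags)
{
	void *object_ptr = mempool_alloc(p->pool, gfp_flags);

	if (object_ptr)
		atomic_inc(&p->statistics);
	return object_ptr;
}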

We release memory back to the specific mempool. The code flow is as follows.

mem_free() {
	mempool_free(object_ptr, pool);

	atomic_dec(&pool->statistics);
}
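
The matching free path, again sketched against the hypothetical my_pool wrapper.

static void mem_free(struct my_pool *p, void *object_ptr)
{
	mempool_free(object_ptr, p->pool);
	atomic_dec(&p->statistics);
}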


When we unregister the module, the memory that has been taken up is returned to the system.
The code flow is as follows.

mem_pool_destroy() {
	mempool_destroy(pool);
	kmem_cache_destroy(slab_cache);
}
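
The teardown path, sketched against the hypothetical my_pool wrapper, with a sanity check that matches the
observation below that the counter reads 0 when the module is unloaded.

static void mem_pool_destroy(struct my_pool *p)
{
	/* Every mem_alloc() should have been paired with a mem_free() by now. */
	WARN_ON(atomic_read(&p->statistics) != 0);

	mempool_destroy(p->pool);
	kmem_cache_destroy(p->slab_cache);
}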

From the above information, I assume the specific kmem_cache should not take up excessive memory
once execution is halted and pool->statistics is equal to 0.

I have no idea what is causing this issue.

Thanks
zhong jiang
