From: Vladimir Davydov
Subject: [PATCH -mm 6/8] memcg: introduce memcg_kmem_should_charge helper
Date: Mon, 3 Nov 2014 23:59:44 +0300
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org

We use the same set of checks in both memcg_kmem_newpage_charge and
memcg_kmem_get_cache, and one of the following patches will need them in
yet another function. So let's factor them out into a helper.

Signed-off-by: Vladimir Davydov
---
 include/linux/memcontrol.h | 43 ++++++++++++++++++++++---------------------
 1 file changed, 22 insertions(+), 21 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 617652712da8..224c045fd37f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -416,6 +416,26 @@ void memcg_update_array_size(int num_groups);
 struct kmem_cache *
 __memcg_kmem_get_cache(struct kmem_cache *cachep, gfp_t gfp);
 
+static __always_inline bool memcg_kmem_should_charge(gfp_t gfp)
+{
+	/*
+	 * __GFP_NOFAIL allocations will move on even if charging is not
+	 * possible. Therefore we don't even try, and have this allocation
+	 * unaccounted. We could in theory charge it forcibly, but we hope
+	 * those allocations are rare, and won't be worth the trouble.
+	 */
+	if (gfp & __GFP_NOFAIL)
+		return false;
+	if (in_interrupt())
+		return false;
+	if (!current->mm || (current->flags & PF_KTHREAD))
+		return false;
+	/* If the test is dying, just let it go. */
+	if (unlikely(fatal_signal_pending(current)))
+		return false;
+	return true;
+}
+
 /**
  * memcg_kmem_newpage_charge: verify if a new kmem allocation is allowed.
  * @gfp: the gfp allocation flags.
@@ -433,22 +453,8 @@ memcg_kmem_newpage_charge(gfp_t gfp, struct mem_cgroup **memcg, int order)
 {
 	if (!memcg_kmem_enabled())
 		return true;
-
-	/*
-	 * __GFP_NOFAIL allocations will move on even if charging is not
-	 * possible. Therefore we don't even try, and have this allocation
-	 * unaccounted. We could in theory charge it forcibly, but we hope
-	 * those allocations are rare, and won't be worth the trouble.
-	 */
-	if (gfp & __GFP_NOFAIL)
-		return true;
-	if (in_interrupt() || (!current->mm) || (current->flags & PF_KTHREAD))
-		return true;
-
-	/* If the test is dying, just let it go. */
-	if (unlikely(fatal_signal_pending(current)))
+	if (!memcg_kmem_should_charge(gfp))
 		return true;
-
 	return __memcg_kmem_newpage_charge(gfp, memcg, order);
 }
 
@@ -491,13 +497,8 @@ memcg_kmem_get_cache(struct kmem_cache *cachep, gfp_t gfp)
 {
 	if (!memcg_kmem_enabled())
 		return cachep;
-	if (gfp & __GFP_NOFAIL)
-		return cachep;
-	if (in_interrupt() || (!current->mm) || (current->flags & PF_KTHREAD))
-		return cachep;
-	if (unlikely(fatal_signal_pending(current)))
+	if (!memcg_kmem_should_charge(gfp))
 		return cachep;
-
 	return __memcg_kmem_get_cache(cachep, gfp);
 }
 #else
-- 
1.7.10.4