From: Vladimir Davydov
Subject: [PATCH v11 08/15] vmscan: take at least one pass with shrinkers
Date: Mon, 25 Nov 2013 16:07:41 +0400
Message-ID: <2ddb3940d9a96ba6625809a17e23ccb4e65fe29d.1385377616.git.vdavydov@parallels.com>
To: akpm@linux-foundation.org, mhocko@suse.cz
Cc: glommer@openvz.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    cgroups@vger.kernel.org, devel@openvz.org
MIME-Version: 1.0
Content-Type: text/plain

From: Glauber Costa

In very low free kernel memory situations, we may have fewer objects to
free than our initial batch size. If this is the case, it is better to
shrink those and open up space for the new workload than to keep them
and fail the new allocations.

In particular, we are concerned with the direct reclaim case for memcg.
Although the same technique could be applied to other situations just
as well, we start conservatively and apply it only to that case, which
is the one that matters most.

Signed-off-by: Glauber Costa
CC: Dave Chinner
CC: Carlos Maiolino
CC: "Theodore Ts'o"
CC: Al Viro
---
 mm/vmscan.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 36fc133..bfedcdc 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -311,20 +311,33 @@ shrink_slab_node(struct shrink_control *shrinkctl, struct shrinker *shrinker,
 				nr_pages_scanned, lru_pages,
 				max_pass, delta, total_scan);
 
-	while (total_scan >= batch_size) {
+	do {
 		unsigned long ret;
+		unsigned long nr_to_scan = min(batch_size, total_scan);
+		struct mem_cgroup *memcg = shrinkctl->target_mem_cgroup;
+
+		/*
+		 * Differentiate between "few objects" and "no objects"
+		 * as returned by the count step.
+		 */
+		if (!total_scan)
+			break;
+
+		if ((total_scan < batch_size) &&
+		    !(memcg && memcg_kmem_is_active(memcg)))
+			break;
 
-		shrinkctl->nr_to_scan = batch_size;
+		shrinkctl->nr_to_scan = nr_to_scan;
 		ret = shrinker->scan_objects(shrinker, shrinkctl);
 		if (ret == SHRINK_STOP)
 			break;
 		freed += ret;
 
-		count_vm_events(SLABS_SCANNED, batch_size);
-		total_scan -= batch_size;
+		count_vm_events(SLABS_SCANNED, nr_to_scan);
+		total_scan -= nr_to_scan;
 
 		cond_resched();
-	}
+	} while (total_scan >= batch_size);
 
 	/*
 	 * move the unused scan count back into the shrinker in a
-- 
1.7.10.4
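
For readers without the kernel tree at hand, the following is a minimal
standalone C simulation of the loop change above -- a sketch, not kernel
code. The scan_objects() callback, min_ul() helper, and memcg_active flag
are hypothetical stand-ins for shrinker->scan_objects, the kernel's min(),
and the memcg && memcg_kmem_is_active(memcg) test; the object counts and
batch size are made up for illustration.

/*
 * Userspace sketch of the patched shrink_slab_node() scan loop.
 * Hypothetical stand-ins throughout; assumptions noted above.
 */
#include <stdio.h>

#define SHRINK_STOP (~0UL)

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

/* Pretend scan callback: "frees" everything it is asked to scan. */
static unsigned long scan_objects(unsigned long nr_to_scan)
{
	return nr_to_scan;
}

static unsigned long shrink(unsigned long total_scan,
			    unsigned long batch_size, int memcg_active)
{
	unsigned long freed = 0;

	do {
		unsigned long nr_to_scan = min_ul(batch_size, total_scan);
		unsigned long ret;

		/* The count step found no objects at all: nothing to do. */
		if (!total_scan)
			break;

		/*
		 * Fewer objects than one batch: only press on for memcg
		 * reclaim, where freeing them may be the only way to
		 * satisfy the allocation.
		 */
		if (total_scan < batch_size && !memcg_active)
			break;

		ret = scan_objects(nr_to_scan);
		if (ret == SHRINK_STOP)
			break;
		freed += ret;
		total_scan -= nr_to_scan;
	} while (total_scan >= batch_size);

	return freed;
}

int main(void)
{
	/* 50 objects, batch of 128: the old while-loop freed nothing. */
	printf("global reclaim: freed %lu\n", shrink(50, 128, 0)); /* 0  */
	printf("memcg reclaim:  freed %lu\n", shrink(50, 128, 1)); /* 50 */
	return 0;
}

With total_scan = 50 and batch_size = 128, the pre-patch while-loop body
never runs, so nothing is freed; the do-while takes one sub-batch pass
when the memcg condition holds, which is exactly the behavior the patch
introduces.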