From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <owner-linux-mm@kvack.org>
Received: from psmtp.com (na3sys010amx194.postini.com [74.125.245.194])
	by kanga.kvack.org (Postfix) with SMTP id 8F1786B0006
	for <linux-mm@kvack.org>; Fri, 29 Mar 2013 05:14:03 -0400 (EDT)
From: Glauber Costa <glommer@parallels.com>
Subject: [PATCH v2 02/28] vmscan: take at least one pass with shrinkers
Date: Fri, 29 Mar 2013 13:13:44 +0400
Message-Id: <1364548450-28254-3-git-send-email-glommer@parallels.com>
In-Reply-To: <1364548450-28254-1-git-send-email-glommer@parallels.com>
References: <1364548450-28254-1-git-send-email-glommer@parallels.com>
Sender: owner-linux-mm@kvack.org
List-ID: <linux-mm.kvack.org>
To: linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org, containers@lists.linux-foundation.org,
	Michal Hocko, Johannes Weiner, kamezawa.hiroyu@jp.fujitsu.com,
	Andrew Morton, Dave Shrinnker, Greg Thelen, hughd@google.com,
	yinghan@google.com, Glauber Costa, Theodore Ts'o, Al Viro

In very low free kernel memory situations, it may be the case that we
have fewer objects to free than our initial batch size.  If this is the
case, it is better to shrink those and open up space for the new
workload than to keep them and fail the new allocations.

More specifically, this happens because we encode this in a loop with
the condition: "while (total_scan >= batch_size)".  So if we are in
such a case, we'll not even enter the loop.

This patch turns it into a do {} while () loop, which guarantees that
we scan at least once, while keeping the behaviour exactly the same for
the cases in which total_scan >= batch_size.

Signed-off-by: Glauber Costa <glommer@parallels.com>
Reviewed-by: Dave Chinner
Reviewed-by: Carlos Maiolino
CC: "Theodore Ts'o"
CC: Al Viro
---
 mm/vmscan.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 88c5fed..fc6d45a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -280,7 +280,7 @@ unsigned long shrink_slab(struct shrink_control *shrink,
 					nr_pages_scanned, lru_pages,
 					max_pass, delta, total_scan);
 
-		while (total_scan >= batch_size) {
+		do {
 			int nr_before;
 
 			nr_before = do_shrinker_shrink(shrinker, shrink, 0);
@@ -294,7 +294,7 @@ unsigned long shrink_slab(struct shrink_control *shrink,
 			total_scan -= batch_size;
 
 			cond_resched();
-		}
+		} while (total_scan >= batch_size);
 
 		/*
 		 * move the unused scan count back into the shrinker in a
-- 
1.8.1.4

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
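
[Editor's note: a minimal userspace sketch of the behaviour change the
patch describes, contrasting the old while () loop with the new
do {} while () form when total_scan starts below batch_size. This is
not kernel code; scan_batch() is a hypothetical stand-in for
do_shrinker_shrink(), and the variable names merely mirror those in
shrink_slab().]

#include <stdio.h>

/* Pretend to scan and free one batch of objects. */
static int scan_batch(void)
{
	return 1;
}

int main(void)
{
	const long batch_size = 128;
	const long total_scan = 50;	/* fewer objects than one batch */
	long n;
	int passes;

	/* Old behaviour: the body never runs when total_scan < batch_size,
	 * so nothing is freed even though freeable objects exist. */
	passes = 0;
	n = total_scan;
	while (n >= batch_size) {
		passes += scan_batch();
		n -= batch_size;
	}
	printf("while loop:    %d passes\n", passes);	/* prints 0 */

	/* New behaviour: do {} while () guarantees at least one pass;
	 * for total_scan >= batch_size both forms behave identically. */
	passes = 0;
	n = total_scan;
	do {
		passes += scan_batch();
		n -= batch_size;
	} while (n >= batch_size);
	printf("do/while loop: %d passes\n", passes);	/* prints 1 */

	return 0;
}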