From: Glauber Costa <glommer@parallels.com>
To: linux-mm@kvack.org
Cc: cgroups@vger.kernel.org, Dave Chinner <david@fromorbit.com>,
Serge Hallyn <serge.hallyn@canonical.com>,
kamezawa.hiroyu@jp.fujitsu.com, Michal Hocko <mhocko@suse.cz>,
Johannes Weiner <hannes@cmpxchg.org>,
Andrew Morton <akpm@linux-foundation.org>,
hughd@google.com, linux-fsdevel@vger.kernel.org,
containers@lists.linux-foundation.org,
Greg Thelen <gthelen@google.com>,
Glauber Costa <glommer@parallels.com>,
Dave Chinner <dchinner@redhat.com>, Mel Gorman <mgorman@suse.de>,
Rik van Riel <riel@redhat.com>
Subject: [PATCH v3 27/32] list_lru: reclaim proportionally between memcgs and nodes
Date: Mon, 8 Apr 2013 18:00:54 +0400
Message-ID: <1365429659-22108-28-git-send-email-glommer@parallels.com>
In-Reply-To: <1365429659-22108-1-git-send-email-glommer@parallels.com>

The current list_lru code scans objects until nr_to_walk is reached,
and then stops. This number can differ from the total number of
objects reported by our count function, because the main shrinker
driver is ultimately responsible for deciding how many objects to
shrink from each shrinker. In particular, when this number is lower
than the number of objects, and because we always traverse the lists
in the same order, the last node and/or the last memcg ends up
consistently less penalized than the others.

My proposed solution is to introduce a proportionality metric based
on the total number of objects per node, and then to scan all nodes
and memcgs only up to their share of the requested scan count.

Signed-off-by: Glauber Costa <glommer@parallels.com>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
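To illustrate the proportional split (a standalone userspace sketch,
not part of the patch: the helper mirrors the kernel's mult_frac()
arithmetic, and the object counts are made up): with nr_to_walk = 100
and two nodes holding 300 and 100 objects (total = 400), the nodes
are asked to scan 75 and 25 objects respectively.

/*
 * Userspace demo of the proportional split. mult_frac() here
 * open-codes the kernel macro: x * n / d computed as
 * (x / d) * n + ((x % d) * n) / d to avoid intermediate overflow.
 */
#include <stdio.h>

static unsigned long mult_frac(unsigned long x, unsigned long n,
			       unsigned long d)
{
	return (x / d) * n + ((x % d) * n) / d;
}

int main(void)
{
	unsigned long node_totals[] = { 300, 100 };	/* made-up counts */
	unsigned long total = 400, nr_to_walk = 100;
	int nid;

	for (nid = 0; nid < 2; nid++)
		printf("node %d: scan %lu\n", nid,
		       mult_frac(node_totals[nid], nr_to_walk, total));
	/* prints "node 0: scan 75" and "node 1: scan 25" */
	return 0;
}
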
lib/list_lru.c | 96 ++++++++++++++++++++++++++++++++++++++++++++++++++--------
1 file changed, 83 insertions(+), 13 deletions(-)
diff --git a/lib/list_lru.c b/lib/list_lru.c
index 3093f98..8434a18 100644
--- a/lib/list_lru.c
+++ b/lib/list_lru.c
@@ -234,6 +234,43 @@ restart:
return isolated;
}
+static long
+memcg_isolate_lru(
+ struct list_lru *lru,
+ list_lru_walk_cb isolate,
+ void *cb_arg,
+ long nr_to_walk,
+ struct mem_cgroup *memcg,
+ int nid, unsigned long total_node)
+{
+ int memcg_id = memcg_cache_id(memcg);
+ unsigned long nr_to_walk_this;
+ long isolated = 0;
+ int idx;
+ struct list_lru_node *nlru;
+
+ for_each_memcg_lru_index(idx, memcg_id) {
+ nlru = lru_node_of_index(lru, idx, nid);
+ if (!nlru || !nlru->nr_items)
+ continue;
+
+ /*
+ * No memcg specified: walk every memcg, giving each a share
+ * proportional to its item count on this node.
+ * Specific memcg: scan everything it has here (total_node).
+ */
+ if (!memcg)
+ nr_to_walk_this = mult_frac(nlru->nr_items, nr_to_walk,
+ total_node);
+ else
+ nr_to_walk_this = total_node;
+
+ isolated += list_lru_walk_node(lru, nlru, nid, isolate,
+ cb_arg, &nr_to_walk_this);
+ }
+
+ return isolated;
+}
+
long
list_lru_walk_nodemask_memcg(
struct list_lru *lru,
@@ -246,9 +283,7 @@ list_lru_walk_nodemask_memcg(
long isolated = 0;
int nid;
nodemask_t nodes;
- int memcg_id = memcg_cache_id(memcg);
- int idx;
- struct list_lru_node *nlru;
+ unsigned long n_node, total_node, total = 0;
/*
* Conservative code can call this setting nodes with node_setall.
@@ -256,17 +291,52 @@ list_lru_walk_nodemask_memcg(
*/
nodes_and(nodes, *nodes_to_walk, node_online_map);
+ /*
+ * We will first find out how many objects there are in the LRU, in
+ * total. We could store that in a per-LRU counter as well, the same
+ * way we store it in a per-NLRU. But lru_add and lru_del are way more
+ * frequent operations, so it is better to pay the price here.
+ *
+ * Once we have that number, we will try to scan the nodes
+ * proportionally to the amount of objects they have. The main shrinker
+ * driver in vmscan.c will often ask us to shrink a quantity different
+ * from the total we reported in the count function (usually less).
+ * This means that not scanning proportionally may leave some nodes
+ * (usually the last ones) unfairly charged.
+ *
+ * The final number we want is
+ *
+ * n_node = nr_to_scan * total_node / total
+ */
+ for_each_node_mask(nid, nodes)
+ total += atomic_long_read(&lru->node_totals[nid]);
+
for_each_node_mask(nid, nodes) {
- for_each_memcg_lru_index(idx, memcg_id) {
- nlru = lru_node_of_index(lru, idx, nid);
- if (!nlru)
- continue;
-
- isolated += list_lru_walk_node(lru, nlru, nid, isolate,
- cb_arg, &nr_to_walk);
- if (nr_to_walk <= 0)
- break;
- }
+ total_node = atomic_long_read(&lru->node_totals[nid]);
+ if (!total_node)
+ continue;
+
+ /*
+ * There are items, but in less proportion. Because we have no
+ * information about where exactly the pressure originates
+ * from, it is better to try shrinking the few we have than to
+ * skip it. It might very well be that this node is under
+ * pressure and any help would be welcome.
+ */
+ n_node = mult_frac(total_node, nr_to_walk, total);
+ if (!n_node)
+ n_node = total_node;
+
+ /*
+ * We will now scan all memcg-like entities (which include the
+ * global LRU, of index -1) and try to maintain
+ * proportionality among them.
+ *
+ * We will try to isolate:
+ * nr_memcg = n_node * nr_memcg_lru / total_node
+ */
+ isolated += memcg_isolate_lru(lru, isolate, cb_arg,
+ n_node, memcg, nid, total_node);
}
return isolated;
}
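
A note on the rounding corner case handled above (hypothetical
numbers, same userspace mult_frac() sketch as before, not part of the
patch): a node holding only a few objects can see its proportional
share round down to zero, in which case we fall back to scanning
everything it has rather than skipping it.

#include <stdio.h>

static unsigned long mult_frac(unsigned long x, unsigned long n,
			       unsigned long d)
{
	return (x / d) * n + ((x % d) * n) / d;
}

int main(void)
{
	/* Hypothetical: a 3-object node under a 10000-object total. */
	unsigned long total = 10000, total_node = 3, nr_to_walk = 100;
	unsigned long n_node = mult_frac(total_node, nr_to_walk, total);

	if (!n_node)			/* 3 * 100 / 10000 rounds down to 0 */
		n_node = total_node;	/* scan all 3 items rather than none */

	printf("n_node = %lu\n", n_node);	/* prints "n_node = 3" */
	return 0;
}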
--
1.8.1.4