From: Glauber Costa <glommer@parallels.com>
To: linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org,
containers@lists.linux-foundation.org,
Michal Hocko <mhocko@suse.cz>,
Johannes Weiner <hannes@cmpxchg.org>,
kamezawa.hiroyu@jp.fujitsu.com,
Andrew Morton <akpm@linux-foundation.org>,
Dave Chinner <david@fromorbit.com>,
Greg Thelen <gthelen@google.com>,
hughd@google.com, yinghan@google.com,
Glauber Costa <glommer@parallels.com>,
Dave Chinner <dchinner@redhat.com>, Mel Gorman <mgorman@suse.de>,
Rik van Riel <riel@redhat.com>
Subject: [PATCH v2 27/28] list_lru: reclaim proportionally between memcgs and nodes
Date: Fri, 29 Mar 2013 13:14:09 +0400
Message-ID: <1364548450-28254-28-git-send-email-glommer@parallels.com>
In-Reply-To: <1364548450-28254-1-git-send-email-glommer@parallels.com>
The current list_lru code scans objects until nr_to_walk is reached,
and then stops. This number can differ from the total number of objects
we report in our count function, because the main shrinker driver is
ultimately responsible for deciding how many objects to shrink from
each shrinker. In particular, when nr_to_walk is lower than the number
of objects, and because we always traverse the lists in the same order,
the last node and/or the last memcg can end up consistently less
penalized than the others.
My proposed solution is to scan proportionally: derive each node's
share of the scan target from its fraction of the total object count,
and then scan all nodes and memcgs up to their respective shares.
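
For illustration, here is a minimal user-space sketch of the per-node
split this patch performs. mult_frac is open-coded below, and the object
counts and scan budget are made-up numbers, not values taken from the
patch:

#include <stdio.h>

/* Open-coded equivalent of the kernel's mult_frac(x, numer, denom):
 * computes x * numer / denom without overflowing the intermediate
 * product.
 */
static unsigned long mult_frac(unsigned long x, unsigned long numer,
			       unsigned long denom)
{
	unsigned long quot = x / denom;
	unsigned long rem = x % denom;

	return quot * numer + rem * numer / denom;
}

int main(void)
{
	/* Hypothetical per-node object counts and a scan budget smaller
	 * than the total, as the shrinker driver usually requests.
	 */
	unsigned long node_totals[] = { 600, 300, 100 };
	unsigned long nr_to_walk = 128;
	unsigned long total = 0, n_node;
	int nid;

	for (nid = 0; nid < 3; nid++)
		total += node_totals[nid];

	for (nid = 0; nid < 3; nid++) {
		/* n_node = nr_to_walk * total_node / total */
		n_node = mult_frac(node_totals[nid], nr_to_walk, total);
		/* If the share rounds down to zero, scan what is there
		 * rather than skipping the node entirely.
		 */
		if (!n_node)
			n_node = node_totals[nid];
		printf("node %d: scan %lu of %lu objects\n",
		       nid, n_node, node_totals[nid]);
	}
	return 0;
}

With this budget the three nodes are asked to scan 76, 38 and 12
objects respectively, instead of the first nodes absorbing the whole
budget as a fixed-order walk would have them do. The patch then applies
the same formula a second time inside each node, splitting n_node among
the memcg LRUs hanging off that node.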
Signed-off-by: Glauber Costa <glommer@parallels.com>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
lib/list_lru.c | 96 ++++++++++++++++++++++++++++++++++++++++++++++++++--------
1 file changed, 83 insertions(+), 13 deletions(-)
diff --git a/lib/list_lru.c b/lib/list_lru.c
index a49a9b5..af67725 100644
--- a/lib/list_lru.c
+++ b/lib/list_lru.c
@@ -177,6 +177,43 @@ restart:
return isolated;
}
+static long
+memcg_isolate_lru(
+ struct list_lru *lru,
+ list_lru_walk_cb isolate,
+ void *cb_arg,
+ long nr_to_walk,
+ struct mem_cgroup *memcg,
+ int nid, unsigned long total_node)
+{
+ int memcg_id = memcg_cache_id(memcg);
+ unsigned long nr_to_walk_this;
+ long isolated = 0;
+ int idx;
+ struct list_lru_node *nlru;
+
+ for_each_memcg_lru_index(idx, memcg_id) {
+ nlru = lru_node_of_index(lru, idx, nid);
+ if (!nlru || !nlru->nr_items)
+ continue;
+
+ /*
+ * no memcg: walk every memcg proportionally.
+ * memcg case: scan everything (total_node)
+ */
+ if (!memcg)
+ nr_to_walk_this = mult_frac(nlru->nr_items, nr_to_walk,
+ total_node);
+ else
+ nr_to_walk_this = total_node;
+
+ isolated += list_lru_walk_node(lru, nlru, nid, isolate,
+ cb_arg, &nr_to_walk_this);
+ }
+
+ return isolated;
+}
+
long
list_lru_walk_nodemask_memcg(
struct list_lru *lru,
@@ -189,9 +226,7 @@ list_lru_walk_nodemask_memcg(
long isolated = 0;
int nid;
nodemask_t nodes;
- int memcg_id = memcg_cache_id(memcg);
- int idx;
- struct list_lru_node *nlru;
+ unsigned long n_node, total_node, total = 0;
/*
* Conservative code can call this setting nodes with node_setall.
@@ -199,17 +234,52 @@ list_lru_walk_nodemask_memcg(
*/
nodes_and(nodes, *nodes_to_walk, node_online_map);
+ /*
+ * We will first find out how many objects there are in the LRU, in
+ * total. We could store that in a per-LRU counter as well, the same
+ * way we store it in a per-NLRU. But lru_add and lru_del are way more
+ * frequent operations, so it is better to pay the price here.
+ *
+ * Once we have that number, we will try to scan the nodes
+ * proportionally to the amount of objects they have. The main shrinker
+ * driver in vmscan.c will often ask us to shrink a quantity different
+ * from the total quantity we reported in the count function (usually
+ * less). This means that not scanning proportionally may leave some
+ * nodes (usually the last) unfairly charged.
+ *
+ * The final number we want is
+ *
+ * n_node = nr_to_scan * total_node / total
+ */
+ for_each_node_mask(nid, nodes)
+ total += atomic_long_read(&lru->node_totals[nid]);
+
for_each_node_mask(nid, nodes) {
- for_each_memcg_lru_index(idx, memcg_id) {
- nlru = lru_node_of_index(lru, idx, nid);
- if (!nlru)
- continue;
-
- isolated += list_lru_walk_node(lru, nlru, nid, isolate,
- cb_arg, &nr_to_walk);
- if (nr_to_walk <= 0)
- break;
- }
+ total_node = atomic_long_read(&lru->node_totals[nid]);
+ if (!total_node)
+ continue;
+
+ /*
+ * There are items, but too few for a nonzero share. Because we have no
+ * information about where exactly the pressure originates
+ * from, it is better to try shrinking the few we have than to
+ * skip it. It might very well be that this node is under
+ * pressure and any help would be welcome.
+ */
+ n_node = mult_frac(total_node, nr_to_walk, total);
+ if (!n_node)
+ n_node = total_node;
+
+ /*
+ * We will now scan all memcg-like entities (which includes the
+ * global LRU, of index -1), and also try to maintain
+ * proportionality among them.
+ *
+ * We will try to isolate:
+ * nr_memcg = n_node * nr_memcg_lru / total_node
+ */
+ isolated += memcg_isolate_lru(lru, isolate, cb_arg,
+ n_node, memcg, nid, total_node);
}
return isolated;
}
--
1.8.1.4