From: Glauber Costa <glommer@parallels.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Glauber Costa <glommer@openvz.org>,
linux-mm@kvack.org, cgroups@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
Greg Thelen <gthelen@google.com>,
kamezawa.hiroyu@jp.fujitsu.com, Michal Hocko <mhocko@suse.cz>,
Johannes Weiner <hannes@cmpxchg.org>,
linux-fsdevel@vger.kernel.org, Dave Chinner <dchinner@redhat.com>
Subject: Re: [PATCH v6 12/31] fs: convert inode and dentry shrinking to be node aware
Date: Thu, 16 May 2013 23:14:50 +0400 [thread overview]
Message-ID: <5195302A.2090406@parallels.com> (raw)
In-Reply-To: <20130516000216.GC24635@dastard>
[-- Attachment #1: Type: text/plain, Size: 1829 bytes --]
> IOWs, shr->nr_in_batch can grow much larger than any single node LRU
> list, and the deferred count is only limited to (2 * max_pass).
> Hence if the same node is the one that keeps stealing the global
> shr->nr_in_batch calculation, it will always be a number related to
> the size of the cache on that node. All the other nodes will simply
> keep adding their delta counts to it.
>
> Hence if you've got a node with less cache in it than others, and
> kswapd comes along, it will see a gigantic amount of deferred work
> in nr_in_batch, and then we end up removing a large amount of the
> cache on that node, even though it hasn't had a significant amount
> of pressure. And the node that has pressure continues to wind up
> nr_in_batch until it's the one that gets hit by a kswapd run with
> that wound up nr_in_batch....
>
> Cheers,
>
> Dave.
>
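To make the wind-up concrete, here is a userspace toy model of the
current global-counter behavior (all numbers invented; only the
2 * max_pass clamp from shrink_slab() is modelled, and defer_pass() /
scan_pass() are made-up helpers, not kernel functions):

#include <stdio.h>

static long nr_in_batch;	/* one global counter, as in the current code */

/* a pass that cannot do the scanning (e.g. GFP_NOFS) and defers it all */
static void defer_pass(long delta)
{
	nr_in_batch += delta;
}

/* a kswapd pass against a node whose LRU holds max_pass objects */
static long scan_pass(long delta, long max_pass)
{
	long total_scan = nr_in_batch + delta;	/* the xchg drains it */

	nr_in_batch = 0;
	if (total_scan > 2 * max_pass)		/* the only limit applied */
		total_scan = 2 * max_pass;
	return total_scan;
}

int main(void)
{
	int i;

	/* a big node under steady pressure winds the counter up */
	for (i = 0; i < 8; i++)
		defer_pass(1L << 20);

	/* a small node (64k objects) then inherits the whole count */
	printf("small node told to scan %ld of %ld objects\n",
	       scan_pass(1L << 10, 1L << 16), 1L << 16);
	return 0;
}

This prints "small node told to scan 131072 of 65536 objects": the small
node is asked to scan its entire cache twice over, even though the
pressure that wound the counter up came from the big node.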
Ok Dave,
My system in general seems to behave quite differently from this. In
particular, I hardly see peaks, and the caches fill up very slowly. They
are later pruned, but always down to the same level, and then they grow
slowly again, in a triangular fashion, always within a fairly reasonable
range. This might be because my disks are slower than yours.
It may also be some glitch in my setup. I spent a fair amount of time
today trying to reproduce the behavior you describe, but I can't. I will
try more tomorrow.
For the time being, what do you think about the following patch (which
obviously needs a lot more work; it is just a PoC)?
If we are indeed deferring work to unrelated nodes, keeping the deferred
work per-node should help. I don't want to make it a static array,
because the shrinker structure tends to be embedded in other structures.
In particular, the superblock already has two list_lrus with per-node
static arrays; another one would make the sb gigantic. But that is not
the main thing.
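To put rough numbers on the size concern (illustrative values; both
struct names below are hypothetical, for comparison only):

/* embedded static array: every structure containing a shrinker,
 * e.g. struct super_block, pays nr_in_batch[MAX_NUMNODES] up front;
 * with MAX_NUMNODES = 1024 that is 8KB per shrinker */
struct shrinker_static_counts {
	atomic_long_t nr_in_batch[MAX_NUMNODES];
};

/* pointer plus kmalloc at register_shrinker() time, as in the PoC:
 * 8 bytes embedded, nr_node_ids * 8 bytes allocated at runtime */
struct shrinker_dynamic_counts {
	atomic_long_t *nr_in_batch;
};

Since the sb already carries two per-node static arrays via its
list_lrus, a third embedded array would compound the problem; hence the
pointer.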
[-- Attachment #2: patch.patch --]
[-- Type: text/x-patch, Size: 3334 bytes --]
diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 98be3ab..3edcd7f 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -53,7 +53,7 @@ struct shrinker {
/* These are for internal use */
struct list_head list;
- atomic_long_t nr_in_batch; /* objs pending delete */
+ atomic_long_t *nr_in_batch; /* objs pending delete, per node */
};
#define DEFAULT_SEEKS 2 /* A good number if you don't know better. */
extern void register_shrinker(struct shrinker *);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 35a6a9b..6dddc8d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -159,7 +159,14 @@ static unsigned long get_lru_size(struct lruvec *lruvec, enum lru_list lru)
*/
void register_shrinker(struct shrinker *shrinker)
{
- atomic_long_set(&shrinker->nr_in_batch, 0);
+ int i = 0;
+
+ shrinker->nr_in_batch = kmalloc(sizeof(atomic_long_t) * nr_node_ids, GFP_KERNEL);
+ BUG_ON(!shrinker->nr_in_batch); /* obviously bogus */
+
+ for (i = 0; i < nr_node_ids; i++)
+ atomic_long_set(&shrinker->nr_in_batch[i], 0);
+
down_write(&shrinker_rwsem);
list_add_tail(&shrinker->list, &shrinker_list);
up_write(&shrinker_rwsem);
@@ -211,6 +218,7 @@ unsigned long shrink_slab(struct shrink_control *shrinkctl,
{
struct shrinker *shrinker;
unsigned long freed = 0;
+ unsigned long nr_active_nodes = 0;
if (nr_pages_scanned == 0)
nr_pages_scanned = SWAP_CLUSTER_MAX;
@@ -229,6 +237,7 @@ unsigned long shrink_slab(struct shrink_control *shrinkctl,
long new_nr;
long batch_size = shrinker->batch ? shrinker->batch
: SHRINK_BATCH;
+ int nid;
if (shrinker->scan_objects) {
max_pass = shrinker->count_objects(shrinker, shrinkctl);
@@ -238,12 +247,18 @@ unsigned long shrink_slab(struct shrink_control *shrinkctl,
if (max_pass <= 0)
continue;
- /*
- * copy the current shrinker scan count into a local variable
- * and zero it so that other concurrent shrinker invocations
- * don't also do this scanning work.
- */
- nr = atomic_long_xchg(&shrinker->nr_in_batch, 0);
+ nr = 0;
+ nr_active_nodes = 0; /* recount for each shrinker */
+ for_each_node_mask(nid, shrinkctl->nodes_to_scan) {
+ /*
+ * copy the current shrinker scan count into a local
+ * variable and zero it so that other concurrent
+ * shrinker invocations don't also do this scanning
+ * work.
+ */
+ nr += atomic_long_xchg(&shrinker->nr_in_batch[nid], 0);
+ nr_active_nodes++;
+ }
total_scan = nr;
delta = (4 * nr_pages_scanned) / shrinker->seeks;
@@ -311,17 +325,17 @@ unsigned long shrink_slab(struct shrink_control *shrinkctl,
cond_resched();
}
- /*
- * move the unused scan count back into the shrinker in a
- * manner that handles concurrent updates. If we exhausted the
- * scan, there is no need to do an update.
- */
- if (total_scan > 0)
- new_nr = atomic_long_add_return(total_scan,
- &shrinker->nr_in_batch);
- else
- new_nr = atomic_long_read(&shrinker->nr_in_batch);
+ new_nr = 0;
+ /* split the leftover evenly; guard against an empty nodemask */
+ total_scan /= nr_active_nodes ?: 1;
+ for_each_node_mask(nid, shrinkctl->nodes_to_scan) {
+ if (total_scan > 0)
+ new_nr += atomic_long_add_return(total_scan,
+ &shrinker->nr_in_batch[nid]);
+ else
+ new_nr += atomic_long_read(&shrinker->nr_in_batch[nid]);
+ }
trace_mm_shrink_slab_end(shrinker, freed, nr, new_nr);
}
up_read(&shrinker_rwsem);
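One thing the PoC does not show: register_shrinker() now allocates, so
unregister_shrinker() would need to free the array. Presumably something
along these lines (a sketch, not part of the attached patch):

void unregister_shrinker(struct shrinker *shrinker)
{
	down_write(&shrinker_rwsem);
	list_del(&shrinker->list);
	up_write(&shrinker_rwsem);
	kfree(shrinker->nr_in_batch);	/* the per-node counters */
}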