From: JoonSoo Kim <js1304@gmail.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>,
Glauber Costa <glommer@parallels.com>,
Linux Memory Management List <linux-mm@kvack.org>,
linux-fsdevel@vger.kernel.org,
containers@lists.linux-foundation.org,
Michal Hocko <mhocko@suse.cz>,
Johannes Weiner <hannes@cmpxchg.org>,
kamezawa.hiroyu@jp.fujitsu.com,
Andrew Morton <akpm@linux-foundation.org>,
Greg Thelen <gthelen@google.com>,
hughd@google.com, yinghan@google.com,
Theodore Ts'o <tytso@mit.edu>, Al Viro <viro@zeniv.linux.org.uk>
Subject: Re: [PATCH v2 02/28] vmscan: take at least one pass with shrinkers
Date: Wed, 10 Apr 2013 23:03:39 +0900
Message-ID: <CAAmzW4OMyZ=nVbHK_AiifPK5LVxvhOQUXmsD5NGfo33CBjf=eA@mail.gmail.com>
In-Reply-To: <20130410100752.GA10481@dastard>
2013/4/10 Dave Chinner <david@fromorbit.com>:
> On Wed, Apr 10, 2013 at 04:46:06PM +0800, Wanpeng Li wrote:
>> On Wed, Apr 10, 2013 at 11:51:16AM +0900, Joonsoo Kim wrote:
>> >On Tue, Apr 09, 2013 at 10:30:08PM +1000, Dave Chinner wrote:
>> >> On Tue, Apr 09, 2013 at 11:05:05AM +0900, Joonsoo Kim wrote:
>> >> > I don't think so.
>> >> > Yes, lowmem_shrink() returns the number of (in)active lru pages
>> >> > when nr_to_scan is 0. And in shrink_slab(), we divide it by lru_pages.
>> >> > lru_pages can vary depending on where shrink_slab() is called; anyway,
>> >> > perhaps this logic keeps total_scan below 128.
>> >>
>> >> "perhaps"
>> >>
>> >>
>> >> There is no "perhaps" here - there is *zero* guarantee of the
>> >> behaviour you are claiming the lowmem killer shrinker is dependent
>> >> on with the existing shrinker infrastructure. So, let's say we have:
> .....
>> >> IOWs, this algorithm effectively causes the shrinker to be called
>> >> 127 times out of 128 in this arbitrary scenario. It does not behave
>> >> as you are assuming it to, and as such any code based on those
>> >> assumptions is broken....
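
(For reference, the lowmem killer callback under discussion looks roughly
like this; a heavily abridged paraphrase of
drivers/staging/android/lowmemorykiller.c from that era, not verbatim code:)

    static int lowmem_shrink(struct shrinker *s, struct shrink_control *sc)
    {
            /* "objects remaining" is reported as the global LRU page count */
            int rem = global_page_state(NR_ACTIVE_ANON) +
                      global_page_state(NR_ACTIVE_FILE) +
                      global_page_state(NR_INACTIVE_ANON) +
                      global_page_state(NR_INACTIVE_FILE);

            if (sc->nr_to_scan <= 0)
                    return rem;     /* count-query call: frees nothing */

            /* nr_to_scan > 0: pick the lowest-priority task above the
             * current minfree threshold and SIGKILL it, then report the
             * (still unchanged) LRU page count again */
            return rem;
    }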
>> >
>> >Thanks for the good example. I got your point :)
>> >But my concern is not entirely resolved, because this is not a problem
>> >just for the lowmem killer, and I can think of a counter-example. Other
>> >drivers can suffer from this change.
>> >
>> >I looked at the code for "huge_zero_page_shrinker".
>> >It returns HPAGE_PMD_NR if there is a shrinkable object.
>
> <sigh>
>
> Yet another new shrinker that is just plain broken. It tracks a
> *single object*, and returns a value only when the refcount is 1,
> which will result in freeing the zero page at some random time in
> the future, after some number of other calls to the shrinker where
> the refcount is also 1.
>
> This is *insane*.
>
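
(For reference, here is the shrinker in question; a paraphrase of the
v3.9-era mm/huge_memory.c code from memory, not a verbatim copy:)

    static int shrink_huge_zero_page(struct shrinker *shrink,
                                     struct shrink_control *sc)
    {
            if (!sc->nr_to_scan)
                    /* the zero page can be freed only when the last
                     * reference remains */
                    return atomic_read(&huge_zero_refcount) == 1 ?
                           HPAGE_PMD_NR : 0;

            if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) {
                    unsigned long zero_pfn = xchg(&huge_zero_pfn, 0);
                    BUG_ON(zero_pfn == 0);
                    __free_page(__pfn_to_page(zero_pfn));
            }
            return 0;
    }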
>> >Let me borrow your example for this case.
>> >
>> > nr_pages_scanned = 1,000
>> > lru_pages = 100,000
>> > batch_size = SHRINK_BATCH = 128
>> > max_pass = 512 (HPAGE_PMD_NR)
>> >
>> > total_scan = shrinker->nr_in_batch = 0
>> > delta = 4 * 1,000 / 2 = 2,000
>> > delta = 2,000 * 512 = 1,024,000
>> > delta = 1,024,000 / 100,001 = 10
>> > total_scan += delta = 10
>> >
>> >As you can see, before this patch, do_shrinker_shrink() for
>> >"huge_zero_page_shrinker" is not called until we have called
>> >shrink_slab() about 13 times. The *frequency* with which we actually
>> >call do_shrinker_shrink() is very different from before.
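
To make the arithmetic concrete, the proportional logic in the v3.9-era
shrink_slab() is roughly the following (simplified: do_div() and the
write-back of the leftover count into shrinker->nr_in_batch are elided):

    unsigned long max_pass = do_shrinker_shrink(shrinker, shrink, 0);
    long total_scan = shrinker->nr_in_batch;        /* carried over: 0 */
    long delta;

    delta = 4 * nr_pages_scanned / shrinker->seeks; /* 4*1000/2 = 2000    */
    delta *= max_pass;                              /* 2000*512 = 1024000 */
    delta /= lru_pages + 1;                         /* /100001  = 10      */
    total_scan += delta;                            /* 10 < batch_size 128 */

    while (total_scan >= batch_size) {
            /* not reached until ~13 calls have accumulated enough
             * total_scan via shrinker->nr_in_batch */
            do_shrinker_shrink(shrinker, shrink, batch_size);
            total_scan -= batch_size;
    }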
>
> If the frequency of the shrinker calls breaks the shrinker
> functionality or the subsystem because it pays no attention to
> nr_to_scan, then the shrinker is fundamentally broken. The shrinker
> has *no control* over the frequency of the calls to it or the batch
> size, and so being dependent on "small numbers means few calls" for
> correct behaviour is dangerously unpredictable and completely
> non-deterministic.
>
> Besides, if you don't want to be shrunk, return a count of -1.
> Shock, horror, it is even documented in the API!
>
> * 'sc' is passed shrink_control which includes a count 'nr_to_scan'
> * and a 'gfpmask'. It should look through the least-recently-used
> * 'nr_to_scan' entries and attempt to free them up. It should return
> * the number of objects which remain in the cache. If it returns -1, it means
> * it cannot do any scanning at this time (eg. there is a risk of deadlock).
>
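A minimal hypothetical shrinker honouring that contract might look like
this (the my_cache_* names are invented for illustration):

    static DEFINE_MUTEX(my_cache_lock);     /* invented for illustration */
    static int my_cache_count;              /* objects currently cached */

    static int my_cache_shrink(struct shrinker *s, struct shrink_control *sc)
    {
            if (sc->nr_to_scan) {
                    if (!mutex_trylock(&my_cache_lock))
                            return -1;      /* cannot scan now: deadlock risk */
                    my_cache_free_lru(sc->nr_to_scan);
                    mutex_unlock(&my_cache_lock);
            }
            return my_cache_count;          /* objects remaining in the cache */
    }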
>> >With this patch, we call do_shrinker_shrink() for
>> >"huge_zero_page_shrinker" on every shrink_slab() call, i.e. 12 more
>> >times out of every 13 than before. Can we be sure that there will be
>> >no problem?
>> >
>> >This is why I worry about this change.
>> >Am I worrying too much? :)
>
> You're worrying about the wrong thing. You're assuming that
> shrinkers are implemented correctly and sanely, but the reality is
> that most shrinkers are fundamentally broken in some way or another.
>
> These are just two examples of many. We are trying to fix the API
> and shrinker infrastructure to remove the current insanity. We want
> to make the shrinkers more flexible so that stuff like one-shot low
> memory event notifications can be implemented without grotesque
> hacks like the shrinkers you've used as examples so far...
Yes, that would be great.
I already know that many shrinkers are implemented incorrectly; the
above examples speak for themselves.
Another issue I found is that they don't account for "nr_reclaimed"
precisely. There is no code that checks whether "current->reclaim_state"
exists, except prune_icache_sb(). So if a shrinker reclaims pages
directly, it will not account for how many pages were freed, and
shrink_zone() and shrink_slab() will be called excessively.
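
Where that accounting does exist, it is just this (paraphrasing fs/inode.c,
where 'reap' counts the pagecache pages freed along with the inodes):

    if (current->reclaim_state)
            current->reclaim_state->reclaimed_slab += reap;

The caller then credits reclaimed_slab against its nr_reclaimed target, so
shrinkers that skip this leave nr_reclaimed understated.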
Maybe there is no properly implemented shrinker except the fs ones :)
But this is the reality we live in, so I have worried about it.
Now I'm okay, so please forget my concern.
Thanks.
> -Dave.
> --
> Dave Chinner
> david@fromorbit.com
>