Date: Tue, 10 May 2016 10:57:06 +0200
From: Michal Hocko
Subject: Re: [PATCH 6/6] mm/page_owner: use stackdepot to store stacktrace
Message-ID: <20160510085706.GG23576@dhcp22.suse.cz>
References: <1462252984-8524-1-git-send-email-iamjoonsoo.kim@lge.com>
 <1462252984-8524-7-git-send-email-iamjoonsoo.kim@lge.com>
 <20160503085356.GD28039@dhcp22.suse.cz>
 <20160504021449.GA10256@js1304-P5Q-DELUXE>
 <20160504092133.GG29978@dhcp22.suse.cz>
 <20160504194019.GE21490@dhcp22.suse.cz>
To: Joonsoo Kim
Cc: Joonsoo Kim, Andrew Morton, Vlastimil Babka, Mel Gorman, Minchan Kim,
 Alexander Potapenko, Linux Memory Management List, LKML

On Tue 10-05-16 16:07:14, Joonsoo Kim wrote:
> 2016-05-05 4:40 GMT+09:00 Michal Hocko:
> > On Thu 05-05-16 00:30:35, Joonsoo Kim wrote:
> >> 2016-05-04 18:21 GMT+09:00 Michal Hocko:
> > [...]
> >> > Do we really consume 512B of stack during reclaim? That sounds
> >> > more than worrying to me.
> >>
> >> Hmm... I checked it with ./scripts/stackusage and the result is as
> >> below:
> >>
> >>   shrink_zone()          128
> >>   shrink_zone_memcg()    248
> >>   shrink_active_list()   176
> >>
> >> We have a call path shrink_zone() -> shrink_zone_memcg() ->
> >> shrink_active_list().
> >> I'm not sure whether it is the deepest path or not.
> >
> > This is definitely not the deepest path. Slab shrinkers can take
> > more, but 512B is still a lot. Some call paths are already too deep
> > when calling into the allocator, and some of them already use
> > GFP_NOFS to prevent potentially deep callchains into slab shrinkers.
> > Anyway, it is worth exploring better solutions.
> >
> > And I believe it would be better to solve this in the stackdepot
> > directly so other users do not have to invent their own ways around
> > the same issue. I have just checked the code: set_track uses
> > save_stack, which does the same thing and seems to be called from
> > the slab allocator. I had missed this usage before, so the problem
> > already exists. It would be unfair to ask you to fix that in order
> > to add a new user, but it would be great if it got addressed.
>
> Yes, fixing it in stackdepot looks more reasonable.
> Then, I will just change PAGE_OWNER_STACK_DEPTH from 64 to 16 and
> leave the code as is for now. With this change, we will consume only
> 128B of stack and should not cause a stack problem. If anyone has an
> objection, please let me know.

128B is still quite a lot, but considering there is a plan to make this
more robust, I can live with it as a temporary workaround.

--
Michal Hocko
SUSE Labs
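[Editor's note: the stack cost debated above comes from the on-stack
entries[] buffer that page_owner fills before handing the trace to
stackdepot; each entry is an unsigned long, so 64 entries cost 512B and
16 entries cost 128B on a 64-bit kernel. Below is a minimal sketch of
that pattern, assuming the helper and constant names from the patch
under review and the 4.6-era struct stack_trace / depot_save_stack()
APIs; it is an illustration, not the exact code from the series.]

    #include <linux/gfp.h>
    #include <linux/stackdepot.h>
    #include <linux/stacktrace.h>

    /* 16 entries * sizeof(unsigned long) = 128B of stack on 64-bit,
     * versus 512B with the original depth of 64. */
    #define PAGE_OWNER_STACK_DEPTH	(16)

    static noinline depot_stack_handle_t save_stack(gfp_t flags)
    {
    	/* This array is what lands on the (possibly already deep)
    	 * reclaim/allocation stack, hence the concern in the thread. */
    	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
    	struct stack_trace trace = {
    		.nr_entries	= 0,
    		.entries	= entries,
    		.max_entries	= PAGE_OWNER_STACK_DEPTH,
    		.skip		= 0,
    	};

    	save_stack_trace(&trace);

    	/* stackdepot deduplicates the trace and returns a compact
    	 * handle, so only the capture itself needs stack space;
    	 * the handle is what page_owner stores per page. */
    	return depot_save_stack(&trace, flags);
    }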