From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mail-wm0-f72.google.com (mail-wm0-f72.google.com [74.125.82.72])
	by kanga.kvack.org (Postfix) with ESMTP id 450E16B0033
	for ; Fri, 13 Jan 2017 10:08:37 -0500 (EST)
Received: by mail-wm0-f72.google.com with SMTP id r144so16809068wme.0
	for ; Fri, 13 Jan 2017 07:08:37 -0800 (PST)
Received: from mx2.suse.de (mx2.suse.de. [195.135.220.15])
	by mx.google.com with ESMTPS id 34si11526712wrc.11.2017.01.13.07.08.35
	for (version=TLS1 cipher=AES128-SHA bits=128/128);
	Fri, 13 Jan 2017 07:08:36 -0800 (PST)
Date: Fri, 13 Jan 2017 16:08:34 +0100
From: Michal Hocko
Subject: Re: [PATCH 4/4] lib/show_mem.c: teach show_mem to work with the given nodemask
Message-ID: <20170113150834.GN25212@dhcp22.suse.cz>
References: <20170112131659.23058-1-mhocko@kernel.org>
 <20170112131659.23058-5-mhocko@kernel.org>
 <13903870-92bd-1ea2-aefc-0481c850da19@suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <13903870-92bd-1ea2-aefc-0481c850da19@suse.cz>
Sender: owner-linux-mm@kvack.org
List-ID: 
To: Vlastimil Babka
Cc: linux-mm@kvack.org, Andrew Morton , Johannes Weiner , Mel Gorman , David Rientjes

On Fri 13-01-17 14:08:34, Vlastimil Babka wrote:
> On 01/12/2017 02:16 PM, Michal Hocko wrote:
> > From: Michal Hocko
> >
> > show_mem() allows to filter out node specific data which is irrelevant
> > to the allocation request via SHOW_MEM_FILTER_NODES. The filtering
> > is done in skip_free_areas_node which skips all nodes which are not
> > in the mems_allowed of the current process. This works most of the
> > time as expected because the nodemask shouldn't be outside of the
> > allocating task but there are some exceptions. E.g. memory hotplug might
> > want to request allocations from outside of the allowed nodes (see
> > new_node_page).
>
> Hm AFAICS memory hotplug's new_node_page() is restricted both by cpusets
> (by using GFP_USER), and by the nodemask it constructs.
> That's probably a bug in itself, as it shouldn't matter which task is
> triggering the offline?

Yes, that is true. A task bound to a node which is being offlined would be
funny...

> Which probably means that if show_mem() wants to be really precise, it
> would have to start from nodemask and intersect with cpuset when the
> allocation in question cannot escape it. But if we accept that it's ok
> when we print too many nodes (because we can filter them out when reading
> the output by having also nodemask and mems_allowed printed), and strive
> only to not miss any nodes, then this patch could really fix cases when
> we do miss (although new_node_page() currently isn't such an example).

I guess it should be sufficient to add cpuset_print_current_mems_allowed()
in warn_alloc. This should give us the full picture without doing too much
twiddling. What do you think?
-- 
Michal Hocko
SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ .
Don't email: email@kvack.org