Subject: Re: [PATCH v2] mm: page_owner: detect page_owner recursion via task_struct
To: Sergei Trofimovich, Andrew Morton, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira
References: <20210402125039.671f1f40@sf> <20210402115342.1463781-1-slyfox@gentoo.org>
From: Vlastimil Babka
Date: Wed, 7 Apr 2021 14:32:03 +0200
In-Reply-To: <20210402115342.1463781-1-slyfox@gentoo.org>

On 4/2/21 1:53 PM, Sergei Trofimovich wrote:
> Before the change, page_owner recursion was detected by fetching
> a backtrace and inspecting it for the current instruction pointer.
> This has a few problems:
> - it is slightly slow, as it requires an extra backtrace and a linear
>   stack scan of the result
> - it is too late to check whether fetching the backtrace itself
>   required a memory allocation (ia64's unwinder requires it)
>
> To simplify recursion tracking, let's use a page_owner recursion flag
> in 'struct task_struct'.
>
> The change makes page_owner=on work on ia64 by avoiding infinite
> recursion in:
>   kmalloc()
>   -> __set_page_owner()
>   -> save_stack()
>   -> unwind() [ia64-specific]
>   -> build_script()
>   -> kmalloc()
>   -> __set_page_owner() [we short-circuit here]
>   -> save_stack()
>   -> unwind() [recursion]
>
> CC: Ingo Molnar
> CC: Peter Zijlstra
> CC: Juri Lelli
> CC: Vincent Guittot
> CC: Dietmar Eggemann
> CC: Steven Rostedt
> CC: Ben Segall
> CC: Mel Gorman
> CC: Daniel Bristot de Oliveira
> CC: Andrew Morton
> CC: linux-mm@kvack.org
> Signed-off-by: Sergei Trofimovich

Much better indeed, thanks.

Acked-by: Vlastimil Babka

> ---
> Changes since v1:
> - use a bit from task_struct instead of a new field
> - track only one recursion depth level so far
>
>  include/linux/sched.h |  4 ++++
>  mm/page_owner.c       | 32 ++++++++++----------------------
>  2 files changed, 14 insertions(+), 22 deletions(-)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index ef00bb22164c..00986450677c 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -841,6 +841,10 @@ struct task_struct {
>  	/* Stalled due to lack of memory */
>  	unsigned			in_memstall:1;
>  #endif
> +#ifdef CONFIG_PAGE_OWNER
> +	/* Used by page_owner=on to detect recursion in page tracking. */
> +	unsigned			in_page_owner:1;
> +#endif
>
>  	unsigned long			atomic_flags; /* Flags requiring atomic access. */
>
> diff --git a/mm/page_owner.c b/mm/page_owner.c
> index 7147fd34a948..64b2e4c6afb7 100644
> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -97,42 +97,30 @@ static inline struct page_owner *get_page_owner(struct page_ext *page_ext)
>  	return (void *)page_ext + page_owner_ops.offset;
>  }
>
> -static inline bool check_recursive_alloc(unsigned long *entries,
> -					 unsigned int nr_entries,
> -					 unsigned long ip)
> -{
> -	unsigned int i;
> -
> -	for (i = 0; i < nr_entries; i++) {
> -		if (entries[i] == ip)
> -			return true;
> -	}
> -	return false;
> -}
> -
>  static noinline depot_stack_handle_t save_stack(gfp_t flags)
>  {
>  	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
>  	depot_stack_handle_t handle;
>  	unsigned int nr_entries;
>
> -	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
> -
>  	/*
> -	 * We need to check recursion here because our request to
> -	 * stackdepot could trigger memory allocation to save new
> -	 * entry. New memory allocation would reach here and call
> -	 * stack_depot_save_entries() again if we don't catch it. There is
> -	 * still not enough memory in stackdepot so it would try to
> -	 * allocate memory again and loop forever.
> +	 * Avoid recursion.
> +	 *
> +	 * Sometimes page metadata allocation tracking requires more
> +	 * memory to be allocated:
> +	 * - when new stack trace is saved to stack depot
> +	 * - when backtrace itself is calculated (ia64)
>  	 */
> -	if (check_recursive_alloc(entries, nr_entries, _RET_IP_))
> +	if (current->in_page_owner)
>  		return dummy_handle;
> +	current->in_page_owner = 1;
>
> +	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
>  	handle = stack_depot_save(entries, nr_entries, flags);
>  	if (!handle)
>  		handle = failure_handle;
>
> +	current->in_page_owner = 0;
>  	return handle;
>  }
>