Date: Mon, 20 Jan 2020 11:22:00 +0100
From: Michal Hocko
To: Wei Yang
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, rientjes@google.com
Subject: Re: [Patch v2 3/4] mm/page_alloc.c: pass all bad reasons to bad_page()
Message-ID: <20200120102200.GW18451@dhcp22.suse.cz>
References: <20200120030415.15925-1-richardw.yang@linux.intel.com> <20200120030415.15925-4-richardw.yang@linux.intel.com>
In-Reply-To: <20200120030415.15925-4-richardw.yang@linux.intel.com>

On Mon 20-01-20 11:04:14, Wei Yang wrote:
> Now we can pass all bad reasons to __dump_page().

And why do we want to do that? The dump of the page will tell us the
whole story, so a single, most important reason sounds like a better
implementation. The code is also more subtle because each caller of the
function has to be aware of how many reasons there might be. Not to
mention that you need room for 5 pointers on the stack, and this is the
page allocator, which might be called from deep call chains.
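[ Editorial illustration, not part of the original reply or patch: a minimal
  userspace sketch of the two calling conventions being compared above -- a
  single "most important" reason versus a caller-sized array of reasons. The
  names fake_page, report_bad_page_single(), report_bad_page_multi() and
  check_page() are hypothetical and only mirror the shape of the kernel code. ]

#include <stdio.h>

struct fake_page {
        int mapcount;           /* -1 means "free", like page->_mapcount */
        void *mapping;
        int refcount;
};

/* Variant A (current mainline style): report a single reason; a full dump
 * such as __dump_page() would expose any remaining anomalies anyway. */
static void report_bad_page_single(const char *reason)
{
        printf("BUG: bad page state: %s\n", reason);
}

/* Variant B (the patch's style): the caller hands over however many reasons
 * it managed to collect. */
static void report_bad_page_multi(int nr, const char **reasons)
{
        for (int i = 0; i < nr; i++)
                printf("BUG: bad page state: %s\n", reasons[i]);
}

static void check_page(const struct fake_page *page)
{
        /* Variant A: later checks overwrite earlier ones, so exactly one
         * reason is reported. */
        const char *reason = NULL;

        if (page->mapcount != -1)
                reason = "nonzero mapcount";
        if (page->mapping != NULL)
                reason = "non-NULL mapping";
        if (page->refcount != 0)
                reason = "nonzero _refcount";
        if (reason)
                report_bad_page_single(reason);

        /* Variant B: every call site must know the maximum number of checks
         * (5 in the patch) and reserve that many pointers on its stack. */
        {
                const char *reasons[5];
                int nr = 0;

                if (page->mapcount != -1)
                        reasons[nr++] = "nonzero mapcount";
                if (page->mapping != NULL)
                        reasons[nr++] = "non-NULL mapping";
                if (page->refcount != 0)
                        reasons[nr++] = "nonzero _refcount";
                if (nr)
                        report_bad_page_multi(nr, reasons);
        }
}

int main(void)
{
        struct fake_page bad = { .mapcount = 0, .mapping = (void *)&bad, .refcount = 1 };

        check_page(&bad);
        return 0;
}

[ The array variant ties every call site to the maximum number of checks and
  reserves that many pointers on the stack, which is the cost pointed out in
  the review; the single-reason variant relies on the full page dump to show
  anything else that is wrong. ]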
> Signed-off-by: Wei Yang
> ---
>  mm/page_alloc.c | 52 ++++++++++++++++++++++++++-----------------------
>  1 file changed, 28 insertions(+), 24 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a43b9d2482f2..a7b793c739fc 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -609,7 +609,7 @@ static inline int __maybe_unused bad_range(struct zone *zone, struct page *page)
>  }
>  #endif
>
> -static void bad_page(struct page *page, const char *reason,
> +static void bad_page(struct page *page, int nr, const char **reason,
>                 unsigned long bad_flags)
>  {
>         static unsigned long resume;
> @@ -638,7 +638,7 @@ static void bad_page(struct page *page, const char *reason,
>
>         pr_alert("BUG: Bad page state in process %s pfn:%05lx\n",
>                 current->comm, page_to_pfn(page));
> -       __dump_page(page, 1, &reason);
> +       __dump_page(page, nr, reason);
>         bad_flags &= page->flags;
>         if (bad_flags)
>                 pr_alert("bad because of flags: %#lx(%pGp)\n",
> @@ -1027,27 +1027,25 @@ static inline bool page_expected_state(struct page *page,
>
>  static void free_pages_check_bad(struct page *page)
>  {
> -       const char *bad_reason;
> -       unsigned long bad_flags;
> -
> -       bad_reason = NULL;
> -       bad_flags = 0;
> +       const char *bad_reason[5];
> +       unsigned long bad_flags = 0;
> +       int nr = 0;
>
>         if (unlikely(atomic_read(&page->_mapcount) != -1))
> -               bad_reason = "nonzero mapcount";
> +               bad_reason[nr++] = "nonzero mapcount";
>         if (unlikely(page->mapping != NULL))
> -               bad_reason = "non-NULL mapping";
> +               bad_reason[nr++] = "non-NULL mapping";
>         if (unlikely(page_ref_count(page) != 0))
> -               bad_reason = "nonzero _refcount";
> +               bad_reason[nr++] = "nonzero _refcount";
>         if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_FREE)) {
> -               bad_reason = "PAGE_FLAGS_CHECK_AT_FREE flag(s) set";
> +               bad_reason[nr++] = "PAGE_FLAGS_CHECK_AT_FREE flag(s) set";
>                 bad_flags = PAGE_FLAGS_CHECK_AT_FREE;
>         }
>  #ifdef CONFIG_MEMCG
>         if (unlikely(page->mem_cgroup))
> -               bad_reason = "page still charged to cgroup";
> +               bad_reason[nr++] = "page still charged to cgroup";
>  #endif
> -       bad_page(page, bad_reason, bad_flags);
> +       bad_page(page, nr, bad_reason, bad_flags);
>  }
>
>  static inline int free_pages_check(struct page *page)
> @@ -1062,6 +1060,7 @@ static inline int free_pages_check(struct page *page)
>
>  static int free_tail_pages_check(struct page *head_page, struct page *page)
>  {
> +       const char *reason;
>         int ret = 1;
>
>         /*
> @@ -1078,7 +1077,8 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
>         case 1:
>                 /* the first tail page: ->mapping may be compound_mapcount() */
>                 if (unlikely(compound_mapcount(page))) {
> -                       bad_page(page, "nonzero compound_mapcount", 0);
> +                       reason = "nonzero compound_mapcount";
> +                       bad_page(page, 1, &reason, 0);
>                         goto out;
>                 }
>                 break;
> @@ -1090,17 +1090,20 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
>                 break;
>         default:
>                 if (page->mapping != TAIL_MAPPING) {
> -                       bad_page(page, "corrupted mapping in tail page", 0);
> +                       reason = "corrupted mapping in tail page";
> +                       bad_page(page, 1, &reason, 0);
>                         goto out;
>                 }
>                 break;
>         }
>         if (unlikely(!PageTail(page))) {
> -               bad_page(page, "PageTail not set", 0);
> +               reason = "PageTail not set";
> +               bad_page(page, 1, &reason, 0);
>                 goto out;
>         }
>         if (unlikely(compound_head(page) != head_page)) {
> -               bad_page(page, "compound_head not consistent", 0);
> +               reason = "compound_head not consistent";
> +               bad_page(page, 1, &reason, 0);
>                 goto out;
>         }
>         ret = 0;
> @@ -2041,29 +2044,30 @@ static inline void expand(struct zone *zone, struct page *page,
>
>  static void check_new_page_bad(struct page *page)
>  {
> -       const char *bad_reason = NULL;
> +       const char *bad_reason[5];
>         unsigned long bad_flags = 0;
> +       int nr = 0;
>
>         if (unlikely(atomic_read(&page->_mapcount) != -1))
> -               bad_reason = "nonzero mapcount";
> +               bad_reason[nr++] = "nonzero mapcount";
>         if (unlikely(page->mapping != NULL))
> -               bad_reason = "non-NULL mapping";
> +               bad_reason[nr++] = "non-NULL mapping";
>         if (unlikely(page_ref_count(page) != 0))
> -               bad_reason = "nonzero _refcount";
> +               bad_reason[nr++] = "nonzero _refcount";
>         if (unlikely(page->flags & __PG_HWPOISON)) {
>                 /* Don't complain about hwpoisoned pages */
>                 page_mapcount_reset(page); /* remove PageBuddy */
>                 return;
>         }
>         if (unlikely(page->flags & PAGE_FLAGS_CHECK_AT_PREP)) {
> -               bad_reason = "PAGE_FLAGS_CHECK_AT_PREP flag set";
> +               bad_reason[nr++] = "PAGE_FLAGS_CHECK_AT_PREP flag set";
>                 bad_flags = PAGE_FLAGS_CHECK_AT_PREP;
>         }
>  #ifdef CONFIG_MEMCG
>         if (unlikely(page->mem_cgroup))
> -               bad_reason = "page still charged to cgroup";
> +               bad_reason[nr++] = "page still charged to cgroup";
>  #endif
> -       bad_page(page, bad_reason, bad_flags);
> +       bad_page(page, 1, bad_reason, bad_flags);
>  }
>
>  /*
> --
> 2.17.1
>

--
Michal Hocko
SUSE Labs