Date: Wed, 11 May 2022 15:23:03 +0100
From: Mel Gorman <mgorman@suse.de>
To: Wonhyuk Yang <vvghjk1234@gmail.com>
Cc: Steven Rostedt, Ingo Molnar, Andrew Morton, Baik Song An,
	Hong Yeon Kim, Taeung Song, linuxgeek@linuxgeek.io,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH] mm/page_alloc: Fix tracepoint mm_page_alloc_zone_locked()
Message-ID: <20220511142303.GN20579@suse.de>
References: <20220511081207.132034-1-vvghjk1234@gmail.com>
In-Reply-To: <20220511081207.132034-1-vvghjk1234@gmail.com>

On Wed, May 11, 2022 at 05:12:07PM +0900, Wonhyuk Yang wrote:
> Currently, the tracepoint mm_page_alloc_zone_locked() doesn't show
> correct information.
>
> First, when alloc_flags has ALLOC_HARDER/ALLOC_CMA, a page can be
> allocated from MIGRATE_HIGHATOMIC/MIGRATE_CMA. Nevertheless, the
> tracepoint uses the requested migration type, not MIGRATE_HIGHATOMIC
> or MIGRATE_CMA.
>
> Second, since commit 44042b4498728 ("mm/page_alloc: allow high-order
> pages to be stored on the per-cpu lists"), the percpu list can store
> high-order pages, but the tracepoint decides whether an allocation is
> a refill of the percpu list by comparing the requested order with 0.
>
> To handle these problems, use the migration type cached by
> get_pcppage_migratetype() instead of the requested migration type.
> Then, make mm_page_alloc_zone_locked() be called from only two
> contexts (rmqueue_bulk, rmqueue). With a new argument called
> percpu_refill, it can correctly show whether this is a refill of the
> percpu list.
>

You're definitely right that the current tracepoint is broken. I got
momentarily confused because HIGHATOMIC and CMA are not stored on PCP
lists even though they are a pageblock migrate type. Superficially,
calling get_pcppage_migratetype() on a page that cannot be a PCP page
seems silly, but in the context of this patch it happens to work
because the page was isolated with __rmqueue_smallest(), which sets the
PCP type even if the page is not going to a PCP list (a sketch of those
helpers follows the diff below).

The original intent of that tracepoint was to trace when pages were
removed from the buddy list. That would suggest this untested patch on
top of yours as a simplification:

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0351808322ba..66a70b898130 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2476,6 +2476,8 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 		del_page_from_free_list(page, zone, current_order);
 		expand(zone, page, order, current_order, migratetype);
 		set_pcppage_migratetype(page, migratetype);
+		trace_mm_page_alloc_zone_locked(page, order, migratetype,
+				pcp_allowed_order(order) && migratetype < MIGRATE_PCPTYPES);
 		return page;
 	}
 
@@ -3025,7 +3027,6 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 			int migratetype, unsigned int alloc_flags)
 {
 	int i, allocated = 0;
-	int mt;
 
 	/*
 	 * local_lock_irq held so equivalent to spin_lock_irqsave for
@@ -3053,9 +3054,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 		 */
 		list_add_tail(&page->lru, list);
 		allocated++;
-		mt = get_pcppage_migratetype(page);
-		trace_mm_page_alloc_zone_locked(page, order, mt, true);
-		if (is_migrate_cma(mt))
+		if (is_migrate_cma(get_pcppage_migratetype(page)))
 			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
 					      -(1 << order));
 	}
@@ -3704,7 +3703,6 @@ struct page *rmqueue(struct zone *preferred_zone,
 {
 	unsigned long flags;
 	struct page *page;
-	int mt;
 
 	if (likely(pcp_allowed_order(order))) {
 		/*
@@ -3734,17 +3732,15 @@ struct page *rmqueue(struct zone *preferred_zone,
 		 * reserved for high-order atomic allocation, so order-0
 		 * request should skip it.
 		 */
-		if (order > 0 && alloc_flags & ALLOC_HARDER) {
+		if (order > 0 && alloc_flags & ALLOC_HARDER)
 			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
-		}
 
 		if (!page) {
 			page = __rmqueue(zone, order, migratetype, alloc_flags);
 			if (!page)
 				goto failed;
 		}
-		mt = get_pcppage_migratetype(page);
-		trace_mm_page_alloc_zone_locked(page, order, mt, false);
-		__mod_zone_freepage_state(zone, -(1 << order), mt);
+		__mod_zone_freepage_state(zone, -(1 << order),
+					  get_pcppage_migratetype(page));
 
 		spin_unlock_irqrestore(&zone->lock, flags);
 	} while (check_new_pages(page, order));
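
For anyone reading along, the reason the cached value can be trusted
here is that __rmqueue_smallest() stashes the migratetype on the page
itself at isolation time and get_pcppage_migratetype() simply reads it
back. A minimal sketch of those helpers, paraphrased from the
mm/page_alloc.c of this era rather than quoted verbatim, so treat it as
illustrative:

/*
 * Illustrative sketch (not verbatim source): the migratetype a page was
 * isolated with is cached in the otherwise-unused page->index field so
 * that later code (tracepoints, CMA accounting) can read it back
 * without re-deriving it from the pageblock bitmap. The cached value
 * can legitimately be outside MIGRATE_PCPTYPES (e.g. MIGRATE_HIGHATOMIC
 * or MIGRATE_CMA) even though such pages never sit on a PCP list.
 */
static inline void set_pcppage_migratetype(struct page *page, int migratetype)
{
	page->index = migratetype;
}

static inline int get_pcppage_migratetype(struct page *page)
{
	return page->index;
}

That is also why the trace_mm_page_alloc_zone_locked() call moved into
__rmqueue_smallest() can report the migratetype the page actually came
from rather than the one the caller requested.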