Subject: Re: [PATCH] mm/page_alloc: Add a reason for reserved pages in has_unmovable_pages()
To: Qian Cai
Cc: linux-mm@kvack.org, Andrew Morton, Michal Hocko, Vlastimil Babka, Oscar Salvador, Mel Gorman, Mike Rapoport, Dan Williams, Pavel Tatashin, linux-kernel@vger.kernel.org
From: Anshuman Khandual
Message-ID: <49fa7dea-00ac-155f-e7b7-eeca206556b5@arm.com>
Date: Thu, 3 Oct 2019 17:32:02 +0530
In-Reply-To: <983E7EA4-A022-448C-B11D-8C10441A2E07@lca.pw>
References: <983E7EA4-A022-448C-B11D-8C10441A2E07@lca.pw>

On 10/03/2019 05:20 PM, Qian Cai wrote:
>
>
>> On Oct 3, 2019, at 7:31 AM, Anshuman Khandual wrote:
>>
>> Ohh, never meant that the 'Reserved' bit is anything special here but it
>> is a reason to make a page unmovable unlike many other flags.
>
> But dump_page() is used everywhere, and it is better to reserve "reason"
> to indicate something more important rather than duplicating the page flags.
>
> Especially, it is trivial enough right now for developers to look at the
> page flags dumped from has_unmovable_pages() and figure out the exact
> branching in the code.
>

Would something like this be better? hugepage_migration_supported() has
some uncertainty depending on the platform and huge page size.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 15c2050c629b..8dbc86696515 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8175,7 +8175,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 	unsigned long found;
 	unsigned long iter = 0;
 	unsigned long pfn = page_to_pfn(page);
-	const char *reason = "unmovable page";
+	const char *reason;
 
 	/*
 	 * TODO we could make this much more efficient by not checking every
@@ -8194,7 +8194,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 		if (is_migrate_cma(migratetype))
 			return false;
 
-		reason = "CMA page";
+		reason = "Unmovable CMA page";
 		goto unmovable;
 	}
 
@@ -8206,8 +8206,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 
 		page = pfn_to_page(check);
 
-		if (PageReserved(page))
+		if (PageReserved(page)) {
+			reason = "Unmovable reserved page";
 			goto unmovable;
+		}
 
 		/*
 		 * If the zone is movable and we have ruled out all reserved
@@ -8226,8 +8228,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 			struct page *head = compound_head(page);
 			unsigned int skip_pages;
 
-			if (!hugepage_migration_supported(page_hstate(head)))
+			if (!hugepage_migration_supported(page_hstate(head))) {
+				reason = "Unmovable HugeTLB page";
 				goto unmovable;
+			}
 
 			skip_pages = compound_nr(head) - (page - head);
 			iter += skip_pages - 1;
@@ -8271,8 +8275,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 		 * is set to both of a memory hole page and a _used_ kernel
 		 * page at boot.
 		 */
-		if (found > count)
+		if (found > count) {
+			reason = "Unmovable non-LRU page";
 			goto unmovable;
+		}
 	}
 	return false;
 unmovable: