From: "Huang, Ying"
To: Khalid Aziz
Cc: akpm@linux-foundation.org, willy@infradead.org,
	steven.sistare@oracle.com, david@redhat.com,
	mgorman@techsingularity.net, baolin.wang@linux.alibaba.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, Khalid Aziz
Subject: Re: [PATCH v4] mm, compaction: Skip all non-migratable pages during scan
References: <20230525191507.160076-1-khalid.aziz@oracle.com>
Date: Mon, 29 May 2023 11:01:40 +0800
In-Reply-To: <20230525191507.160076-1-khalid.aziz@oracle.com> (Khalid Aziz's message of "Thu, 25 May 2023 13:15:07 -0600")
Message-ID: <87ttvvx2ln.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii

Khalid Aziz writes:

> Pages pinned in memory through extra refcounts can not be migrated.
> Currently, as isolate_migratepages_block() scans pages for
> compaction, it skips any pinned anonymous pages.  All non-migratable
> pages should be skipped, not just the anonymous pinned pages.  This
> patch adds a check for extra refcounts on a page to determine if the
> page can be migrated.  This was seen as a real issue on a customer
> workload where a large number of pages were pinned by vfio on the
> host and any attempts to allocate hugepages resulted in a significant
> amount of CPU time spent in either direct compaction or in kcompactd
> scanning vfio pinned pages over and over again that can not be
> migrated.
> These are the changes in relevant stats with this patch for a test
> run of this scenario:
>
>                            Before         After
> compact_migrate_scanned    329,798,858    370,984,387
> compact_free_scanned        40,478,406     25,843,262
> compact_isolated           135,470,452        777,235
> pgmigrate_success              544,255        507,325
> pgmigrate_fail             134,616,282             47
> kcompactd CPU time             5:12.81        0:12.28
>
> Before the patch, a large number of pages were isolated but most of
> them failed to migrate.
>
> Signed-off-by: Khalid Aziz
> Suggested-by: Steve Sistare
> Cc: Khalid Aziz
> ---
> v4:
> - Use existing folio_expected_refs() function (Suggested
>   by Huang, Ying)
> - Use folio functions
> - Take into account contig allocations when checking for
>   long term pinning and skip ZONE_MOVABLE and
>   MIGRATE_CMA type pages (Suggested by David Hildenbrand)
> - Use folio version of total_mapcount() instead of
>   page_mapcount() (Suggested by Baolin Wang)
>
> v3:
> - Account for extra ref added by get_page_unless_zero() earlier
>   in isolate_migratepages_block() (Suggested by Huang, Ying)
> - Clean up computation of extra refs to be consistent
>   (Suggested by Huang, Ying)
>
> v2:
> - Update comments in the code (Suggested by Andrew)
> - Use PagePrivate() instead of page_has_private() (Suggested
>   by Matthew)
> - Pass mapping to page_has_extrarefs() (Suggested by Matthew)
> - Use page_ref_count() (Suggested by Matthew)
> - Rename is_pinned_page() to reflect its function more
>   accurately (Suggested by Matthew)
>
>  include/linux/migrate.h | 16 +++++++++++++++
>  mm/compaction.c         | 44 +++++++++++++++++++++++++++++++++++++----
>  mm/migrate.c            | 14 --------------
>  3 files changed, 56 insertions(+), 18 deletions(-)
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 6241a1596a75..4f59e15eae99 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -141,6 +141,22 @@ const struct movable_operations *page_movable_ops(struct page *page)
>  		((unsigned long)page->mapping - PAGE_MAPPING_MOVABLE);
>  }
>
> +static inline
> +int folio_expected_refs(struct address_space *mapping,
> +			struct folio *folio)

I don't think it's necessary to make this function inline.  It isn't
called in a hot path.

> +{
> +	int refs = 1;
> +
> +	if (!mapping)
> +		return refs;
> +
> +	refs += folio_nr_pages(folio);
> +	if (folio_test_private(folio))
> +		refs++;
> +
> +	return refs;
> +}
> +
>  #ifdef CONFIG_NUMA_BALANCING
>  int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>  			   int node);
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 5a9501e0ae01..b548e05f0349 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -764,6 +764,42 @@ static bool too_many_isolated(pg_data_t *pgdat)
>  	return too_many;
>  }
>
> +/*
> + * Check if this base page should be skipped from isolation because
> + * it has extra refcounts that will prevent it from being migrated.
> + * This code is inspired by similar code in migrate_vma_check_page(),
> + * can_split_folio() and folio_migrate_mapping()
> + */
> +static inline bool page_has_extra_refs(struct page *page,
> +					struct address_space *mapping)
> +{
> +	unsigned long extra_refs;

s/extra_refs/expected_refs/ ?

> +	struct folio *folio;
> +
> +	/*
> +	 * Skip this check for pages in ZONE_MOVABLE or MIGRATE_CMA
> +	 * pages that can not be long term pinned
> +	 */
> +	if (is_zone_movable_page(page) || is_migrate_cma_page(page))
> +		return false;

I suggest moving these 2 checks out to the caller, before this function
is called.  Or change the name of the function.
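
For example, something like the following untested sketch in
isolate_migratepages_block() (reusing only names that are already in
this patch, just to illustrate the idea):

		/*
		 * ZONE_MOVABLE and MIGRATE_CMA pages can not be long term
		 * pinned, so only check the refcount of other pages.
		 */
		mapping = page_mapping(page);
		if (!cc->alloc_contig &&
		    !is_zone_movable_page(page) && !is_migrate_cma_page(page) &&
		    page_has_extra_refs(page, mapping))
			goto isolate_fail_put;

Then page_has_extra_refs() would only check the extra references, which
matches its name.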

> +
> +	folio = page_folio(page);
> +
> +	/*
> +	 * caller holds a ref already from get_page_unless_zero()
> +	 * which is accounted for in folio_expected_refs()
> +	 */
> +	extra_refs = folio_expected_refs(mapping, folio);
> +
> +	/*
> +	 * This is an admittedly racy check but good enough to determine
> +	 * if a page is pinned and can not be migrated
> +	 */
> +	if ((folio_ref_count(folio) - extra_refs) > folio_mapcount(folio))
> +		return true;
> +	return false;
> +}
> +
>  /**
>   * isolate_migratepages_block() - isolate all migrate-able pages within
>   *				  a single pageblock
> @@ -992,12 +1028,12 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  			goto isolate_fail;
>
>  		/*
> -		 * Migration will fail if an anonymous page is pinned in memory,
> -		 * so avoid taking lru_lock and isolating it unnecessarily in an
> -		 * admittedly racy check.
> +		 * Migration will fail if a page has extra refcounts
> +		 * from long term pinning preventing it from migrating,
> +		 * so avoid taking lru_lock and isolating it unnecessarily.
>  		 */
>  		mapping = page_mapping(page);
> -		if (!mapping && (page_count(page) - 1) > total_mapcount(page))
> +		if (!cc->alloc_contig && page_has_extra_refs(page, mapping))
>  			goto isolate_fail_put;
>
>  		/*
> diff --git a/mm/migrate.c b/mm/migrate.c
> index db3f154446af..a2f3e5834996 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -385,20 +385,6 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
>  }
>  #endif
>
> -static int folio_expected_refs(struct address_space *mapping,
> -				struct folio *folio)
> -{
> -	int refs = 1;
> -	if (!mapping)
> -		return refs;
> -
> -	refs += folio_nr_pages(folio);
> -	if (folio_test_private(folio))
> -		refs++;
> -
> -	return refs;
> -}
> -
>  /*
>   * Replace the page in the mapping.
>   *

Best Regards,
Huang, Ying