From: David Hildenbrand
Organization: Red Hat
Date: Tue, 22 Mar 2022 17:42:41 +0100
To: Zi Yan
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, Vlastimil Babka, Mel Gorman,
 Eric Ren, Mike Rapoport, Oscar Salvador, Christophe Leroy
Subject: Re: [PATCH v8 2/5] mm: page_isolation: check specified range for
 unmovable pages
In-Reply-To: <3379379B-489B-460F-8B01-9A1D584A5036@nvidia.com>
References: <20220317153733.2171277-1-zi.yan@sent.com>
 <20220317153733.2171277-3-zi.yan@sent.com>
 <44a512ba-1707-d9c7-7df3-b81af9b5f0fb@redhat.com>
 <3379379B-489B-460F-8B01-9A1D584A5036@nvidia.com>

On 21.03.22 19:23, Zi Yan wrote:
> On 21 Mar 2022, at 13:30, David Hildenbrand wrote:
>
>> On 17.03.22 16:37, Zi Yan wrote:
>>> From: Zi Yan
>>>
>>> Enable set_migratetype_isolate() to check the specified sub-range for
>>> unmovable pages during isolation. Page isolation is done at
>>> max(MAX_ORDER_NR_PAGES, pageblock_nr_pages) granularity, but not all
>>> pages within that granularity are intended to be isolated. For example,
>>> alloc_contig_range(), which uses page isolation, allows ranges without
>>> alignment. This commit makes the unmovable page check look only at the
>>> pages of interest, so that page isolation can succeed for any
>>> non-overlapping ranges.
>>>
>>> Signed-off-by: Zi Yan
>>> ---
>>>  include/linux/page-isolation.h | 10 +++++
>>>  mm/page_alloc.c                | 13 +------
>>>  mm/page_isolation.c            | 69 ++++++++++++++++++++--------------
>>>  3 files changed, 51 insertions(+), 41 deletions(-)
>>>
>>> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
>>> index e14eddf6741a..eb4a208fe907 100644
>>> --- a/include/linux/page-isolation.h
>>> +++ b/include/linux/page-isolation.h
>>> @@ -15,6 +15,16 @@ static inline bool is_migrate_isolate(int migratetype)
>>>  {
>>>  	return migratetype == MIGRATE_ISOLATE;
>>>  }
>>> +static inline unsigned long pfn_max_align_down(unsigned long pfn)
>>> +{
>>> +	return ALIGN_DOWN(pfn, MAX_ORDER_NR_PAGES);
>>> +}
>>> +
>>> +static inline unsigned long pfn_max_align_up(unsigned long pfn)
>>> +{
>>> +	return ALIGN(pfn, MAX_ORDER_NR_PAGES);
>>> +}
>>> +
>>>  #else
>>>  static inline bool has_isolate_pageblock(struct zone *zone)
>>>  {
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index 6de57d058d3d..680580a40a35 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -8937,16 +8937,6 @@ void *__init alloc_large_system_hash(const char *tablename,
>>>  }
>>>
>>>  #ifdef CONFIG_CONTIG_ALLOC
>>> -static unsigned long pfn_max_align_down(unsigned long pfn)
>>> -{
>>> -	return ALIGN_DOWN(pfn, MAX_ORDER_NR_PAGES);
>>> -}
>>> -
>>> -static unsigned long pfn_max_align_up(unsigned long pfn)
>>> -{
>>> -	return ALIGN(pfn, MAX_ORDER_NR_PAGES);
>>> -}
>>> -
>>>  #if defined(CONFIG_DYNAMIC_DEBUG) || \
>>>  	(defined(CONFIG_DYNAMIC_DEBUG_CORE) && defined(DYNAMIC_DEBUG_MODULE))
>>>  /* Usage: See admin-guide/dynamic-debug-howto.rst */
>>> @@ -9091,8 +9081,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
>>>  	 * put back to page allocator so that buddy can use them.
>>>  	 */
>>>
>>> -	ret = start_isolate_page_range(pfn_max_align_down(start),
>>> -				pfn_max_align_up(end), migratetype, 0);
>>> +	ret = start_isolate_page_range(start, end, migratetype, 0);
>>>  	if (ret)
>>>  		return ret;
>>
>> Shouldn't we similarly adjust undo_isolate_page_range()? IOW, all users
>> of pfn_max_align_down()/pfn_max_align_up() would be gone from that file
>> and you can move these defines into mm/page_isolation.c instead of
>> include/linux/page-isolation.h?
>
> undo_isolate_page_range() faces a much simpler situation, just needing
> to unset the migratetype. We can just pass a pageblock_nr_pages-aligned
> range to it. For start_isolate_page_range(), start and end are also used
> by has_unmovable_pages() for precise unmovable page identification, so
> they cannot be pageblock_nr_pages aligned. But for readability and
> symmetry, yes, I can change undo_isolate_page_range() too.

Yeah, we should call both with the same range, and any extension of the
range should be handled internally.

I thought about some corner cases, especially once we relax some (CMA)
alignment thingies -- then, the CMA area might be placed at weird
locations. I haven't checked to which degree they apply, but we should
certainly keep them in mind whenever we're extending the isolation range.

We can assume that the contig range we're allocating

a) Belongs to a single zone
b) Does not contain any memory holes / mmap holes

Let's double check:

1) Different zones in the extended range

... ZONE A    ][ ZONE B ...
[ Pageblock X ][ Pageblock Y ][ Pageblock Z ]
[        MAX_ORDER - 1       ]

We can never create a higher-order page between X and Y, because they
are in different zones. Most probably we should *not* extend the range
to cover pageblock X in case the zones don't match.
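Something like the following completely untested sketch is what I have
in mind for the start of the range (the helper name and the fallback to
the zone start are made up, just to illustrate the zone check):

static unsigned long isolate_range_start(unsigned long start_pfn)
{
	unsigned long aligned_pfn = pfn_max_align_down(start_pfn);
	struct zone *zone = page_zone(pfn_to_page(start_pfn));

	/*
	 * Pageblock X in the picture above: the MAX_ORDER-aligned pfn
	 * falls outside our zone, so don't extend across the zone
	 * boundary; clamp to the zone start instead.
	 */
	if (!zone_spans_pfn(zone, aligned_pfn))
		return max(aligned_pfn, zone->zone_start_pfn);

	return aligned_pfn;
}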
The same consideration applies to the end of the range, when extending
the isolation range.

But I wonder if we can have such a zone layout. At least
mm/page_alloc.c:find_zone_movable_pfns_for_nodes() makes sure to always
align the start of ZONE_MOVABLE to MAX_ORDER_NR_PAGES. I hope it applies
to all other zones as well? :/

Anyhow, it should be easy to check when isolating/un-isolating: only
conditionally extend the range if the zones of both pageblocks match.

When eventually growing MAX_ORDER_NR_PAGES further, could we be in
trouble because we could suddenly span multiple zones with a single
MAX_ORDER - 1 page? Then we'd have to handle that, I guess.

2) mmap holes

I think that's already covered by the existing __first_valid_page()
handling.

So, I feel like we might have to tackle the zones issue, especially when
extending MAX_ORDER_NR_PAGES?

-- 
Thanks,

David / dhildenb
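For the end of the range, the counterpart of the sketch above could look
like this (equally untested; the helper name is made up, and zone_end_pfn()
is only one possible clamp):

static unsigned long isolate_range_end(unsigned long end_pfn)
{
	unsigned long aligned_pfn = pfn_max_align_up(end_pfn);
	struct zone *zone = page_zone(pfn_to_page(end_pfn - 1));

	/*
	 * Don't extend past the zone when rounding up to MAX_ORDER
	 * granularity; stop at the end of the zone instead.
	 */
	if (!zone_spans_pfn(zone, aligned_pfn - 1))
		return min(aligned_pfn, zone_end_pfn(zone));

	return aligned_pfn;
}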