From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
Date: Tue, 12 Apr 2022 15:10:12 +0200
To: Zi Yan, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
    Vlastimil Babka, Mel Gorman, Eric Ren, Mike Rapoport, Oscar Salvador,
    Christophe Leroy
Subject: Re: [PATCH v10 2/5] mm: page_isolation: check specified range for unmovable pages
In-Reply-To: <20220406151858.3149821-3-zi.yan@sent.com>
References: <20220406151858.3149821-1-zi.yan@sent.com>
 <20220406151858.3149821-3-zi.yan@sent.com>

On 06.04.22 17:18, Zi Yan wrote:
> From: Zi Yan
> 
> Enable set_migratetype_isolate() to check the specified sub-range for
> unmovable pages during isolation. Page isolation is done at
> MAX_ORDER_NR_PAGES granularity, but not all pages within that
> granularity are intended to be isolated. For example,
> alloc_contig_range(), which uses page isolation, allows ranges without
> alignment. This commit makes the unmovable-page check look only at
> the pages of interest, so that page isolation can succeed for any
> non-overlapping range.
> 
> Signed-off-by: Zi Yan
> ---

[...]

>  /*
> - * This function checks whether pageblock includes unmovable pages or not.
> + * This function checks whether the range [start_pfn, end_pfn) includes
> + * unmovable pages or not. The range must fall into a single pageblock and
> + * consequently belong to a single zone.
>   *
>   * PageLRU check without isolation or lru_lock could race so that
>   * MIGRATE_MOVABLE block might include unmovable pages. And __PageMovable
> @@ -28,12 +30,14 @@
>   * cannot get removed (e.g., via memory unplug) concurrently.
>   *
>   */
> -static struct page *has_unmovable_pages(struct zone *zone, struct page *page,
> -					int migratetype, int flags)
> +static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long end_pfn,
> +					int migratetype, int flags)
>  {
> -	unsigned long iter = 0;
> -	unsigned long pfn = page_to_pfn(page);
> -	unsigned long offset = pfn % pageblock_nr_pages;
> +	unsigned long pfn = start_pfn;
> +	struct page *page = pfn_to_page(pfn);

Just do

	struct page *page = pfn_to_page(start_pfn);
	struct zone *zone = page_zone(page);

here. There is no need to look up the zone again in the loop: as the
comment you added documents, the range "must ... belong to a single
zone".

Then, there is also no need to initialize "pfn" here; initializing it
in the loop header is sufficient.
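Something like the following is what I have in mind (completely
untested, just to sketch the idea; the elided checks stay exactly as in
your patch):

static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long end_pfn,
					int migratetype, int flags)
{
	/* The range is within a single pageblock, thus a single zone. */
	struct page *page = pfn_to_page(start_pfn);
	struct zone *zone = page_zone(page);
	unsigned long pfn;

	VM_BUG_ON(ALIGN_DOWN(start_pfn, pageblock_nr_pages) !=
		  ALIGN_DOWN(end_pfn - 1, pageblock_nr_pages));

	if (is_migrate_cma_page(page)) {
		/* ... unchanged ... */
	}

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		page = pfn_to_page(pfn);

		/* ... the existing checks, reusing "zone" from above ... */
	}
	return NULL;
}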
> +
> +	VM_BUG_ON(ALIGN_DOWN(start_pfn, pageblock_nr_pages) !=
> +		  ALIGN_DOWN(end_pfn - 1, pageblock_nr_pages));
>
>  	if (is_migrate_cma_page(page)) {
>  		/*
> @@ -47,8 +51,11 @@ static struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>  		return page;
>  	}
>
> -	for (; iter < pageblock_nr_pages - offset; iter++) {
> -		page = pfn_to_page(pfn + iter);
> +	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
> +		struct zone *zone;
> +
> +		page = pfn_to_page(pfn);
> +		zone = page_zone(page);
>
>  		/*
>  		 * Both, bootmem allocations and memory holes are marked
> @@ -85,7 +92,7 @@ static struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>  			}
>
>  			skip_pages = compound_nr(head) - (page - head);
> -			iter += skip_pages - 1;
> +			pfn += skip_pages - 1;
>  			continue;
>  		}
>
>  		/*
> @@ -97,7 +104,7 @@ static struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>  		 */
>  		if (!page_ref_count(page)) {
>  			if (PageBuddy(page))
> -				iter += (1 << buddy_order(page)) - 1;
> +				pfn += (1 << buddy_order(page)) - 1;
>  			continue;
>  		}
>
> @@ -134,11 +141,18 @@ static struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>  	return NULL;
>  }
>
> -static int set_migratetype_isolate(struct page *page, int migratetype, int isol_flags)
> +/*
> + * This function sets the pageblock migratetype to isolate if no unmovable
> + * page is present in [start_pfn, end_pfn). The pageblock must intersect with
> + * [start_pfn, end_pfn).
> + */
> +static int set_migratetype_isolate(struct page *page, int migratetype, int isol_flags,
> +			unsigned long start_pfn, unsigned long end_pfn)

I think we might be able to do better, eventually not passing start_pfn
at all. Hmm.

I think we want to pull out the
start_isolate_page_range()/undo_isolate_page_range() interface change
into a separate patch.

Let me try to give it a shot, I'll try hacking something up real quick
to see if we can do better.

-- 
Thanks,

David / dhildenb