From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <63a8f509-6acb-48b5-1aa1-c278deaaa719@linux.alibaba.com>
Date: Mon, 7 Nov 2022 15:26:18 +0800
Subject: Re: [PATCH 1/2] migrate: convert unmap_and_move() to use folios
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Huang Ying <ying.huang@intel.com>, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Zi Yan, Yang Shi,
 Oscar Salvador, Matthew Wilcox
References: <20221104083020.155835-1-ying.huang@intel.com>
 <20221104083020.155835-2-ying.huang@intel.com>
In-Reply-To: <20221104083020.155835-2-ying.huang@intel.com>
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Thunderbird/102.4.1
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 11/4/2022 4:30 PM, Huang Ying wrote:
> Quite straightforward, the page functions are converted to
> corresponding folio functions. Same for comments.

LGTM. Please feel free to add:

Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Andrew Morton
> Cc: Zi Yan
> Cc: Yang Shi
> Cc: Baolin Wang
> Cc: Oscar Salvador
> Cc: Matthew Wilcox
> ---
>   mm/migrate.c | 54 ++++++++++++++++++++++++++--------------------------
>   1 file changed, 27 insertions(+), 27 deletions(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index dff333593a8a..f6dd749dd2f8 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1150,79 +1150,79 @@ static int __unmap_and_move(struct folio *src, struct folio *dst,
>   }
>   
>   /*
> - * Obtain the lock on page, remove all ptes and migrate the page
> - * to the newly allocated page in newpage.
> + * Obtain the lock on folio, remove all ptes and migrate the folio
> + * to the newly allocated folio in dst.
>    */
>   static int unmap_and_move(new_page_t get_new_page,
>   			  free_page_t put_new_page,
> -			  unsigned long private, struct page *page,
> +			  unsigned long private, struct folio *src,
>   			  int force, enum migrate_mode mode,
>   			  enum migrate_reason reason,
>   			  struct list_head *ret)
>   {
> -	struct folio *dst, *src = page_folio(page);
> +	struct folio *dst;
>   	int rc = MIGRATEPAGE_SUCCESS;
>   	struct page *newpage = NULL;
>   
> -	if (!thp_migration_supported() && PageTransHuge(page))
> +	if (!thp_migration_supported() && folio_test_transhuge(src))
>   		return -ENOSYS;
>   
> -	if (page_count(page) == 1) {
> -		/* Page was freed from under us. So we are done. */
> -		ClearPageActive(page);
> -		ClearPageUnevictable(page);
> +	if (folio_ref_count(src) == 1) {
> +		/* Folio was freed from under us. So we are done. */
> +		folio_clear_active(src);
> +		folio_clear_unevictable(src);
>   		/* free_pages_prepare() will clear PG_isolated. */
>   		goto out;
>   	}
>   
> -	newpage = get_new_page(page, private);
> +	newpage = get_new_page(&src->page, private);
>   	if (!newpage)
>   		return -ENOMEM;
>   	dst = page_folio(newpage);
>   
> -	newpage->private = 0;
> +	dst->private = 0;
>   	rc = __unmap_and_move(src, dst, force, mode);
>   	if (rc == MIGRATEPAGE_SUCCESS)
> -		set_page_owner_migrate_reason(newpage, reason);
> +		set_page_owner_migrate_reason(&dst->page, reason);
>   
>   out:
>   	if (rc != -EAGAIN) {
>   		/*
> -		 * A page that has been migrated has all references
> -		 * removed and will be freed. A page that has not been
> +		 * A folio that has been migrated has all references
> +		 * removed and will be freed. A folio that has not been
>   		 * migrated will have kept its references and be restored.
>   		 */
> -		list_del(&page->lru);
> +		list_del(&src->lru);
>   	}
>   
>   	/*
>   	 * If migration is successful, releases reference grabbed during
> -	 * isolation. Otherwise, restore the page to right list unless
> +	 * isolation. Otherwise, restore the folio to right list unless
>   	 * we want to retry.
>   	 */
>   	if (rc == MIGRATEPAGE_SUCCESS) {
>   		/*
> -		 * Compaction can migrate also non-LRU pages which are
> +		 * Compaction can migrate also non-LRU folios which are
>   		 * not accounted to NR_ISOLATED_*. They can be recognized
> -		 * as __PageMovable
> +		 * as __folio_test_movable
>   		 */
> -		if (likely(!__PageMovable(page)))
> -			mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
> -					page_is_file_lru(page), -thp_nr_pages(page));
> +		if (likely(!__folio_test_movable(src)))
> +			mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
> +					folio_is_file_lru(src), -folio_nr_pages(src));
>   
>   		if (reason != MR_MEMORY_FAILURE)
>   			/*
> -			 * We release the page in page_handle_poison.
> +			 * We release the folio in page_handle_poison.
>   			 */
> -			put_page(page);
> +			folio_put(src);
>   	} else {
>   		if (rc != -EAGAIN)
> -			list_add_tail(&page->lru, ret);
> +			list_add_tail(&src->lru, ret);
>   
>   		if (put_new_page)
> -			put_new_page(newpage, private);
> +			put_new_page(&dst->page, private);
>   		else
> -			put_page(newpage);
> +			folio_put(dst);
>   	}
>   
>   	return rc;
> @@ -1459,7 +1459,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>   						&ret_pages);
>   		else
>   			rc = unmap_and_move(get_new_page, put_new_page,
> -					private, page, pass > 2, mode,
> +					private, page_folio(page), pass > 2, mode,
>   					reason, &ret_pages);
>   		/*
>   		 * The rules are:
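
For reference, a minimal sketch (editor's illustration, not part of the
patch) of the page -> folio correspondence this conversion relies on. The
function name example_release_isolated() is hypothetical; the helpers
(page_folio(), folio_ref_count(), folio_clear_active(),
folio_clear_unevictable(), folio_put()) are the kernel counterparts used in
the hunks above:

#include <linux/mm.h>

/*
 * Sketch only: a caller that still holds a struct page, as
 * migrate_pages() does in the second hunk, enters the folio API
 * through page_folio() and then operates on the whole folio.
 */
static void example_release_isolated(struct page *page)
{
	/* page_folio() resolves a head or tail page to its folio. */
	struct folio *folio = page_folio(page);

	/*
	 * page_count()/ClearPageActive()/ClearPageUnevictable() become
	 * folio_ref_count()/folio_clear_active()/folio_clear_unevictable(),
	 * acting on the folio as a whole rather than a single page.
	 */
	if (folio_ref_count(folio) == 1) {
		folio_clear_active(folio);
		folio_clear_unevictable(folio);
		return;
	}

	/* put_page() becomes folio_put(): drop one reference on the folio. */
	folio_put(folio);
}

The patch applies essentially this substitution throughout unmap_and_move(),
passing page_folio(page) from migrate_pages() so the callee works purely in
folio terms.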