From: haoxin <xhao@linux.alibaba.com>
Date: Wed, 8 Feb 2023 01:11:50 +0800
Message-ID: <28b77814-efea-d5e5-100b-d96da72254ad@linux.alibaba.com>
Subject: Re: [PATCH -v4 4/9] migrate_pages: split unmap_and_move() to _unmap() and _move()
To: Huang Ying, Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Baolin Wang, Zi Yan, Yang Shi, Oscar Salvador, Matthew Wilcox, Bharata B Rao, Alistair Popple, Minchan Kim, Mike Kravetz, Hyeonggon Yoo <42.hyeyoo@gmail.com>
In-Reply-To: <20230206063313.635011-5-ying.huang@intel.com>
References: <20230206063313.635011-1-ying.huang@intel.com> <20230206063313.635011-5-ying.huang@intel.com>
On 2023/2/6 2:33 PM, Huang Ying wrote:

> This is a preparation patch to batch the folio unmapping and moving.
>
> In this patch, unmap_and_move() is split to migrate_folio_unmap() and
> migrate_folio_move().
> So, we can batch _unmap() and _move() in
> different loops later. To pass some information between unmap and
> move, the original unused dst->mapping and dst->private are used.
>
> Signed-off-by: "Huang, Ying"
> Reviewed-by: Baolin Wang
> Cc: Zi Yan
> Cc: Yang Shi
> Cc: Oscar Salvador
> Cc: Matthew Wilcox
> Cc: Bharata B Rao
> Cc: Alistair Popple
> Cc: haoxin
> Cc: Minchan Kim
> Cc: Mike Kravetz
> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> ---
>  include/linux/migrate.h |   1 +
>  mm/migrate.c            | 170 ++++++++++++++++++++++++++++++----------
>  2 files changed, 130 insertions(+), 41 deletions(-)
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 3ef77f52a4f0..7376074f2e1e 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -18,6 +18,7 @@ struct migration_target_control;
>   * - zero on page migration success;
>   */
>  #define MIGRATEPAGE_SUCCESS		0
> +#define MIGRATEPAGE_UNMAP		1
>
>  /**
>   * struct movable_operations - Driver page migration
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 9a667039c34c..0428449149f4 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1009,11 +1009,53 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
>  	return rc;
>  }
>
> -static int __unmap_and_move(struct folio *src, struct folio *dst,
> +/*
> + * To record some information during migration, we use some unused
> + * fields (mapping and private) of struct folio of the newly allocated
> + * destination folio. This is safe because nobody is using them
> + * except us.
> + */
> +static void __migrate_folio_record(struct folio *dst,
> +				   unsigned long page_was_mapped,
> +				   struct anon_vma *anon_vma)
> +{
> +	dst->mapping = (void *)anon_vma;
> +	dst->private = (void *)page_was_mapped;
> +}
> +
> +static void __migrate_folio_extract(struct folio *dst,
> +				    int *page_was_mappedp,
> +				    struct anon_vma **anon_vmap)
> +{
> +	*anon_vmap = (void *)dst->mapping;
> +	*page_was_mappedp = (unsigned long)dst->private;
> +	dst->mapping = NULL;
> +	dst->private = NULL;
> +}
> +
> +/* Cleanup src folio upon migration success */
> +static void migrate_folio_done(struct folio *src,
> +			       enum migrate_reason reason)
> +{
> +	/*
> +	 * Compaction can migrate also non-LRU pages which are
> +	 * not accounted to NR_ISOLATED_*. They can be recognized
> +	 * as __PageMovable
> +	 */
> +	if (likely(!__folio_test_movable(src)))
> +		mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
> +				    folio_is_file_lru(src), -folio_nr_pages(src));
> +
> +	if (reason != MR_MEMORY_FAILURE)
> +		/* We release the page in page_handle_poison. */
> +		folio_put(src);
> +}
> +
> +static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
>  				int force, enum migrate_mode mode)
>  {
>  	int rc = -EAGAIN;
> -	bool page_was_mapped = false;
> +	int page_was_mapped = 0;
>  	struct anon_vma *anon_vma = NULL;
>  	bool is_lru = !__PageMovable(&src->page);
>
> @@ -1089,8 +1131,8 @@ static int __unmap_and_move(struct folio *src, struct folio *dst,
>  		goto out_unlock;
>
>  	if (unlikely(!is_lru)) {
> -		rc = move_to_new_folio(dst, src, mode);
> -		goto out_unlock_both;
> +		__migrate_folio_record(dst, page_was_mapped, anon_vma);
> +		return MIGRATEPAGE_UNMAP;
>  	}
>
>  	/*
> @@ -1115,11 +1157,42 @@ static int __unmap_and_move(struct folio *src, struct folio *dst,
>  		VM_BUG_ON_FOLIO(folio_test_anon(src) &&
>  			       !folio_test_ksm(src) && !anon_vma, src);
>  		try_to_migrate(src, 0);
> -		page_was_mapped = true;
> +		page_was_mapped = 1;
>  	}
>
> -	if (!folio_mapped(src))
> -		rc = move_to_new_folio(dst, src, mode);
> +	if (!folio_mapped(src)) {
> +		__migrate_folio_record(dst, page_was_mapped, anon_vma);
> +		return MIGRATEPAGE_UNMAP;
> +	}
> +
> +	if (page_was_mapped)
> +		remove_migration_ptes(src, src, false);
> +
> +out_unlock_both:
> +	folio_unlock(dst);
> +out_unlock:
> +	/* Drop an anon_vma reference if we took one */
> +	if (anon_vma)
> +		put_anon_vma(anon_vma);
> +	folio_unlock(src);
> +out:
> +
> +	return rc;
> +}
> +
> +static int __migrate_folio_move(struct folio *src, struct folio *dst,
> +				enum migrate_mode mode)
> +{
> +	int rc;
> +	int page_was_mapped = 0;
> +	struct anon_vma *anon_vma = NULL;
> +	bool is_lru = !__PageMovable(&src->page);
> +
> +	__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
> +
> +	rc = move_to_new_folio(dst, src, mode);
> +	if (unlikely(!is_lru))
> +		goto out_unlock_both;
>
>  	/*
>  	 * When successful, push dst to LRU immediately: so that if it
> @@ -1142,12 +1215,10 @@ static int __unmap_and_move(struct folio *src, struct folio *dst,
>
>  out_unlock_both:
>  	folio_unlock(dst);
> -out_unlock:
>  	/* Drop an anon_vma reference if we took one */
>  	if (anon_vma)
>  		put_anon_vma(anon_vma);
>  	folio_unlock(src);
> -out:
>  	/*
>  	 * If migration is successful, decrease refcount of dst,
>  	 * which will not free the page because new page owner increased
> @@ -1159,19 +1230,15 @@ static int __unmap_and_move(struct folio *src, struct folio *dst,
>  	return rc;
>  }
>
> -/*
> - * Obtain the lock on folio, remove all ptes and migrate the folio
> - * to the newly allocated folio in dst.
> - */
> -static int unmap_and_move(new_page_t get_new_page,
> -			  free_page_t put_new_page,
> -			  unsigned long private, struct folio *src,
> -			  int force, enum migrate_mode mode,
> -			  enum migrate_reason reason,
> -			  struct list_head *ret)
> +/* Obtain the lock on page, remove all ptes. */
> +static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page,
> +			       unsigned long private, struct folio *src,
> +			       struct folio **dstp, int force,
> +			       enum migrate_mode mode, enum migrate_reason reason,
> +			       struct list_head *ret)
>  {
>  	struct folio *dst;
> -	int rc = MIGRATEPAGE_SUCCESS;
> +	int rc = MIGRATEPAGE_UNMAP;
>  	struct page *newpage = NULL;
>
>  	if (!thp_migration_supported() && folio_test_transhuge(src))
> @@ -1182,20 +1249,50 @@ static int unmap_and_move(new_page_t get_new_page,
>  		folio_clear_active(src);
>  		folio_clear_unevictable(src);
>  		/* free_pages_prepare() will clear PG_isolated. */
> -		goto out;
> +		list_del(&src->lru);
> +		migrate_folio_done(src, reason);
> +		return MIGRATEPAGE_SUCCESS;
>  	}
>
>  	newpage = get_new_page(&src->page, private);
>  	if (!newpage)
>  		return -ENOMEM;
>  	dst = page_folio(newpage);
> +	*dstp = dst;
>
>  	dst->private = NULL;
> -	rc = __unmap_and_move(src, dst, force, mode);
> +	rc = __migrate_folio_unmap(src, dst, force, mode);
> +	if (rc == MIGRATEPAGE_UNMAP)
> +		return rc;
> +
> +	/*
> +	 * A page that has not been migrated will have kept its
> +	 * references and be restored.
> +	 */
> +	/* restore the folio to right list. */
> +	if (rc != -EAGAIN)
> +		list_move_tail(&src->lru, ret);
> +
> +	if (put_new_page)
> +		put_new_page(&dst->page, private);
> +	else
> +		folio_put(dst);
> +
> +	return rc;
> +}
> +
> +/* Migrate the folio to the newly allocated folio in dst. */
> +static int migrate_folio_move(free_page_t put_new_page, unsigned long private,
> +			      struct folio *src, struct folio *dst,
> +			      enum migrate_mode mode, enum migrate_reason reason,
> +			      struct list_head *ret)
> +{
> +	int rc;
> +
> +	rc = __migrate_folio_move(src, dst, mode);
>  	if (rc == MIGRATEPAGE_SUCCESS)
>  		set_page_owner_migrate_reason(&dst->page, reason);
>
> -out:
>  	if (rc != -EAGAIN) {
>  		/*
>  		 * A folio that has been migrated has all references
> @@ -1211,20 +1308,7 @@ static int unmap_and_move(new_page_t get_new_page,
>  		 * we want to retry.
>  		 */
>  		if (rc == MIGRATEPAGE_SUCCESS) {
> -			/*
> -			 * Compaction can migrate also non-LRU folios which are
> -			 * not accounted to NR_ISOLATED_*. They can be recognized
> -			 * as __folio_test_movable
> -			 */
> -			if (likely(!__folio_test_movable(src)))
> -				mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
> -						folio_is_file_lru(src), -folio_nr_pages(src));
> -
> -			if (reason != MR_MEMORY_FAILURE)
> -				/*
> -				 * We release the folio in page_handle_poison.
> -				 */
> -				folio_put(src);
> +			migrate_folio_done(src, reason);
>  		} else {
>  			if (rc != -EAGAIN)
>  				list_add_tail(&src->lru, ret);
> @@ -1516,7 +1600,7 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>  	int pass = 0;
>  	bool is_large = false;
>  	bool is_thp = false;
> -	struct folio *folio, *folio2;
> +	struct folio *folio, *folio2, *dst = NULL;
>  	int rc, nr_pages;
>  	LIST_HEAD(split_folios);
>  	bool nosplit = (reason == MR_NUMA_MISPLACED);
> @@ -1543,9 +1627,13 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>
>  		cond_resched();
>
> -		rc = unmap_and_move(get_new_page, put_new_page,
> -				    private, folio, pass > 2, mode,
> -				    reason, ret_folios);
> +		rc = migrate_folio_unmap(get_new_page, put_new_page, private,
> +					 folio, &dst, pass > 2, mode,
> +					 reason, ret_folios);
> +		if (rc == MIGRATEPAGE_UNMAP)
> +			rc = migrate_folio_move(put_new_page, private,
> +						folio, dst, mode,
> +						reason, ret_folios);

How do we deal with the case where all the pages were unmapped successfully, but only part of the pages were moved successfully?

>  		/*
>  		 * The rules are:
>  		 * Success: folio will be freed