From: Barry Song
Date: Mon, 26 Dec 2022 22:40:49 +1300
Subject: Re: [PATCH v3 04/14] mm/rmap: Break COW PTE in rmap walking
To: Chih-En Lin
Cc: Andrew Morton, Qi Zheng, David Hildenbrand, Matthew Wilcox,
 Christophe Leroy, John Hubbard, Nadav Amit, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Steven Rostedt, Masami Hiramatsu, Peter Zijlstra,
 Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
 Jiri Olsa, Namhyung Kim, Yang Shi, Peter Xu, Zach O'Keefe,
 Liam R. Howlett, Alex Sierra, Xianting Tian, Colin Cross,
 Suren Baghdasaryan, Pasha Tatashin, Suleiman Souhlal, Brian Geffon,
 Yu Zhao, Tong Tiangen, Liu Shixin, Li kunyu, Anshuman Khandual,
 Vlastimil Babka, Hugh Dickins, Minchan Kim, Miaohe Lin, Gautam Menghani,
 Catalin Marinas, Mark Brown, Will Deacon, Eric W. Biederman,
 Thomas Gleixner, Sebastian Andrzej Siewior, Andy Lutomirski, Fenghua Yu,
 Barret Rhoden, Davidlohr Bueso, Jason A. Donenfeld, Dinglan Peng,
 Pedro Fonseca, Jim Huang, Huichun Feng
In-Reply-To: <20221220072743.3039060-5-shiyn.lin@gmail.com>
References: <20221220072743.3039060-1-shiyn.lin@gmail.com>
 <20221220072743.3039060-5-shiyn.lin@gmail.com>

On Tue, Dec 20, 2022 at 8:25 PM Chih-En Lin wrote:
>
> Some of the features (unmap, migrate, device exclusive, mkclean, etc)
> might modify the pte entry via rmap. Add a new page vma mapped walk
> flag, PVMW_BREAK_COW_PTE, to indicate the rmap walking to break COW PTE.
>
> Signed-off-by: Chih-En Lin
> ---
>  include/linux/rmap.h |  2 ++
>  mm/migrate.c         |  3 ++-
>  mm/page_vma_mapped.c |  2 ++
>  mm/rmap.c            | 12 +++++++-----
>  mm/vmscan.c          |  7 ++++++-
>  5 files changed, 19 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index bd3504d11b155..d0f07e5519736 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -368,6 +368,8 @@ int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
>  #define PVMW_SYNC              (1 << 0)
>  /* Look for migration entries rather than present PTEs */
>  #define PVMW_MIGRATION         (1 << 1)
> +/* Break COW-ed PTE during walking */
> +#define PVMW_BREAK_COW_PTE     (1 << 2)
>
>  struct page_vma_mapped_walk {
>         unsigned long pfn;
> diff --git a/mm/migrate.c b/mm/migrate.c
> index dff333593a8ae..a4be7e04c9b09 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -174,7 +174,8 @@ void putback_movable_pages(struct list_head *l)
>  static bool remove_migration_pte(struct folio *folio,
>                 struct vm_area_struct *vma, unsigned long addr, void *old)
>  {
> -       DEFINE_FOLIO_VMA_WALK(pvmw, old, vma, addr, PVMW_SYNC | PVMW_MIGRATION);
> +       DEFINE_FOLIO_VMA_WALK(pvmw, old, vma, addr,
> +                             PVMW_SYNC | PVMW_MIGRATION | PVMW_BREAK_COW_PTE);
>
>         while (page_vma_mapped_walk(&pvmw)) {
>                 rmap_t rmap_flags = RMAP_NONE;
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 93e13fc17d3cb..5dfc9236dc505 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -251,6 +251,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>                         step_forward(pvmw, PMD_SIZE);
>                         continue;
>                 }
> +               if (pvmw->flags & PVMW_BREAK_COW_PTE)
> +                       break_cow_pte(vma, pvmw->pmd, pvmw->address);
>                 if (!map_pte(pvmw))
>                         goto next_pte;
>  this_pte:
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 2ec925e5fa6a9..b1b7dcbd498be 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -807,7 +807,8 @@ static bool folio_referenced_one(struct folio *folio,
>                 struct vm_area_struct *vma, unsigned long address, void *arg)
>  {
>         struct folio_referenced_arg *pra = arg;
> -       DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
> +       /* it will clear the entry, so we should break COW PTE. */
> +       DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_BREAK_COW_PTE);

What do you mean by breaking COW PTE? In the memory reclamation case,
we are only checking and clearing the referenced bit in the PTE, so do
we really need to break COW?
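For reference, the only PTE write folio_referenced_one() does is the
test-and-clear of the accessed bit inside the pvmw loop, roughly like
the following (a simplified sketch of the mm/rmap.c logic, not the
exact upstream code):

        while (page_vma_mapped_walk(&pvmw)) {
                if (pvmw.pte) {
                        /* only the young/accessed bit is written here */
                        if (ptep_clear_flush_young_notify(vma, pvmw.address,
                                                          pvmw.pte))
                                referenced++;
                }
        }
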
>         int referenced = 0;
>
>         while (page_vma_mapped_walk(&pvmw)) {
> @@ -1012,7 +1013,8 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
>  static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
>                 unsigned long address, void *arg)
>  {
> -       DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
> +       DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address,
> +                             PVMW_SYNC | PVMW_BREAK_COW_PTE);
>         int *cleaned = arg;
>
>         *cleaned += page_vma_mkclean_one(&pvmw);
> @@ -1471,7 +1473,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>                 unsigned long address, void *arg)
>  {
>         struct mm_struct *mm = vma->vm_mm;
> -       DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
> +       DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_BREAK_COW_PTE);
>         pte_t pteval;
>         struct page *subpage;
>         bool anon_exclusive, ret = true;
> @@ -1842,7 +1844,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>                 unsigned long address, void *arg)
>  {
>         struct mm_struct *mm = vma->vm_mm;
> -       DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
> +       DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_BREAK_COW_PTE);
>         pte_t pteval;
>         struct page *subpage;
>         bool anon_exclusive, ret = true;
> @@ -2195,7 +2197,7 @@ static bool page_make_device_exclusive_one(struct folio *folio,
>                 struct vm_area_struct *vma, unsigned long address, void *priv)
>  {
>         struct mm_struct *mm = vma->vm_mm;
> -       DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
> +       DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_BREAK_COW_PTE);
>         struct make_exclusive_args *args = priv;
>         pte_t pteval;
>         struct page *subpage;
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 026199c047e0e..980d2056adfd1 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1781,6 +1781,10 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>                         }
>                 }
>
> +               /*
> +                * Break COW PTE since checking the reference
> +                * of folio might modify the PTE.
> +                */
>                 if (!ignore_references)
>                         references = folio_check_references(folio, sc);
>
> @@ -1864,7 +1868,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>
>                 /*
>                  * The folio is mapped into the page tables of one or more
> -                * processes. Try to unmap it here.
> +                * processes. Try to unmap it here. Also, since it will write
> +                * to the page tables, break COW PTE if they are.
>                  */
>                 if (folio_mapped(folio)) {
>                         enum ttu_flags flags = TTU_BATCH_FLUSH;
> --
> 2.37.3
>

Thanks
Barry