From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <26df9d6a-ff91-491e-9e50-6dd678acbd2d@huawei.com>
Date: Sun, 7 Apr 2024 10:08:48 +0800
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 2/3] x86/mm/pat: fix VM_PAT handling in COW mappings
References: <20240403212131.929421-1-david@redhat.com>
 <20240403212131.929421-3-david@redhat.com>
Content-Language: en-US
From: mawupeng <mawupeng1@huawei.com>
In-Reply-To: <20240403212131.929421-3-david@redhat.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On 2024/4/4 5:21, David Hildenbrand wrote:
> PAT handling won't do the right thing in COW mappings: the first PTE
> (or, in fact, all PTEs) can be replaced during write faults to point at
> anon folios. Reliably recovering the correct PFN and cachemode using
> follow_phys() from PTEs will not work in COW mappings.
>
> Using follow_phys(), we might just get the address+protection of the
> anon folio (which is very wrong), or fail on swap/nonswap entries,
> failing follow_phys() and triggering a WARN_ON_ONCE() in untrack_pfn()
> and track_pfn_copy(), not properly calling free_pfn_range().
>
> In free_pfn_range(), we either wouldn't call memtype_free() or would
> call it with the wrong range, possibly leaking memory.
>
> To fix that, let's update follow_phys() to refuse returning anon folios,
> and fallback to using the stored PFN inside vma->vm_pgoff for COW
> mappings if we run into that.
>
> We will now properly handle untrack_pfn() with COW mappings, where we
> don't need the cachemode. We'll have to fail fork()->track_pfn_copy() if
> the first page was replaced by an anon folio, though: we'd have to store
> the cachemode in the VMA to make this work, likely growing the VMA size.
>
> For now, let's keep it simple and let track_pfn_copy() just fail in that
> case: it would have failed in the past with swap/nonswap entries already,
> and it would have done the wrong thing with anon folios.
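
(Side note for anyone who has not seen the vma->vm_pgoff trick before:
for COW-able, i.e. MAP_PRIVATE, pfnmaps, remap_pfn_range_notrack() only
accepts whole-VMA remaps and stashes the starting PFN in vma->vm_pgoff,
which is exactly what the fallback described above recovers. A simplified
sketch of that bookkeeping, with a made-up helper name rather than the
literal kernel source; see mm/memory.c for the real thing:

static int stash_cow_pfn(struct vm_area_struct *vma, unsigned long addr,
                         unsigned long end, unsigned long pfn)
{
        if (is_cow_mapping(vma->vm_flags)) {
                /* A partial remap would make the stashed PFN meaningless. */
                if (addr != vma->vm_start || end != vma->vm_end)
                        return -EINVAL;
                /* Starting PFN, recoverable later without the page tables. */
                vma->vm_pgoff = pfn;
        }
        return 0;
}

That stash is why untrack_pfn() can still find the right physical range
even after every PTE of the mapping has been replaced by anon folios.)
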
>
> Simple reproducer to trigger the WARN_ON_ONCE() in untrack_pfn():
>
> <--- C reproducer --->
> #include <stdio.h>
> #include <sys/mman.h>
> #include <unistd.h>
> #include <liburing.h>
>
> int main(void)
> {
>         struct io_uring_params p = {};
>         int ring_fd;
>         size_t size;
>         char *map;
>
>         ring_fd = io_uring_setup(1, &p);
>         if (ring_fd < 0) {
>                 perror("io_uring_setup");
>                 return 1;
>         }
>         size = p.sq_off.array + p.sq_entries * sizeof(unsigned);
>
>         /* Map the submission queue ring MAP_PRIVATE */
>         map = mmap(0, size, PROT_READ | PROT_WRITE, MAP_PRIVATE,
>                    ring_fd, IORING_OFF_SQ_RING);
>         if (map == MAP_FAILED) {
>                 perror("mmap");
>                 return 1;
>         }
>
>         /* We have at least one page. Let's COW it. */
>         *map = 0;
>         pause();
>         return 0;
> }
> <--- C reproducer --->
>
> On a system with 16 GiB RAM and swap configured:
> # ./iouring &
> # memhog 16G
> # killall iouring
> [ 301.552930] ------------[ cut here ]------------
> [ 301.553285] WARNING: CPU: 7 PID: 1402 at arch/x86/mm/pat/memtype.c:1060 untrack_pfn+0xf4/0x100
> [ 301.553989] Modules linked in: binfmt_misc nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_g
> [ 301.558232] CPU: 7 PID: 1402 Comm: iouring Not tainted 6.7.5-100.fc38.x86_64 #1
> [ 301.558772] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebu4
> [ 301.559569] RIP: 0010:untrack_pfn+0xf4/0x100
> [ 301.559893] Code: 75 c4 eb cf 48 8b 43 10 8b a8 e8 00 00 00 3b 6b 28 74 b8 48 8b 7b 30 e8 ea 1a f7 000
> [ 301.561189] RSP: 0018:ffffba2c0377fab8 EFLAGS: 00010282
> [ 301.561590] RAX: 00000000ffffffea RBX: ffff9208c8ce9cc0 RCX: 000000010455e047
> [ 301.562105] RDX: 07fffffff0eb1e0a RSI: 0000000000000000 RDI: ffff9208c391d200
> [ 301.562628] RBP: 0000000000000000 R08: ffffba2c0377fab8 R09: 0000000000000000
> [ 301.563145] R10: ffff9208d2292d50 R11: 0000000000000002 R12: 00007fea890e0000
> [ 301.563669] R13: 0000000000000000 R14: ffffba2c0377fc08 R15: 0000000000000000
> [ 301.564186] FS:  0000000000000000(0000) GS:ffff920c2fbc0000(0000) knlGS:0000000000000000
> [ 301.564773] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 301.565197] CR2: 00007fea88ee8a20 CR3: 00000001033a8000 CR4: 0000000000750ef0
> [ 301.565725] PKRU: 55555554
> [ 301.565944] Call Trace:
> [ 301.566148]  <TASK>
> [ 301.566325]  ? untrack_pfn+0xf4/0x100
> [ 301.566618]  ? __warn+0x81/0x130
> [ 301.566876]  ? untrack_pfn+0xf4/0x100
> [ 301.567163]  ? report_bug+0x171/0x1a0
> [ 301.567466]  ? handle_bug+0x3c/0x80
> [ 301.567743]  ? exc_invalid_op+0x17/0x70
> [ 301.568038]  ? asm_exc_invalid_op+0x1a/0x20
> [ 301.568363]  ? untrack_pfn+0xf4/0x100
> [ 301.568660]  ? untrack_pfn+0x65/0x100
> [ 301.568947]  unmap_single_vma+0xa6/0xe0
> [ 301.569247]  unmap_vmas+0xb5/0x190
> [ 301.569532]  exit_mmap+0xec/0x340
> [ 301.569801]  __mmput+0x3e/0x130
> [ 301.570051]  do_exit+0x305/0xaf0
> ...
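
If liburing happens not to be installed, the same trigger can be built
against the raw syscall instead. A self-contained variant of the
reproducer above; the sys_io_uring_setup() wrapper name is made up here,
and it assumes a toolchain that ships <linux/io_uring.h> and defines
__NR_io_uring_setup:

#include <linux/io_uring.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Thin wrapper around the io_uring_setup syscall (liburing's
 * io_uring_setup() does the same thing). */
static int sys_io_uring_setup(unsigned int entries, struct io_uring_params *p)
{
        return (int)syscall(__NR_io_uring_setup, entries, p);
}

int main(void)
{
        struct io_uring_params p;
        int ring_fd;
        size_t size;
        char *map;

        memset(&p, 0, sizeof(p));
        ring_fd = sys_io_uring_setup(1, &p);
        if (ring_fd < 0) {
                perror("io_uring_setup");
                return 1;
        }
        size = p.sq_off.array + p.sq_entries * sizeof(unsigned int);

        /* MAP_PRIVATE turns the SQ ring into a COW mapping of the pfnmap. */
        map = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_PRIVATE,
                   ring_fd, IORING_OFF_SQ_RING);
        if (map == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* Write one byte so the first ring page gets COWed to an anon folio. */
        *map = 0;
        pause();
        return 0;
}
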
Peter Anvin" > Cc: Andrew Morton > Signed-off-by: David Hildenbrand > --- > arch/x86/mm/pat/memtype.c | 49 ++++++++++++++++++++++++++++----------- > mm/memory.c | 4 ++++ > 2 files changed, 39 insertions(+), 14 deletions(- Test-by: Wupeng Ma > > diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c > index 0d72183b5dd0..36b603d0cdde 100644 > --- a/arch/x86/mm/pat/memtype.c > +++ b/arch/x86/mm/pat/memtype.c > @@ -947,6 +947,38 @@ static void free_pfn_range(u64 paddr, unsigned long size) > memtype_free(paddr, paddr + size); > } > > +static int get_pat_info(struct vm_area_struct *vma, resource_size_t *paddr, > + pgprot_t *pgprot) > +{ > + unsigned long prot; > + > + VM_WARN_ON_ONCE(!(vma->vm_flags & VM_PAT)); > + > + /* > + * We need the starting PFN and cachemode used for track_pfn_remap() > + * that covered the whole VMA. For most mappings, we can obtain that > + * information from the page tables. For COW mappings, we might now > + * suddenly have anon folios mapped and follow_phys() will fail. > + * > + * Fallback to using vma->vm_pgoff, see remap_pfn_range_notrack(), to > + * detect the PFN. If we need the cachemode as well, we're out of luck > + * for now and have to fail fork(). > + */ > + if (!follow_phys(vma, vma->vm_start, 0, &prot, paddr)) { > + if (pgprot) > + *pgprot = __pgprot(prot); > + return 0; > + } > + if (is_cow_mapping(vma->vm_flags)) { > + if (pgprot) > + return -EINVAL; > + *paddr = (resource_size_t)vma->vm_pgoff << PAGE_SHIFT; > + return 0; > + } > + WARN_ON_ONCE(1); > + return -EINVAL; > +} > + > /* > * track_pfn_copy is called when vma that is covering the pfnmap gets > * copied through copy_page_range(). > @@ -957,20 +989,13 @@ static void free_pfn_range(u64 paddr, unsigned long size) > int track_pfn_copy(struct vm_area_struct *vma) > { > resource_size_t paddr; > - unsigned long prot; > unsigned long vma_size = vma->vm_end - vma->vm_start; > pgprot_t pgprot; > > if (vma->vm_flags & VM_PAT) { > - /* > - * reserve the whole chunk covered by vma. We need the > - * starting address and protection from pte. > - */ > - if (follow_phys(vma, vma->vm_start, 0, &prot, &paddr)) { > - WARN_ON_ONCE(1); > + if (get_pat_info(vma, &paddr, &pgprot)) > return -EINVAL; > - } > - pgprot = __pgprot(prot); > + /* reserve the whole chunk covered by vma. */ > return reserve_pfn_range(paddr, vma_size, &pgprot, 1); > } > > @@ -1045,7 +1070,6 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn, > unsigned long size, bool mm_wr_locked) > { > resource_size_t paddr; > - unsigned long prot; > > if (vma && !(vma->vm_flags & VM_PAT)) > return; > @@ -1053,11 +1077,8 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn, > /* free the chunk starting from pfn or the whole chunk */ > paddr = (resource_size_t)pfn << PAGE_SHIFT; > if (!paddr && !size) { > - if (follow_phys(vma, vma->vm_start, 0, &prot, &paddr)) { > - WARN_ON_ONCE(1); > + if (get_pat_info(vma, &paddr, NULL)) > return; > - } > - > size = vma->vm_end - vma->vm_start; > } > free_pfn_range(paddr, size); > diff --git a/mm/memory.c b/mm/memory.c > index 1211e2090c1a..1e9a0288fdaf 100644 > --- a/mm/memory.c > +++ b/mm/memory.c > @@ -6002,6 +6002,10 @@ int follow_phys(struct vm_area_struct *vma, > goto out; > pte = ptep_get(ptep); > > + /* Never return PFNs of anon folios in COW mappings. */ > + if (vm_normal_folio(vma, address, pte)) > + goto unlock; > + > if ((flags & FOLL_WRITE) && !pte_write(pte)) > goto unlock; >