Date: Tue, 23 May 2023 15:43:13 -0400
From: Peter Xu <peterx@redhat.com>
To: Muhammad Usama Anjum <usama.anjum@collabora.com>
Cc: linux-mm@kvack.org, Paul Gofman, Alexander Viro, Shuah Khan,
	Christian Brauner, Yang Shi, Vlastimil Babka, Liam R. Howlett,
	Yun Zhou, Cyrill Gorcunov, Michał Mirosław, Andrew Morton,
	Suren Baghdasaryan, Andrei Vagin, Alex Sierra, Matthew Wilcox,
	Pasha Tatashin, Danylo Mocherniuk, Axel Rasmussen,
	Gustavo A. R. Silva, David Hildenbrand, Dan Williams,
	linux-kernel@vger.kernel.org, Mike Rapoport,
	linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	Greg KH, kernel@collabora.com, Nadav Amit
Subject: Re: [PATCH RESEND v15 2/5] fs/proc/task_mmu: Implement IOCTL to get and optionally clear info about PTEs
References: <20230420060156.895881-1-usama.anjum@collabora.com> <20230420060156.895881-3-usama.anjum@collabora.com> <0edfaf12-66f2-86d3-df1c-f5dff10fb743@collabora.com>
In-Reply-To: <0edfaf12-66f2-86d3-df1c-f5dff10fb743@collabora.com>
Hi, Muhammad,

On Mon, May 22, 2023 at 04:26:07PM +0500, Muhammad Usama Anjum wrote:
> On 5/22/23 3:24 PM, Muhammad Usama Anjum wrote:
> > On 4/26/23 7:13 PM, Peter Xu wrote:
> >> Hi, Muhammad,
> >>
> >> On Wed, Apr 26, 2023 at 12:06:23PM +0500, Muhammad Usama Anjum wrote:
> >>> On 4/20/23 11:01 AM, Muhammad Usama Anjum wrote:
> >>>> +/* Supported flags */
> >>>> +#define PM_SCAN_OP_GET	(1 << 0)
> >>>> +#define PM_SCAN_OP_WP	(1 << 1)
> >>>
> >>> These are the only flag options available in the PAGEMAP_SCAN IOCTL.
> >>> PM_SCAN_OP_GET must always be specified for this IOCTL. PM_SCAN_OP_WP
> >>> can be specified as needed, but PM_SCAN_OP_WP cannot be specified
> >>> without PM_SCAN_OP_GET. (This was removed after you had asked me not to
> >>> duplicate functionality which can be achieved by UFFDIO_WRITEPROTECT.)
> >>>
> >>> 1) PM_SCAN_OP_GET | PM_SCAN_OP_WP
> >>>    vs
> >>> 2) UFFDIO_WRITEPROTECT
> >>>
> >>> After removing the usage of uffd_wp_range() from the PAGEMAP_SCAN
> >>> IOCTL, we are getting really good performance, comparable to relying on
> >>> the SOFT_DIRTY flag in the PTE. But when we want to perform wp,
> >>> PM_SCAN_OP_GET | PM_SCAN_OP_WP is more desirable than
> >>> UFFDIO_WRITEPROTECT, performance- and behavior-wise.
> >>>
> >>> I've got results from someone else that UFFDIO_WRITEPROTECT somehow
> >>> blocks pagefaults, which PAGEMAP_IOCTL doesn't.
> >>> I still need to verify this, as I don't have tests comparing them
> >>> one-to-one.
> >>>
> >>> What are your thoughts about it? Have you thought about making
> >>> UFFDIO_WRITEPROTECT perform better?
> >>>
> >>> I'm sorry to mention the word "performance" here. Actually we want
> >>> better performance to emulate a Windows syscall; that is why we are
> >>> adding this functionality. So either we need to see what can be
> >>> improved in UFFDIO_WRITEPROTECT, or can I please add only PM_SCAN_OP_WP
> >>> back in pagemap_ioctl?
> >>
> >> I'm fine if you want to add it back if it works for you. Though before
> >> that, could you remind me why there can be a difference in performance?
> >
> > I've looked at the code again and I think I've found something. Let's
> > look at exact performance numbers:
> >
> > I've run 2 different tests. In the first test UFFDIO_WRITEPROTECT is
> > used for engaging WP; in the second test PM_SCAN_OP_WP is used. I've
> > measured the average write time to the same memory which is being WP-ed,
> > and the total execution time of these APIs:

What are the steps of the test?  Is it as simple as "writeprotect",
"unprotect", then write all pages in a single thread?

Is UFFDIO_WRITEPROTECT sent in one range covering all pages?

Maybe you can attach the test program here too.

> >
> > **avg write time:**
> > | No of pages         | 2000 | 8192 | 100000 | 500000 |
> > |---------------------|------|------|--------|--------|
> > | UFFDIO_WRITEPROTECT | 2200 | 2300 | 4100   | 4200   |
> > | PM_SCAN_OP_WP       | 2000 | 2300 | 2500   | 2800   |
> >
> > **Execution time measured in rdtsc:**
> > | No of pages         | 2000 | 8192  | 100000 | 500000 |
> > |---------------------|------|-------|--------|--------|
> > | UFFDIO_WRITEPROTECT | 3200 | 14000 | 59000  | 58000  |
> > | PM_SCAN_OP_WP       | 1900 | 7000  | 38000  | 40000  |
> >
> > The avg write time for UFFDIO_WRITEPROTECT is 1.3 times slower, and the
> > execution time is 1.5 times slower in the case of UFFDIO_WRITEPROTECT.
> > So UFFDIO_WRITEPROTECT is making writes to the pages slower, and its
> > execution time is also slower.
> >
> > This suggests that PM_SCAN_OP_WP is better than UFFDIO_WRITEPROTECT.
> > Although PM_SCAN_OP_WP and UFFDIO_WRITEPROTECT have been implemented
> > differently, we should have seen no difference in performance, yet we
> > see quite a lot of difference here. PM_SCAN_OP_WP takes the mm read
> > lock and uses walk_page_range(), which finds the VMAs for the address
> > ranges and walks over them, and pagemap_scan_pmd_entry() handles most
> > of the work, including tlb flushing. UFFDIO_WRITEPROTECT also takes
> > the mm lock and iterates through all the page directories until a pte
> > is found; then the flags are updated there and the tlb is flushed for
> > every pte.
> >
> > My next deduction would be that we are getting worse performance
> > because we are flushing the tlb one page at a time in the case of
> > UFFDIO_WRITEPROTECT, while we flush the tlb for 512 pages (mostly) at
> > a time in the case of PM_SCAN_OP_WP. I've just verified this by adding
> > some logs to change_pte_range() and pagemap_scan_pmd_entry(). Logs are
> > attached. I've allocated memory of 1000 pages and write-protected it
> > with UFFDIO_WRITEPROTECT and PM_SCAN_OP_WP. The logs show that
> > UFFDIO_WRITEPROTECT flushed the tlb 1000 times, one page each time,
> > while PM_SCAN_OP_WP flushed only 3 times with bigger sizes. I've
> > learned from past experience that a tlb flush is very expensive.
> > Probably this is what we need to improve if we don't want to add
> > PM_SCAN_OP_WP?
> >
> > UFFDIO_WRITEPROTECT uses change_pte_range(), which is a very generic
> > function, and I'm not sure if we can try to skip the tlb flushes there
> > when uffd_wp is true. We can try to do the flush somewhere else, and
> > hopefully do only one flush if possible. It will not be so
> > straightforward to move away from the generic function. Thoughts?
> I've just tested this theory of not doing per-pte flushes and only doing
> one flush on the entire range in uffd_wp_range(). But it didn't improve
> the situation either. I was wrong that tlb flushes may be the cause.

I had a feeling that you were trapping tlb_flush_pte_range(), which is
actually not really sending any TLB flushes but updating the mmu_gather
object with the addr range for future invalidations.  That's probably why
it didn't show an effect when you commented it out.

I am not sure whether the wr-protect path difference can be caused by the
arch hooks, namely arch_enter_lazy_mmu_mode() / arch_leave_lazy_mmu_mode().

On x86 I saw that it's actually hooked onto some PV calls.  I had a feeling
that this is for optimization only, but maybe it's still a good idea if you
also take that into your new code:

static inline void arch_enter_lazy_mmu_mode(void)
{
	PVOP_VCALL0(mmu.lazy_mode.enter);
}

The other thing is I think you're flushing the tlb outside the pgtable lock
in your new code.  IIUC that's racy, see:

commit 6ce64428d62026a10cb5d80138ff2f90cc21d367
Author: Nadav Amit
Date:   Fri Mar 12 21:08:17 2021 -0800

    mm/userfaultfd: fix memory corruption due to writeprotect

So you may want to put it at least into the pgtable lock critical section,
or IIUC you can also do inc_tlb_flush_pending() then
dec_tlb_flush_pending() just like __tlb_gather_mmu(), to make sure
do_wp_page() will properly flush the page when it unluckily hits one of
those pages.

That's also the spot (the flush_tlb_page() in 6ce64428d) that made me
wonder whether it caused the slowness when writing to those pages.  But it
really depends on your test program, e.g. if it's single-threaded I don't
think it'll trigger, because by the time of the writes
mm_tlb_flush_pending() should already return 0, so the tlb flush should
logically not be needed.  If you want, maybe you can double check that.
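To be concrete about the ordering I mean, here's a rough kernel-style
sketch (pseudocode only, not compilable: it mirrors what
__tlb_gather_mmu()/tlb_finish_mmu() do, and assumes the walker is the
patch's pagemap_scan path; everything except inc_tlb_flush_pending(),
dec_tlb_flush_pending() and flush_tlb_range() is placeholder):

```c
/* Before the walk: tell do_wp_page() a flush is pending, so a
 * concurrent CoW fault will flush the stale entry itself. */
inc_tlb_flush_pending(mm);

/* Walk the range; under the ptl, set the uffd-wp bits but defer
 * the per-pte flush, only recording the dirtied range. */
walk_page_range(mm, start, end, &pagemap_scan_ops, &p);

/* One batched flush for the whole range, then drop the marker. */
flush_tlb_range(vma, start, end);
dec_tlb_flush_pending(mm);
```

That would keep the single batched flush (which is where I suspect your
speedup comes from) without reopening the race that 6ce64428d fixed.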
So in short, I had a feeling that the new PM_SCAN_OP_WP just misses
something here and there and that's why it's faster - meaning that even if
it's faster it may also be prone to race conditions etc., so we'd better
figure it out...

Thanks,

-- 
Peter Xu