From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89FB3C5478C for ; Mon, 26 Feb 2024 08:46:11 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 1AFDE6B0181; Mon, 26 Feb 2024 03:46:11 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 1610D6B0182; Mon, 26 Feb 2024 03:46:11 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 028C36B0183; Mon, 26 Feb 2024 03:46:10 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id E867F6B0181 for ; Mon, 26 Feb 2024 03:46:10 -0500 (EST) Received: from smtpin10.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 8A5E21606E2 for ; Mon, 26 Feb 2024 08:46:10 +0000 (UTC) X-FDA: 81833322900.10.BB7B3EE Received: from szxga05-in.huawei.com (szxga05-in.huawei.com [45.249.212.191]) by imf21.hostedemail.com (Postfix) with ESMTP id A9D6E1C001B for ; Mon, 26 Feb 2024 08:46:07 +0000 (UTC) Authentication-Results: imf21.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf21.hostedemail.com: domain of zhangpeng362@huawei.com designates 45.249.212.191 as permitted sender) smtp.mailfrom=zhangpeng362@huawei.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1708937168; a=rsa-sha256; cv=none; b=WwE/NAAnoCo5WJaGGgHwv7JPnFAO1RdY76YckpyaERKCa0VuY8z5NCueUc0wwxN6Eb02Bl no43hshTXFZTDM1rJowZv8DoTKC4YLZjlf0I8IeW4HSP3agSCKOmvK4E4ymLARGBXXZ19B lVnTds9JWFuU/HIEQ8H3ZE9Fx86GtWI= ARC-Authentication-Results: i=1; imf21.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf21.hostedemail.com: domain of zhangpeng362@huawei.com designates 45.249.212.191 as permitted sender) smtp.mailfrom=zhangpeng362@huawei.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1708937168; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=2az4OuzXnNs6K4sYFI7bfo6D9l+NjKlrG9ksWa/XVY4=; b=PaebMcfAZt9Ccos0lYqbXYPQkiSspSX7p3gRHwtJRcZFOXnn73fqnURYhPzJnYbZfuc5s4 fvKOTs7Ahpm2rrh3ScLSWyovZ+9S9bb3v5KkTaE1b3g1riiUslt/hEdEh975fiZIt07UAx NVYZe6/1cTX+3Q4BdKZrqYpFBsjcqsg= Received: from mail.maildlp.com (unknown [172.19.88.163]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4TjvKd5jkbz1h0Z1; Mon, 26 Feb 2024 16:43:49 +0800 (CST) Received: from kwepemm600020.china.huawei.com (unknown [7.193.23.147]) by mail.maildlp.com (Postfix) with ESMTPS id C007C18001A; Mon, 26 Feb 2024 16:46:02 +0800 (CST) Received: from [10.174.179.160] (10.174.179.160) by kwepemm600020.china.huawei.com (7.193.23.147) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Mon, 26 Feb 2024 16:46:01 +0800 Message-ID: Date: Mon, 26 Feb 2024 16:46:00 +0800 MIME-Version: 1.0 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Thunderbird/102.9.0 Subject: Re: [PATCH v2] filemap: avoid unnecessary major faults in filemap_fault() Content-Language: en-US To: David Hildenbrand , "Huang, Ying" CC: , , , , , , , , , Nanyong Sun References: <20240206092627.1421712-1-zhangpeng362@huawei.com> <87jznhypxy.fsf@yhuang6-desk2.ccr.corp.intel.com> <87frxfhibt.fsf@yhuang6-desk2.ccr.corp.intel.com> <43182940-ddaa-7073-001a-e95d0999c5ba@huawei.com> <87il2bek6f.fsf@yhuang6-desk2.ccr.corp.intel.com> From: "zhangpeng (AS)" In-Reply-To: Content-Type: text/plain; charset="UTF-8"; format=flowed Content-Transfer-Encoding: 8bit X-Originating-IP: [10.174.179.160] X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To kwepemm600020.china.huawei.com (7.193.23.147) X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: A9D6E1C001B X-Stat-Signature: dggtdqxyueaxfpuya9mjp9y47mfprxwt X-Rspam-User: X-HE-Tag: 1708937167-734822 X-HE-Meta: U2FsdGVkX19Z7pCWTE1d44sKDl+NhNJtJLRubYMBERlfar5iaUopa+kSJz25ASVoIdh56O9CVL5Kv3O/puPQFIQqt4XtZg673jDDTrgiseE3E/XMKxS7DOUo9lS0GkNwv7dgwrj9v6z1Oan7ukCcBSIy8xI0qonD5ag03vdJl3/CxxFySRR9Y7MAKnQe1DSaztWrQzK5cSBJ7aunwepM2bfaWO4ypkCU57DYQrKvveWH33CBRW6h9aPW7cTZd0ezAPR5MoGKA7exgIn0a9vkc8lWvi17L8n6pA+zGCnD9oX4QTsHjbX1NIsgdoM2tvUiNzooq8/Xb4pr5ydQ7akeLOpHJl/9kbyc4LKocXoo7FnrArmy2wgM3V7IC5rEcBNhmlAG7HKwxLHvkHT3jeADI0sESByhSrHLB1mIu4gDqgsDPbMTiy0HJ/PN+jbyugDrAu/q0WgYXHIPWH3zqlLOPyugDqDIRSzB9fM2tmuy+XMBqu10MB/BTDaAHO03R4KrT8n8SAcLwtVQcuNV0kbZjllNfAcUIhtCqtLypcOjp4elXV5X+R0rvoZL5KTFStlWXdCPQ66ACAUv5g82Qui8gULYEh65CSqAKOaUr7PmeAnaif5tpTDY/likhwAumM3NmKzZfSAoQH3qI4xNe09rITTUw12ziGXZzvonhb5LPgPTdDokqaEEPuvnhp8rXJea8X98OKz3l8O8oYoznNJpSBIqP51C9iEQqua74ZfKTZzHVwwfwfea60fmHjrmkd3BV6yLohq8K5IR+nsAPXkbH3o4pPApMmNFlGFfwd6EYbctp9A1U9mCPo7k9hrebR5iOmP1IL0RT/CeY3Mfj+/IMu8EqdDFcfMdNM4Jdj4D3wI= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On 2024/2/26 16:20, David Hildenbrand wrote: > On 26.02.24 08:52, Huang, Ying wrote: >> "zhangpeng (AS)" writes: >> >>> On 2024/2/26 14:04, Huang, Ying wrote: >>> >>>> "zhangpeng (AS)" writes: >>>> >>>>> On 2024/2/7 10:21, Huang, Ying wrote: >>>>> >>>>>> Peng Zhang writes: >>>>>>> From: ZhangPeng >>>>>>> >>>>>>> The major fault occurred when using mlockall(MCL_CURRENT | >>>>>>> MCL_FUTURE) >>>>>>> in application, which leading to an unexpected performance >>>>>>> issue[1]. >>>>>>> >>>>>>> This caused by temporarily cleared PTE during a >>>>>>> read+clear/modify/write >>>>>>> update of the PTE, eg, do_numa_page()/change_pte_range(). >>>>>>> >>>>>>> For the data segment of the user-mode program, the global >>>>>>> variable area >>>>>>> is a private mapping. After the pagecache is loaded, the private >>>>>>> anonymous >>>>>>> page is generated after the COW is triggered. Mlockall can lock >>>>>>> COW pages >>>>>>> (anonymous pages), but the original file pages cannot be locked >>>>>>> and may >>>>>>> be reclaimed. If the global variable (private anon page) is >>>>>>> accessed when >>>>>>> vmf->pte is zeroed in numa fault, a file page fault will be >>>>>>> triggered. >>>>>>> >>>>>>> At this time, the original private file page may have been >>>>>>> reclaimed. >>>>>>> If the page cache is not available at this time, a major fault >>>>>>> will be >>>>>>> triggered and the file will be read, causing additional overhead. >>>>>>> >>>>>>> Fix this by rechecking the PTE without acquiring PTL in >>>>>>> filemap_fault() >>>>>>> before triggering a major fault. >>>>>>> >>>>>>> Testing file anonymous page read and write page fault >>>>>>> performance in ext4 >>>>>>> and ramdisk using will-it-scale[2] on a x86 physical machine. >>>>>>> The data >>>>>>> is the average change compared with the mainline after the patch is >>>>>>> applied. The test results are within the range of fluctuation, >>>>>>> and there >>>>>>> is no obvious difference. The test results are as follows: >>>>>> You still claim that there's no difference in the test results.  >>>>>> If so, >>>>>> why do you implement the patch?  IMHO, you need to prove your >>>>>> patch can >>>>>> improve the performance in some cases. >>>>> I'm sorry that maybe I didn't express myself clearly. >>>>> >>>>> The purpose of this patch is to fix the issue that major fault may >>>>> still be triggered >>>>> with mlockall(), thereby improving a little performance. This >>>>> patch is more of a bugfix >>>>> than a performance improvement patch. >>>>> >>>>> This issue affects our traffic analysis service. The inbound >>>>> traffic is heavy. If a major >>>>> fault occurs, the I/O schedule is triggered and the original I/O >>>>> is suspended. Generally, >>>>> the I/O schedule is 0.7 ms. If other applications are operating >>>>> disks, the system needs >>>>> to wait for more than 10 ms. However, the inbound traffic is heavy >>>>> and the NIC buffer is >>>>> small. As a result, packet loss occurs. The traffic analysis >>>>> service can't tolerate packet >>>>> loss. >>>>> >>>>> To prevent packet loss, we use the mlockall() function to prevent >>>>> I/O. It is unreasonable >>>>> that major faults will still be triggered after mlockall() is used. >>>>> >>>>> In our service test environment, the baseline is 7 major faults/12 >>>>> hours. After applied the >>>>> unlock patch, the probability of triggering the major fault is 1 >>>>> major faults/12 hours. After >>>>> applied the lock patch, no major fault will be triggered. So only >>>>> the locked patch can actually >>>>> solve our problem. >>>> This is the data I asked for. >>>> >>>> But, you said that this is a feature bug fix instead of performance >>>> improvement.  So, I checked the mlock(2), and found the following >>>> words, >>>> >>>> " >>>>          mlockall() locks all pages mapped into the address space >>>> of the calling >>>>          process.  This includes the pages of the code, data, and >>>> stack segment, >>>>          as well as shared libraries, user space kernel data, >>>> shared memory, and >>>>          memory-mapped files.  All mapped pages are guaranteed to >>>> be resident in >>>>          RAM when the call returns successfully; the  pages are  >>>> guaranteed  to >>>>          stay in RAM until later unlocked. >>>> " >>>> >>>> In theory, the locked page are in RAM.  So, IIUC, we don't violate the >>>> ABI.  But, in effect, we does do that. >>> >>> "mlockall() locks all pages mapped into the address space of the >>> calling process." >>> For a private mapping, mlockall() can lock COW pages (anonymous >>> pages), but the >>> original file pages can't be locked. Maybe, we violate the ABI here. >> >> If so, please make it explicit and loudly. >> >>> This is also >>> the cause of this issue. The patch fix the impact of this issue: >>> prevent major >>> faults, reduce IO, and fix the service packet loss issue. >>> >>> Preventing major faults, and thus reducing IO, could be an important >>> reason to use >>> mlockall(). Could we fix this with the locked patch? Or is there >>> another way? >> >> Unfortunately, locked patch cause performance regressions for more >> common cases.  Is it possible for us to change ptep_modify_prot_start() >> to use some magic PTE value instead of 0?  That may be possible.  But, >> that isn't enough, you need to change all ptep_get_and_clear() users. > > Trigger (false) major faults for mlocked memory is suboptimal. > > Having such pages temporarily not mapped (e.g., page migration) is > acceptable (pages are in RAM but are getting moved). We handle that > using nonswap migration entries. > > Let me understand the issue first: > > 1) MAP_PRIVATE file mapping that is mlocked. > > 2) We caused COW, so we now have an anonymous page mapped. That anon >    page is mlocked. > > 3) Change of protection (under PT lock) will temporarily clear the PTE > > 4) Page fault will trigger and find the PTE still cleared (without PT >    lock) > > 5) We don't realize that there is a page mapped and, therefore, trigger >    a major fault. > > Using the PT lock would fix it properly. Doing it as in this patch can > only be considered an optimization, not a proper fix. > > Using a magic PTE to work around "just use the PT lock like everyone > else" feels a bit odd. The patch states "We don't hold PTL here as > acquiring PTL hurts performance" -- do we have any numbers on that? > Testing file anonymous page read and write page fault performance in ext4 , tmpfs and ramdisk using will-it-scale[2] on a x86 physical machine. The data is the average change compared with the mainline after the patch is applied. with the locked patch: processes processes_idle threads threads_idle ext4 private file write: -0.51% 0.08% -0.03% -0.04% ext4 shared file write: 0.135% -0.531% 2.883% -0.772% ramdisk private file write: -0.48% 0.23% -1.08% 0.27% ramdisk private file read: 0.07% -6.90% -5.85% -0.70% tmpfs private file write: -0.344% -0.110% 0.200% 0.145% tmpfs shared file write: 0.958% 0.101% 2.781% -0.337% tmpfs private file read: -0.16% 0.00% -0.12% 0.41% with the no locked patch: processes processes_idle threads threads_idle ext4 private file write: -1.14% -0.08% -1.87% 0.13% ext4 private file read: 0.03% -0.65% -0.51% -0.08% ramdisk private file write: -1.21% -0.21% -1.12% 0.11% ramdisk private file read: 0.00% -0.68% -0.33% -0.02% I could also run other tests if needed. > We could special-case that for MLOCK'ed VMAs with MCL_FUTURE, meaning, > take the PTL to double-check only in such VMAs. > Agreed. I think this solution is great. Thanks for your suggestion! -- Best Regards, Peng