From mboxrd@z Thu Jan 1 00:00:00 1970
From: "zhangpeng (AS)" <zhangpeng362@huawei.com>
To: "Huang, Ying"
Cc: Yin Fengwei, linux-mm@kvack.org
Subject: Re: [RFC PATCH] mm: filemap: avoid unnecessary major faults in filemap_fault()
Date: Wed, 29 Nov 2023 09:24:49 +0800
In-Reply-To: <87lean7f2c.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <20231122140052.4092083-1-zhangpeng362@huawei.com>
 <801bd0c9-7d0c-4231-93e5-7532e8231756@intel.com>
 <48235d73-3dc6-263d-7822-6d479b753d46@huawei.com>
 <87y1en7pq3.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87ttpb7p4z.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87lean7f2c.fsf@yhuang6-desk2.ccr.corp.intel.com>

On 2023/11/24 16:04, Huang, Ying wrote:

> "zhangpeng (AS)" writes:
>
>> On 2023/11/24 12:26, Huang, Ying wrote:
>>
>>> "Huang, Ying" writes:
>>>
>>>> "zhangpeng (AS)" writes:
>>>>
>>>>> On 2023/11/23 13:26, Yin Fengwei wrote:
>>>>>
>>>>>> On 11/23/23 12:12, zhangpeng (AS) wrote:
>>>>>>> On 2023/11/23 9:09, Yin Fengwei wrote:
>>>>>>>
>>>>>>>> Hi Peng,
>>>>>>>>
>>>>>>>> On 11/22/23 22:00, Peng Zhang wrote:
>>>>>>>>> From: ZhangPeng
>>>>>>>>>
>>>>>>>>> A major fault can occur when using mlockall(MCL_CURRENT | MCL_FUTURE)
>>>>>>>>> in an application, leading to an unexpected performance issue [1].
>>>>>>>>>
>>>>>>>>> This is caused by the pte being temporarily cleared during a
>>>>>>>>> read/modify/write update of the pte, e.g., in
>>>>>>>>> do_numa_page()/change_pte_range().
>>>>>>>>>
>>>>>>>>> For the data segment of a user-mode program, the global variable area
>>>>>>>>> is a private file mapping. After the page cache is loaded, a private
>>>>>>>>> anonymous page is created once COW is triggered. mlockall() can lock
>>>>>>>>> the COW pages (anonymous pages), but the original file pages cannot
>>>>>>>>> be locked and may be reclaimed. If the global variable (the private
>>>>>>>>> anon page) is accessed while vmf->pte is zeroed by a NUMA fault, a
>>>>>>>>> file page fault will be triggered.
>>>>>>>>>
>>>>>>>>> At this time, the original private file page may have been reclaimed.
>>>>>>>>> If the page cache is not available at this time, a major fault will be
>>>>>>>>> triggered and the file will be read, causing additional overhead.
>>>>>>>>>
>>>>>>>>> Fix this by rechecking the pte, with the ptl held, in filemap_fault()
>>>>>>>>> before triggering a major fault.
>>>>>>>>>
>>>>>>>>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>>>>>>>>>
>>>>>>>>> Signed-off-by: ZhangPeng
>>>>>>>>> Signed-off-by: Kefeng Wang
>>>>>>>>> ---
>>>>>>>>>   mm/filemap.c | 14 ++++++++++++++
>>>>>>>>>   1 file changed, 14 insertions(+)
>>>>>>>>>
>>>>>>>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>>>>>>>> index 71f00539ac00..bb5e6a2790dc 100644
>>>>>>>>> --- a/mm/filemap.c
>>>>>>>>> +++ b/mm/filemap.c
>>>>>>>>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>>>>>>>>               mapping_locked = true;
>>>>>>>>>           }
>>>>>>>>>       } else {
>>>>>>>>> +        pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>>>>>>> +                          vmf->address, &vmf->ptl);
>>>>>>>>> +        if (ptep) {
>>>>>>>>> +            /*
>>>>>>>>> +             * Recheck pte with ptl locked as the pte can be cleared
>>>>>>>>> +             * temporarily during a read/modify/write update.
>>>>>>>>> +             */
>>>>>>>>> +            if (unlikely(!pte_none(ptep_get(ptep))))
>>>>>>>>> +                ret = VM_FAULT_NOPAGE;
>>>>>>>>> +            pte_unmap_unlock(ptep, vmf->ptl);
>>>>>>>>> +            if (unlikely(ret))
>>>>>>>>> +                return ret;
>>>>>>>>> +        }
>>>>>>>> I am curious. Did you try not taking the PTL here and just checking
>>>>>>>> whether the PTE is not none?
>>>>>>> Thank you for your reply.
>>>>>>>
>>>>>>> If we don't take the PTL, the current use case won't trigger this
>>>>>>> issue either.
>>>>>> Is this verified by testing or just in theory?
>>>>> If we add a delay between ptep_modify_prot_start() and
>>>>> ptep_modify_prot_commit(), this issue will also trigger. Without the
>>>>> delay, we haven't reproduced this problem so far.
>>>>>
>>>>>>> In most cases, if we don't take the PTL, this issue won't be
>>>>>>> triggered. However, there is still a possibility of triggering it.
>>>>>>> The corner case is that task 2 triggers a page fault while task 1 is
>>>>>>> between ptep_modify_prot_start() and ptep_modify_prot_commit() in
>>>>>>> do_numa_page(). Furthermore, task 2 passes the check of whether the
>>>>>>> PTE is not none before task 1 updates the PTE in
>>>>>>> ptep_modify_prot_commit(), without taking the PTL.
>>>>>> There are very limited operations between ptep_modify_prot_start() and
>>>>>> ptep_modify_prot_commit(), while the code path from the page fault to
>>>>>> this check is long. My understanding is that it's very likely the PTE
>>>>>> is not none when the PTE check is done here without holding the PTL
>>>>>> (this is my theory :)).
>>>>> Yes, there is a high probability that this issue won't occur without
>>>>> taking the PTL.
>>>>>
>>>>>> On the other side, acquiring/releasing the PTL may bring a performance
>>>>>> impact. It may not be a big deal because of the IO operations in this
>>>>>> code path, but it's better to collect some performance data IMHO.
>>>>> We tested the performance of file private mapping page faults
>>>>> (page_fault2.c of will-it-scale [1]) and file shared mapping page
>>>>> faults (page_fault3.c of will-it-scale). The difference in performance
>>>>> (in operations per second) before and after the patch is applied is
>>>>> about 0.7% on an x86 physical machine.
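To make the corner case above concrete, here is a simplified, annotated
sketch of the window in do_numa_page() being discussed. This is not the
actual kernel code, only an illustration; the helper names follow the
generic ptep_modify_prot_start()/ptep_modify_prot_commit() interfaces:

spin_lock(vmf->ptl);
/* Typically implemented via ptep_get_and_clear(): the PTE is
 * transiently none from here until the commit below. */
old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
pte = pte_modify(old_pte, vma->vm_page_prot);
/*
 * Window: a racing fault that checks the PTE here *without* taking
 * vmf->ptl observes pte_none() and may go on to raise a major fault;
 * a check under the PTL waits until the commit has happened.
 */
ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
spin_unlock(vmf->ptl);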
>>>> Is it an improvement or a reduction?
>>> And I think that you need to test ramdisk cases too, to verify whether
>>> this will cause a performance regression and by how much.
>> Yes, I will.
>> In addition, are there any recommended ramdisk test cases? 😁
> I think that you can start with the will-it-scale test case you used
> before. And you can try some workload with a large number of major
> faults, like file reads with mmap.

I used will-it-scale to test the page faults of ext4 files and tmpfs
files. The data is the average change relative to the mainline after the
patch is applied. The test results are within the range of fluctuation;
there is no obvious difference. The test results are as follows:

                            processes  processes_idle  threads  threads_idle
ext4 private file write:     -0.51%        0.08%       -0.03%      -0.04%
ext4 shared file write:       0.135%      -0.531%       2.883%     -0.772%
tmpfs private file write:    -0.344%      -0.110%       0.200%      0.145%
tmpfs shared file write:      0.958%       0.101%       2.781%     -0.337%
tmpfs private file read:     -0.16%        0.00%       -0.12%       0.41%

> --
> Best Regards,
> Huang, Ying
>
>>> --
>>> Best Regards,
>>> Huang, Ying
>>>
>>>> --
>>>> Best Regards,
>>>> Huang, Ying
>>>>
>>>>> [1] https://github.com/antonblanchard/will-it-scale/tree/master
>>>>>
>>>>>> Regards
>>>>>> Yin, Fengwei
>>>>>>
>>>>>>>> Regards
>>>>>>>> Yin, Fengwei
>>>>>>>>
>>>>>>>>> +
>>>>>>>>>           /* No page in the page cache at all */
>>>>>>>>>           count_vm_event(PGMAJFAULT);
>>>>>>>>>           count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);

--
Best Regards,
Peng
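P.S. For anyone who wants to watch the major-fault counter from user
space, below is a minimal sketch of the mlockall() scenario from [1]. It
is an illustration only: "data.bin" is a placeholder file name,
mlockall() may need a raised RLIMIT_MEMLOCK or CAP_IPC_LOCK, and actually
hitting the race additionally requires memory pressure (so the file pages
get reclaimed) and NUMA balancing rewriting the PTE, so this alone will
not reliably reproduce it.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <unistd.h>

/* Cumulative major-fault count of the calling process. */
static long majflt(void)
{
	struct rusage ru;

	getrusage(RUSAGE_SELF, &ru);
	return ru.ru_majflt;
}

int main(void)
{
	/* "data.bin" is a placeholder for the file backing the mapping. */
	int fd = open("data.bin", O_RDONLY);
	volatile char *p;
	long before;

	if (fd < 0)
		return 1;

	/* Private file mapping, like a program's global variable area. */
	p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	p[0] = 1;	/* COW: replaces the file page with an anon page. */

	/* As in the bug report; may need CAP_IPC_LOCK / RLIMIT_MEMLOCK. */
	if (mlockall(MCL_CURRENT | MCL_FUTURE))
		return 1;

	before = majflt();
	/* The anon page is mlocked, so no major fault should ever be
	 * counted here once the transient-pte_none() race is fixed. */
	for (int i = 0; i < 1000000; i++)
		p[0]++;
	printf("major faults during access: %ld\n", majflt() - before);

	return 0;
}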