Subject: Re: [RFC PATCH] mm: filemap: avoid unnecessary major faults in filemap_fault()
From: "zhangpeng (AS)" <zhangpeng362@huawei.com>
To: "Huang, Ying"
CC: Yin Fengwei, linux-mm@kvack.org
Date: Fri, 24 Nov 2023 15:27:25 +0800
In-Reply-To: <87ttpb7p4z.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <20231122140052.4092083-1-zhangpeng362@huawei.com>
 <801bd0c9-7d0c-4231-93e5-7532e8231756@intel.com>
 <48235d73-3dc6-263d-7822-6d479b753d46@huawei.com>
 <87y1en7pq3.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87ttpb7p4z.fsf@yhuang6-desk2.ccr.corp.intel.com>

On 2023/11/24 12:26, Huang, Ying wrote:
> "Huang, Ying" writes:
>
>> "zhangpeng (AS)" writes:
>>
>>> On 2023/11/23 13:26, Yin Fengwei wrote:
>>>
>>>> On 11/23/23 12:12, zhangpeng (AS) wrote:
>>>>
>>>>> On 2023/11/23 9:09, Yin Fengwei wrote:
>>>>>
>>>>>> Hi Peng,
>>>>>>
>>>>>> On 11/22/23 22:00, Peng Zhang wrote:
>>>>>>
>>>>>>> From: ZhangPeng
>>>>>>>
>>>>>>> A major fault occurred when using mlockall(MCL_CURRENT | MCL_FUTURE)
>>>>>>> in an application, leading to an unexpected performance issue [1].
>>>>>>>
>>>>>>> This is caused by the pte being temporarily cleared during a
>>>>>>> read/modify/write update of the pte, e.g. in
>>>>>>> do_numa_page()/change_pte_range().
>>>>>>>
>>>>>>> For the data segment of a user-mode program, the global variable
>>>>>>> area is a private mapping. After the pagecache is loaded, a private
>>>>>>> anonymous page is generated once COW is triggered. mlockall() can
>>>>>>> lock the COW pages (anonymous pages), but the original file pages
>>>>>>> cannot be locked and may be reclaimed. If the global variable (the
>>>>>>> private anon page) is accessed while vmf->pte is zeroed during a
>>>>>>> numa fault, a file page fault will be triggered.
>>>>>>>
>>>>>>> At this time, the original private file page may already have been
>>>>>>> reclaimed. If the page cache is not available, a major fault will
>>>>>>> be triggered and the file will be read, causing additional
>>>>>>> overhead.
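
[Note: for illustration, the read/modify/write window described above
follows the ptep_modify_prot_start()/ptep_modify_prot_commit() pattern.
A minimal sketch of a do_numa_page()-style update, simplified from the
upstream code (TLB and access-bit handling omitted), with the window
marked:

	/* Called with vmf->ptl held; vma, addr, ptep as in do_numa_page(). */
	oldpte = ptep_modify_prot_start(vma, addr, ptep);
	/*
	 * The pte is now transiently none. A concurrent fault path that
	 * inspects this pte without taking the ptl can observe
	 * pte_none() here and conclude that no mapping exists.
	 */
	newpte = pte_modify(oldpte, vma->vm_page_prot);
	ptep_modify_prot_commit(vma, addr, ptep, oldpte, newpte);
]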
>>>>>>>
>>>>>>> Fix this by rechecking the pte, with the ptl held, in
>>>>>>> filemap_fault() before triggering a major fault.
>>>>>>>
>>>>>>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>>>>>>>
>>>>>>> Signed-off-by: ZhangPeng
>>>>>>> Signed-off-by: Kefeng Wang
>>>>>>> ---
>>>>>>>  mm/filemap.c | 14 ++++++++++++++
>>>>>>>  1 file changed, 14 insertions(+)
>>>>>>>
>>>>>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>>>>>> index 71f00539ac00..bb5e6a2790dc 100644
>>>>>>> --- a/mm/filemap.c
>>>>>>> +++ b/mm/filemap.c
>>>>>>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>>>>>>  			mapping_locked = true;
>>>>>>>  		}
>>>>>>>  	} else {
>>>>>>> +		pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>>>>> +						  vmf->address, &vmf->ptl);
>>>>>>> +		if (ptep) {
>>>>>>> +			/*
>>>>>>> +			 * Recheck pte with ptl locked as the pte can be cleared
>>>>>>> +			 * temporarily during a read/modify/write update.
>>>>>>> +			 */
>>>>>>> +			if (unlikely(!pte_none(ptep_get(ptep))))
>>>>>>> +				ret = VM_FAULT_NOPAGE;
>>>>>>> +			pte_unmap_unlock(ptep, vmf->ptl);
>>>>>>> +			if (unlikely(ret))
>>>>>>> +				return ret;
>>>>>>> +		}
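
[Note: a minimal, self-contained userspace sketch of the scenario the
patch targets. The file path is illustrative, and the race itself is
timing-dependent, so this shows the setup rather than a deterministic
reproducer:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		int fd = open("/tmp/data", O_RDWR);	/* any regular file >= 1 page */
		if (fd < 0) {
			perror("open");
			return 1;
		}

		/* Private file mapping, as for a program's data segment. */
		char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE, fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		p[0] = 1;	/* write triggers COW: a private anon page now backs p */

		/* Locks the COW (anon) page; the original file page stays reclaimable. */
		if (mlockall(MCL_CURRENT | MCL_FUTURE)) {
			perror("mlockall");
			return 1;
		}

		/*
		 * If a later NUMA-balancing update transiently clears the pte
		 * and this access races with it, filemap_fault() can be
		 * entered even though the anon page is resident; with the
		 * file page reclaimed, that becomes an avoidable major fault.
		 */
		p[0] += 1;
		return 0;
	}
]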
>>>>>>
>>>>>> I am curious: did you try not taking the PTL here and just checking
>>>>>> whether the PTE is not none?
>>>>>
>>>>> Thank you for your reply.
>>>>>
>>>>> If we don't take the PTL, the current use case won't trigger this
>>>>> issue either.
>>>>
>>>> Is this verified by testing, or just in theory?
>>>
>>> If we add a delay between ptep_modify_prot_start() and
>>> ptep_modify_prot_commit(), this issue also triggers. Without the
>>> delay, we haven't reproduced the problem so far.
>>>
>>>>> In most cases, if we don't take the PTL, this issue won't be
>>>>> triggered. However, there is still a possibility of triggering it.
>>>>> The corner case is that task 2 triggers a page fault while task 1 is
>>>>> between ptep_modify_prot_start() and ptep_modify_prot_commit() in
>>>>> do_numa_page(). Furthermore, task 2 passes the check of whether the
>>>>> PTE is not none before task 1 updates the PTE in
>>>>> ptep_modify_prot_commit(), without taking the PTL.
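
[Note: a sketch of the interleaving described above, assuming the
recheck were done locklessly instead of under the ptl:

	/*
	 * task 1 (do_numa_page)            task 2 (filemap_fault)
	 * ---------------------            ----------------------
	 * ptep_modify_prot_start()
	 *   -> pte is transiently none
	 *                                  pte_none(ptep_get(ptep)) == true
	 *                                  -> proceeds to the major fault
	 *                                     path although the anon page
	 *                                     is resident
	 * ptep_modify_prot_commit()
	 *   -> pte restored
	 *
	 * Because do_numa_page() holds the ptl across this window, taking
	 * the ptl in filemap_fault() serializes the recheck against the
	 * commit, so the restored pte is observed.
	 */
]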
>>>>
>>>> There are very limited operations between ptep_modify_prot_start()
>>>> and ptep_modify_prot_commit(), while the code path from the page
>>>> fault to this check is long. My understanding is that it's very
>>>> likely the PTE is not none when the PTE check is done here without
>>>> holding the PTL (this is my theory :)).
>>>
>>> Yes, there is a high probability that this issue won't occur without
>>> taking the PTL.
>>>
>>>> On the other hand, acquiring/releasing the PTL may have a performance
>>>> impact. It may not be a big deal because of the IO operations in this
>>>> code path, but it's better to collect some performance data IMHO.
>>>
>>> We tested the performance of file private mapping page faults
>>> (page_fault2.c of will-it-scale [1]) and file shared mapping page
>>> faults (page_fault3.c of will-it-scale). The difference in performance
>>> (in operations per second) before and after the patch is about 0.7%
>>> on an x86 physical machine.
>>
>> Is it an improvement or a reduction?
>
> And I think that you need to test ramdisk cases too, to verify whether
> this causes a performance regression, and by how much.

Yes, I will. In addition, are there any recommended ramdisk test cases? 😁

> --
> Best Regards,
> Huang, Ying
>
>> --
>> Best Regards,
>> Huang, Ying
>>
>>> [1] https://github.com/antonblanchard/will-it-scale/tree/master
>>>
>>>> Regards
>>>> Yin, Fengwei
>>>>
>>>>>> Regards
>>>>>> Yin, Fengwei
>>>>>>
>>>>>>> +
>>>>>>>  		/* No page in the page cache at all */
>>>>>>>  		count_vm_event(PGMAJFAULT);
>>>>>>>  		count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);

-- 
Best Regards,
Peng