From: "zhangpeng (AS)" <zhangpeng362@huawei.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Yin Fengwei <fengwei.yin@intel.com>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>, <akpm@linux-foundation.org>,
	<willy@infradead.org>, <aneesh.kumar@linux.ibm.com>,
	<shy828301@gmail.com>, <hughd@google.com>, <david@redhat.com>,
	<wangkefeng.wang@huawei.com>, <sunnanyong@huawei.com>
Subject: Re: [RFC PATCH] mm: filemap: avoid unnecessary major faults in filemap_fault()
Date: Thu, 1 Feb 2024 20:10:24 +0800	[thread overview]
Message-ID: <4da573ec-a2f9-84f4-f729-540492192957@huawei.com> (raw)
In-Reply-To: <87plzt464d.fsf@yhuang6-desk2.ccr.corp.intel.com>

On 2023/11/29 10:59, Huang, Ying wrote:

> "zhangpeng (AS)" <zhangpeng362@huawei.com> writes:
>
>> On 2023/11/24 16:04, Huang, Ying wrote:
>>
>>> "zhangpeng (AS)" <zhangpeng362@huawei.com> writes:
>>>
>>>> On 2023/11/24 12:26, Huang, Ying wrote:
>>>>
>>>>> "Huang, Ying" <ying.huang@intel.com> writes:
>>>>>
>>>>>> "zhangpeng (AS)" <zhangpeng362@huawei.com> writes:
>>>>>>
>>>>>>> On 2023/11/23 13:26, Yin Fengwei wrote:
>>>>>>>
>>>>>>>> On 11/23/23 12:12, zhangpeng (AS) wrote:
>>>>>>>>> On 2023/11/23 9:09, Yin Fengwei wrote:
>>>>>>>>>
>>>>>>>>>> Hi Peng,
>>>>>>>>>>
>>>>>>>>>> On 11/22/23 22:00, Peng Zhang wrote:
>>>>>>>>>>> From: ZhangPeng <zhangpeng362@huawei.com>
>>>>>>>>>>>
>>>>>>>>>>> A major fault can occur when an application uses
>>>>>>>>>>> mlockall(MCL_CURRENT | MCL_FUTURE), leading to an unexpected
>>>>>>>>>>> performance issue[1].
>>>>>>>>>>>
>>>>>>>>>>> This is caused by the pte being temporarily cleared during a
>>>>>>>>>>> read/modify/write update of the pte, e.g. in
>>>>>>>>>>> do_numa_page()/change_pte_range().
>>>>>>>>>>>
>>>>>>>>>>> For the data segment of a user-mode program, the global variable
>>>>>>>>>>> area is a private mapping. After the page cache is loaded, a
>>>>>>>>>>> private anonymous page is generated once COW is triggered.
>>>>>>>>>>> mlockall() can lock the COW pages (anonymous pages), but the
>>>>>>>>>>> original file pages cannot be locked and may be reclaimed. If the
>>>>>>>>>>> global variable (private anon page) is accessed while vmf->pte is
>>>>>>>>>>> zeroed during a NUMA fault, a file page fault is triggered.
>>>>>>>>>>>
>>>>>>>>>>> By that point, the original file page may have been reclaimed. If
>>>>>>>>>>> the page cache is not available, a major fault is triggered and
>>>>>>>>>>> the file is read, causing additional overhead.
>>>>>>>>>>>
>>>>>>>>>>> Fix this by rechecking the pte, with the ptl held, in
>>>>>>>>>>> filemap_fault() before triggering a major fault.
>>>>>>>>>>>
>>>>>>>>>>> [1] https://lore.kernel.org/linux-mm/9e62fd9a-bee0-52bf-50a7-498fa17434ee@huawei.com/
>>>>>>>>>>>
>>>>>>>>>>> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
>>>>>>>>>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>>>>>>>>> ---
>>>>>>>>>>>       mm/filemap.c | 14 ++++++++++++++
>>>>>>>>>>>       1 file changed, 14 insertions(+)
>>>>>>>>>>>
>>>>>>>>>>> diff --git a/mm/filemap.c b/mm/filemap.c
>>>>>>>>>>> index 71f00539ac00..bb5e6a2790dc 100644
>>>>>>>>>>> --- a/mm/filemap.c
>>>>>>>>>>> +++ b/mm/filemap.c
>>>>>>>>>>> @@ -3226,6 +3226,20 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>>>>>>>>>>>                   mapping_locked = true;
>>>>>>>>>>>               }
>>>>>>>>>>>           } else {
>>>>>>>>>>> +        pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>>>>>>>>>>> +                          vmf->address, &vmf->ptl);
>>>>>>>>>>> +        if (ptep) {
>>>>>>>>>>> +            /*
>>>>>>>>>>> +             * Recheck pte with ptl locked as the pte can be cleared
>>>>>>>>>>> +             * temporarily during a read/modify/write update.
>>>>>>>>>>> +             */
>>>>>>>>>>> +            if (unlikely(!pte_none(ptep_get(ptep))))
>>>>>>>>>>> +                ret = VM_FAULT_NOPAGE;
>>>>>>>>>>> +            pte_unmap_unlock(ptep, vmf->ptl);
>>>>>>>>>>> +            if (unlikely(ret))
>>>>>>>>>>> +                return ret;
>>>>>>>>>>> +        }
>>>>>>>>>> I am curious. Did you try not to take PTL here and just check whether PTE is not NONE?
>>>>>>>>> Thank you for your reply.
>>>>>>>>>
>>>>>>>>> If we don't take PTL, the current use case won't trigger this issue either.
>>>>>>>> Is this verified by testing or just in theory?
>>>>>>> If we add a delay between ptep_modify_prot_start() and
>>>>>>> ptep_modify_prot_commit(), this issue also triggers. Without the
>>>>>>> delay, we haven't been able to reproduce the problem so far.
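
The kind of delay described above might look like the following debug-only
sketch in do_numa_page() (illustrative, not part of the posted patch;
mdelay() from <linux/delay.h> busy-waits, so it can be used under the ptl):

	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
	mdelay(100);	/* widen the transient pte_none() window for testing */
	pte = pte_modify(old_pte, vma->vm_page_prot);
	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
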
>>>>>>>
>>>>>>>>> In most cases, if we don't take the PTL, this issue won't be
>>>>>>>>> triggered. However, there is still a possibility of triggering it.
>>>>>>>>> The corner case is that task 2 triggers a page fault while task 1
>>>>>>>>> is between ptep_modify_prot_start() and ptep_modify_prot_commit()
>>>>>>>>> in do_numa_page(). Because the check is done without taking the
>>>>>>>>> PTL, task 2 gets past the PTE-is-not-none check before task 1
>>>>>>>>> restores the PTE in ptep_modify_prot_commit().
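
To make that window concrete, here is a simplified timeline (illustrative
only; the generic ptep_modify_prot_start() implementation clears the PTE,
and details vary by architecture and kernel version):

/*
 *  Task 1 (do_numa_page(), ptl held)    Task 2 (page fault, no ptl)
 *  ---------------------------------    ---------------------------
 *  old_pte = ptep_modify_prot_start()
 *    -> PTE is transiently cleared
 *                                       reads the PTE locklessly,
 *                                       sees pte_none(), proceeds to
 *                                       filemap_fault()
 *  ptep_modify_prot_commit()
 *    -> PTE is valid again
 *                                       page cache already reclaimed
 *                                       -> unnecessary major fault
 */
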
>>>>>>>> There are very few operations between ptep_modify_prot_start() and
>>>>>>>> ptep_modify_prot_commit(), while the code path from page fault to
>>>>>>>> this check is long. My understanding is that it's very likely the
>>>>>>>> PTE is not none when the PTE check is done here without holding the
>>>>>>>> PTL (this is my theory. :)).
>>>>>>> Yes, there is a high probability that this issue won't occur without taking PTL.
>>>>>>>
>>>>>>>> On the other hand, acquiring/releasing the PTL may have a
>>>>>>>> performance impact. It may not be a big deal because of the IO
>>>>>>>> operations in this code path, but it's better to collect some
>>>>>>>> performance data, IMHO.
>>>>>>> We tested the performance of file private-mapping page faults
>>>>>>> (page_fault2.c of will-it-scale [1]) and file shared-mapping page
>>>>>>> faults (page_fault3.c of will-it-scale). The difference in
>>>>>>> performance (in operations per second) before and after the patch is
>>>>>>> applied is about 0.7% on an x86 physical machine.
>>>>>> Is that an improvement or a reduction?
>>>>> And I think that you need to test ramdisk cases too, to verify
>>>>> whether this causes a performance regression and by how much.
>>>> Yes, I will.
>>>> In addition, are there any recommended ramdisk test cases? 😁
>>> I think that you can start with the will-it-scale test case you used
>>> before.  And you can try some workloads with a large number of major
>>> faults, like file reads with mmap.
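
A minimal mmap file-read workload of that kind might look like the sketch
below (illustrative only, not taken from will-it-scale; pass the path of a
large file that is not in the page cache):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	struct stat st;
	unsigned long sum = 0;
	int fd;

	if (argc < 2)
		return 1;
	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || fstat(fd, &st) < 0)
		return 1;
	char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED)
		return 1;
	/* the first touch of each uncached page takes a major fault */
	for (off_t off = 0; off < st.st_size; off += 4096)
		sum += (unsigned char)p[off];
	printf("checksum: %lu\n", sum);
	munmap(p, st.st_size);
	close(fd);
	return 0;
}
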
>> I used will-it-scale to test the page faults of ext4 files and
>> tmpfs files. The data is the average change compared with the
>> mainline after the patch is applied. The test results are within
>> the range of fluctuation, and there is no obvious difference.
>> The test results are as follows:
>>
>>                            processes processes_idle threads threads_idle
>> ext4  private file write: -0.51%    0.08%          -0.03%  -0.04%
>> ext4  shared  file write:  0.135%  -0.531%          2.883% -0.772%
>> tmpfs private file write: -0.344%  -0.110%          0.200%  0.145%
>> tmpfs shared  file write:  0.958%   0.101%          2.781% -0.337%
>> tmpfs private file read:  -0.16%    0.00%          -0.12%   0.41%
> Thank you very much for test results!
>
> We shouldn't use tmpfs, because there will be no major faults.  Please
> check your major fault counts to verify that.  IIUC, a ram disk with a
> disk file system on it should be used.
>
> And, please make sure that there's no heavy lock contention in the base
> kernel, because if some heavy lock contention kills performance, there
> will be no performance difference between the base and patched kernels.

I'm sorry it took me so long to finish the tests and reply.

I used will-it-scale to test the page faults of ramdisk files. The
data is the average change compared with the mainline after the patch
is applied. The test results are as follows:

                           processes processes_idle threads threads_idle
ramdisk private file write: -0.48%   0.23%          -1.08%   0.27%
ramdisk private file  read:  0.07%  -6.90%          -5.85%  -0.70%


Applied patch:

diff --git a/mm/filemap.c b/mm/filemap.c
index 32eedf3afd45..2db9ccfbd5e3 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3226,6 +3226,22 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
                         mapping_locked = true;
                 }
         } else {
+               if (!pmd_none(*vmf->pmd)) {
+                       pte_t *ptep = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
+                                                         vmf->address, &vmf->ptl);
+                       if (unlikely(!ptep))
+                               return VM_FAULT_NOPAGE;
+                       /*
+                        * Recheck pte with ptl locked as the pte can be cleared
+                        * temporarily during a read/modify/write update.
+                        */
+                       if (unlikely(!pte_none(ptep_get(ptep))))
+                               ret = VM_FAULT_NOPAGE;
+                       pte_unmap_unlock(ptep, vmf->ptl);
+                       if (unlikely(ret))
+                               return ret;
+               }
+
                 /* No page in the page cache at all */
                 count_vm_event(PGMAJFAULT);
                 count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
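
For reference, the unexpected major faults can be observed from userspace
with a sketch like the one below (illustrative only: /tmp/datafile is a
hypothetical path, and the race depends on NUMA-balancing timing, so it
does not reproduce deterministically):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
	struct rusage before, after;
	int fd = open("/tmp/datafile", O_RDWR);	/* hypothetical path */

	if (fd < 0)
		return 1;
	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	p[0] = 1;		/* the write triggers COW: a private anon page */
	mlockall(MCL_CURRENT | MCL_FUTURE);	/* locks the anon page only */

	getrusage(RUSAGE_SELF, &before);
	for (long i = 0; i < 100000000; i++)
		(void)*(volatile char *)p;	/* should never major-fault */
	getrusage(RUSAGE_SELF, &after);

	/* any nonzero delta here is the unexpected major fault */
	printf("major faults: %ld\n", after.ru_majflt - before.ru_majflt);
	return 0;
}
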

> --
> Best Regards,
> Huang, Ying

-- 
Best Regards,
Peng


