Message-ID: <5f949c1f-c56e-4227-af60-05a2a19f4c2e@linux.alibaba.com>
Date: Wed, 24 Apr 2024 17:26:17 +0800
Subject: Re: [RFC PATCH 1/5] mm: memory: extend finish_fault() to support large folio
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Ryan Roberts , akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com, 21cnbao@gmail.com, ying.huang@intel.com, shy828301@gmail.com, ziy@nvidia.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <6c418d70-a75d-4019-a0f5-56a61002d37a@arm.com>
References: <358aefb1858b63164894d7d8504f3dae0b495366.1713755580.git.baolin.wang@linux.alibaba.com> <6aa25e2a-a6b6-4ab7-8300-053ca3c0d748@arm.com> <6c418d70-a75d-4019-a0f5-56a61002d37a@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 2024/4/24 16:07, Ryan Roberts wrote:
> On 24/04/2024 04:23, Baolin Wang wrote:
>>
>>
>> On 2024/4/23 19:03, Ryan Roberts wrote:
>>> On 22/04/2024 08:02, Baolin Wang wrote:
>>>> Add large folio mapping establishment support for finish_fault() as a
>>>> preparation, to support multi-size THP allocation of anonymous shared
>>>> pages in the following patches.
>>>>
>>>> Signed-off-by: Baolin Wang
>>>> ---
>>>>  mm/memory.c | 25 ++++++++++++++++++-------
>>>>  1 file changed, 18 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index b6fa5146b260..094a76730776 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -4766,7 +4766,10 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
>>>>  {
>>>>  	struct vm_area_struct *vma = vmf->vma;
>>>>  	struct page *page;
>>>> +	struct folio *folio;
>>>>  	vm_fault_t ret;
>>>> +	int nr_pages, i;
>>>> +	unsigned long addr;
>>>>
>>>>  	/* Did we COW the page? */
>>>>  	if ((vmf->flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED))
>>>> @@ -4797,22 +4800,30 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
>>>>  			return VM_FAULT_OOM;
>>>>  	}
>>>>
>>>> +	folio = page_folio(page);
>>>> +	nr_pages = folio_nr_pages(folio);
>>>> +	addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE);
>>>
>>> I'm not sure this is safe.
>>> IIUC, finish_fault() is called for any file-backed mapping. So you could
>>> have a situation where part of a (regular) file is mapped in the process,
>>> faults, and hits in the pagecache. But the folio returned by the pagecache
>>> is bigger than the portion that the process has mapped. So you now end up
>>> mapping beyond the VMA limits? In the pagecache case, you also can't
>>> assume that the folio is naturally aligned in virtual address space.
>>
>> Good point. Yes, I think you are right: I need to consider the VMA limits,
>> and I should refer to the calculations of the start and end PTEs in
>> do_fault_around().
>
> You might also need to be careful not to increase reported RSS. I have a
> vague recollection that David once mentioned a problem with fault-around,
> because it causes the reported RSS to increase for the process, and this
> could lead to different decisions in other places. IIRC Red Hat had an
> advisory somewhere with the suggested workaround being to disable
> fault-around. For the anon-shared memory case it shouldn't be a problem,
> because the user has opted into allocating bigger blocks, but there may be
> a need to ensure we don't also start eagerly mapping regular files beyond
> what fault-around is configured for.

Thanks for reminding me. I also agree with you that this should not be a
problem here, since the user has explicitly opted into the larger folio
size, which is not the same as fault-around.
>>>>  	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
>>>> -				       vmf->address, &vmf->ptl);
>>>> +				       addr, &vmf->ptl);
>>>>  	if (!vmf->pte)
>>>>  		return VM_FAULT_NOPAGE;
>>>>
>>>>  	/* Re-check under ptl */
>>>> -	if (likely(!vmf_pte_changed(vmf))) {
>>>> -		struct folio *folio = page_folio(page);
>>>> -
>>>> -		set_pte_range(vmf, folio, page, 1, vmf->address);
>>>> -		ret = 0;
>>>> -	} else {
>>>> +	if (nr_pages == 1 && vmf_pte_changed(vmf)) {
>>>>  		update_mmu_tlb(vma, vmf->address, vmf->pte);
>>>>  		ret = VM_FAULT_NOPAGE;
>>>> +		goto unlock;
>>>> +	} else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
>>>
>>> I think you have grabbed this from do_anonymous_page()? But I'm not sure
>>> it works in the same way here as it does there. For the anon case, if
>>> userfaultfd is armed, alloc_anon_folio() will only ever allocate order-0.
>>> So we end up in
>>
>> IMO, the userfaultfd validation should be done in the vma->vm_ops->fault()
>> callback, to make sure nr_pages is always 1 if userfaultfd is armed.
>
> OK. Are you saying there is already logic to do that today? Great!

I mean I should add the userfaultfd validation in shmem_fault(), and maybe
also add a warning in finish_fault() to catch this issue if another
vma->vm_ops->fault() implementation starts to support large folio
allocation:

WARN_ON(nr_pages > 1 && userfaultfd_armed(vma));