From: Baolin Wang <baolin.wang@linux.alibaba.com>
Date: Mon, 3 Jun 2024 17:37:13 +0800
Subject: Re: [PATCH v3 1/6] mm: memory: extend finish_fault() to support large folio
To: Barry Song <21cnbao@gmail.com>
Cc: akpm@linux-foundation.org, hughd@google.com, willy@infradead.org,
 david@redhat.com, wangkefeng.wang@huawei.com, ying.huang@intel.com,
 ryan.roberts@arm.com, shy828301@gmail.com, ziy@nvidia.com,
 ioworker0@gmail.com, da.gomez@samsung.com, p.raghav@samsung.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Message-ID: <90fa2110-a74b-4445-b93d-63110a4a9f8a@linux.alibaba.com>
Content-Type: text/plain; charset=UTF-8; format=flowed

On 2024/6/3 17:01, Barry Song wrote:
> On Mon, Jun 3, 2024 at 8:58 PM Barry Song <21cnbao@gmail.com> wrote:
>>
>> On Mon, Jun 3, 2024 at 8:29 PM Baolin Wang
>> wrote:
>>>
>>> On 2024/6/3 13:28, Barry Song wrote:
>>>> On Thu, May 30, 2024 at 2:04 PM Baolin Wang
>>>> wrote:
>>>>>
>>>>> Add large folio mapping establishment support for finish_fault() as a preparation,
>>>>> to support multi-size THP allocation of anonymous shmem pages in the following
>>>>> patches.
>>>>>
>>>>> Signed-off-by: Baolin Wang
>>>>> ---
>>>>>  mm/memory.c | 58 ++++++++++++++++++++++++++++++++++++++++++++---------
>>>>>  1 file changed, 48 insertions(+), 10 deletions(-)
>>>>>
>>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>>> index eef4e482c0c2..435187ff7ea4 100644
>>>>> --- a/mm/memory.c
>>>>> +++ b/mm/memory.c
>>>>> @@ -4831,9 +4831,12 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
>>>>>  {
>>>>>         struct vm_area_struct *vma = vmf->vma;
>>>>>         struct page *page;
>>>>> +       struct folio *folio;
>>>>>         vm_fault_t ret;
>>>>>         bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
>>>>>                       !(vma->vm_flags & VM_SHARED);
>>>>> +       int type, nr_pages, i;
>>>>> +       unsigned long addr = vmf->address;
>>>>>
>>>>>         /* Did we COW the page? */
>>>>>         if (is_cow)
>>>>> @@ -4864,24 +4867,59 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
>>>>>                 return VM_FAULT_OOM;
>>>>>         }
>>>>>
>>>>> +       folio = page_folio(page);
>>>>> +       nr_pages = folio_nr_pages(folio);
>>>>> +
>>>>> +       /*
>>>>> +        * Using per-page fault to maintain the uffd semantics, and same
>>>>> +        * approach also applies to non-anonymous-shmem faults to avoid
>>>>> +        * inflating the RSS of the process.
>>>>
>>>> I don't feel the comment explains the root cause.
>>>> For non-shmem, anyway we have allocated the memory? Avoiding inflating
>>>> RSS seems not so useful as we have occupied the memory. the memory footprint
>>>
>>> This is also to keep the same behavior as before for non-anon-shmem, and
>>> will be discussed in the future.
>>
>> OK.
>>
>>>
>>>> is what we really care about. so we want to rely on read-ahead hints of subpage
>>>> to determine read-ahead size? that is why we don't map nr_pages for non-shmem
>>>> files though we can potentially reduce nr_pages - 1 page faults?
>>>
>>> IMHO, there are two cases for non-anon-shmem:
>>> (1) read mmap() faults: we can rely on the 'fault_around_bytes'
>>> interface to determine what size of mapping to build.
>>> (2) writable mmap() faults: I want to keep the same behavior as before
>>> (per-page fault), but we can talk about this when I send new patches to
>>> use mTHP to control large folio allocation for writable mmap().
>>
>> OK.
>>
>>>
>>>>> +        */
>>>>> +       if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
>>>>> +               nr_pages = 1;
>>>>> +       } else if (nr_pages > 1) {
>>>>> +               pgoff_t idx = folio_page_idx(folio, page);
>>>>> +               /* The page offset of vmf->address within the VMA. */
>>>>> +               pgoff_t vma_off = vmf->pgoff - vmf->vma->vm_pgoff;
>>>>> +
>>>>> +               /*
>>>>> +                * Fallback to per-page fault in case the folio size in page
>>>>> +                * cache beyond the VMA limits.
>>>>> +                */
>>>>> +               if (unlikely(vma_off < idx ||
>>>>> +                            vma_off + (nr_pages - idx) > vma_pages(vma))) {
>>>>> +                       nr_pages = 1;
>>>>> +               } else {
>>>>> +                       /* Now we can set mappings for the whole large folio. */
>>>>> +                       addr = vmf->address - idx * PAGE_SIZE;
>>>>> +                       page = &folio->page;
>>>>> +               }
>>>>> +       }
>>>>> +
>>>>>         vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
>>>>> -                                      vmf->address, &vmf->ptl);
>>>>> +                                      addr, &vmf->ptl);
>>>>>         if (!vmf->pte)
>>>>>                 return VM_FAULT_NOPAGE;
>>>>>
>>>>>         /* Re-check under ptl */
>>>>> -       if (likely(!vmf_pte_changed(vmf))) {
>>>>> -               struct folio *folio = page_folio(page);
>>>>> -               int type = is_cow ? MM_ANONPAGES : mm_counter_file(folio);
>>>>> -
>>>>> -               set_pte_range(vmf, folio, page, 1, vmf->address);
>>>>> -               add_mm_counter(vma->vm_mm, type, 1);
>>>>> -               ret = 0;
>>>>> -       } else {
>>>>> -               update_mmu_tlb(vma, vmf->address, vmf->pte);
>>>>> +       if (nr_pages == 1 && unlikely(vmf_pte_changed(vmf))) {
>>>>> +               update_mmu_tlb(vma, addr, vmf->pte);
>>>>>                 ret = VM_FAULT_NOPAGE;
>>>>> +               goto unlock;
>>>>> +       } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
>>>>
>>>> In what case we can't use !pte_range_none(vmf->pte, 1) for nr_pages == 1
>>>> then unify the code for nr_pages==1 and nr_pages > 1?
>>>>
>>>> It seems this has been discussed before, but I forget the reason.
>>>
>>> IIUC, this is for the uffd case, which is not a none pte entry.
>>
>> Is it possible to have a COW case for shmem? For example, if someone
>> maps a shmem file as read-only and then writes to it, would that
>> prevent the use of pte_range_none?
>
> sorry, i mean PRIVATE but not READ-ONLY.

Yes, I think so. The CoW case still uses a per-page fault in
do_cow_fault().

>> Furthermore, if we encounter a large folio in shmem while reading,
>> does it necessarily mean we can map the entire folio? Is it possible
>> for some processes to

This will now depend on the 'fault_around_bytes' interface.

>> only map part of large folios? For instance, if process A allocates
>> large folios and process B maps only part of this shmem file or
>> partially unmaps a large folio, how would that be handled?

This is certainly possible. For tmpfs:
(1) If 'fault_around_bytes' is enabled, filemap_map_pages() will handle
the partial mapping of the large folio for process B.
(2) If 'fault_around_bytes' is set to 0, finish_fault() will fall back
to a per-page fault.

For anonymous shmem, process B should be the child of process A in your
case, then:
(1) If 'fault_around_bytes' is enabled, the behavior is the same as for
tmpfs.
(2) If 'fault_around_bytes' is set to 0, finish_fault() will build the
whole large folio mapping for process B, since process B copies the
shared VMA from parent process A and thus shares the mTHP mapping.

>> Apologies for not debugging this thoroughly, but these two corner
>> cases seem worth considering. If these scenarios have already been
>> addressed, please disregard my comments.

No worries :) Thanks for your valuable input.