From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 12 Dec 2023 11:51:53 +0000
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v3 01/15] mm: Batch-copy PTE ranges during fork()
Content-Language: en-GB
From: Ryan Roberts
To: Alistair Popple
Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier,
 Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
 Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov,
 Dmitry Vyukov, Vincenzo Frascino, Andrew Morton,
 Anshuman Khandual, Matthew Wilcox, Yu Zhao, Mark Rutland,
 David Hildenbrand, Kefeng Wang, John Hubbard, Zi Yan,
 Barry Song <21cnbao@gmail.com>, Yang Shi,
 linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
References: <20231204105440.61448-1-ryan.roberts@arm.com>
 <20231204105440.61448-2-ryan.roberts@arm.com>
 <87lea5a5qm.fsf@nvdebian.thelocal>
In-Reply-To: <87lea5a5qm.fsf@nvdebian.thelocal>
Content-Type:
 text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 08/12/2023 00:32, Alistair Popple wrote:
> 
> Ryan Roberts writes:
> 
>>  /*
>>   * On some architectures hardware does not set page access bit when accessing
>>   * memory page, it is responsibility of software setting this bit. It brings
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 1f18ed4a5497..8a87a488950c 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -924,68 +924,162 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
>>  	return 0;
>>  }
>>  
>> +static int folio_nr_pages_cont_mapped(struct folio *folio,
>> +				      struct page *page, pte_t *pte,
>> +				      unsigned long addr, unsigned long end,
>> +				      pte_t ptent, bool enforce_uffd_wp,
>> +				      int *dirty_nr, int *writable_nr)
>> +{
>> +	int floops;
>> +	int i;
>> +	unsigned long pfn;
>> +	bool prot_none;
>> +	bool uffd_wp;
>> +
>> +	if (!folio_test_large(folio))
>> +		return 1;
>> +
>> +	/*
>> +	 * Loop either to `end` or to end of folio if its contiguously mapped,
>> +	 * whichever is smaller.
>> +	 */
>> +	floops = (end - addr) >> PAGE_SHIFT;
>> +	floops = min_t(int, floops,
>> +		       folio_pfn(folio_next(folio)) - page_to_pfn(page));
> 
> Much better, thanks for addressing my comments here.
> 
>> +
>> +	pfn = page_to_pfn(page);
>> +	prot_none = pte_protnone(ptent);
>> +	uffd_wp = pte_uffd_wp(ptent);
>> +
>> +	*dirty_nr = !!pte_dirty(ptent);
>> +	*writable_nr = !!pte_write(ptent);
>> +
>> +	pfn++;
>> +	pte++;
>> +
>> +	for (i = 1; i < floops; i++) {
>> +		ptent = ptep_get(pte);
>> +
>> +		if (!pte_present(ptent) || pte_pfn(ptent) != pfn ||
>> +		    prot_none != pte_protnone(ptent) ||
>> +		    (enforce_uffd_wp && uffd_wp != pte_uffd_wp(ptent)))
>> +			break;
>> +
>> +		if (pte_dirty(ptent))
>> +			(*dirty_nr)++;
>> +		if (pte_write(ptent))
>> +			(*writable_nr)++;
>> +
>> +		pfn++;
>> +		pte++;
>> +	}
>> +
>> +	return i;
>> +}
>> +
>>  /*
>> - * Copy one pte. Returns 0 if succeeded, or -EAGAIN if one preallocated page
>> - * is required to copy this pte.
>> + * Copy set of contiguous ptes. Returns number of ptes copied if succeeded
>> + * (always gte 1), or -EAGAIN if one preallocated page is required to copy the
>> + * first pte.
>>   */
>>  static inline int
>> -copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>> -		 pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
>> -		 struct folio **prealloc)
>> +copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>> +		  pte_t *dst_pte, pte_t *src_pte,
>> +		  unsigned long addr, unsigned long end,
>> +		  int *rss, struct folio **prealloc)
>>  {
>>  	struct mm_struct *src_mm = src_vma->vm_mm;
>>  	unsigned long vm_flags = src_vma->vm_flags;
>>  	pte_t pte = ptep_get(src_pte);
>>  	struct page *page;
>>  	struct folio *folio;
>> +	int nr = 1;
>> +	bool anon = false;
>> +	bool enforce_uffd_wp = userfaultfd_wp(dst_vma);
>> +	int nr_dirty = !!pte_dirty(pte);
>> +	int nr_writable = !!pte_write(pte);
>> +	int i, ret;
>>  
>>  	page = vm_normal_page(src_vma, addr, pte);
>> -	if (page)
>> +	if (page) {
>>  		folio = page_folio(page);
>> -	if (page && folio_test_anon(folio)) {
>> -		/*
>> -		 * If this page may have been pinned by the parent process,
>> -		 * copy the page immediately for the child so that we'll always
>> -		 * guarantee the pinned page won't be randomly replaced in the
>> -		 * future.
>> -		 */
>> -		folio_get(folio);
>> -		if (unlikely(page_try_dup_anon_rmap(page, false, src_vma))) {
>> -			/* Page may be pinned, we have to copy. */
>> -			folio_put(folio);
>> -			return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
>> -					addr, rss, prealloc, page);
>> +		anon = folio_test_anon(folio);
>> +		nr = folio_nr_pages_cont_mapped(folio, page, src_pte, addr, end,
>> +						pte, enforce_uffd_wp, &nr_dirty,
>> +						&nr_writable);
>> +		folio_ref_add(folio, nr);
>> +
>> +		for (i = 0; i < nr; i++, page++) {
>> +			if (anon) {
>> +				/*
>> +				 * If this page may have been pinned by the
>> +				 * parent process, copy the page immediately for
>> +				 * the child so that we'll always guarantee the
>> +				 * pinned page won't be randomly replaced in the
>> +				 * future.
>> +				 */
>> +				if (unlikely(page_try_dup_anon_rmap(
>> +						page, false, src_vma))) {
>> +					if (i != 0)
>> +						break;
>> +					/* Page may be pinned, we have to copy. */
>> +					folio_ref_sub(folio, nr);
>> +					ret = copy_present_page(
>> +						dst_vma, src_vma, dst_pte,
>> +						src_pte, addr, rss, prealloc,
>> +						page);
>> +					return ret == 0 ? 1 : ret;
>> +				}
>> +				rss[MM_ANONPAGES]++;
>> +				VM_BUG_ON(PageAnonExclusive(page));
>> +			} else {
>> +				page_dup_file_rmap(page, false);
>> +				rss[mm_counter_file(page)]++;
>> +			}
>>  		}
>> -		rss[MM_ANONPAGES]++;
>> -	} else if (page) {
>> -		folio_get(folio);
>> -		page_dup_file_rmap(page, false);
>> -		rss[mm_counter_file(page)]++;
>> -	}
>>  
>> -	/*
>> -	 * If it's a COW mapping, write protect it both
>> -	 * in the parent and the child
>> -	 */
>> -	if (is_cow_mapping(vm_flags) && pte_write(pte)) {
>> -		ptep_set_wrprotect(src_mm, addr, src_pte);
>> -		pte = pte_wrprotect(pte);
>> +		if (i < nr) {
>> +			folio_ref_sub(folio, nr - i);
>> +			nr = i;
>> +		}
>>  	}
>> -	VM_BUG_ON(page && folio_test_anon(folio) && PageAnonExclusive(page));
>>  
>>  	/*
>> -	 * If it's a shared mapping, mark it clean in
>> -	 * the child
>> +	 * If it's a shared mapping, mark it clean and write protected in the
>> +	 * child, and rely on a write fault to fix up the permissions. This
>> +	 * allows determining batch size without having to consider RO/RW
>> +	 * permissions. As an optimization, skip wrprotect if all ptes in the
>> +	 * batch have the same permissions.
>> +	 *
>> +	 * If its a private (CoW) mapping, mark it dirty in the child if _any_
>> +	 * of the parent mappings in the block were marked dirty. The contiguous
>> +	 * block of mappings are all backed by the same folio, so if any are
>> +	 * dirty then the whole folio is dirty. This allows determining batch
>> +	 * size without having to consider the dirty bit. Further, write protect
>> +	 * it both in the parent and the child so that a future write will cause
>> +	 * a CoW. As an optimization, skip the wrprotect if all the ptes in
>> +	 * the batch are already readonly.
>>  	 */
>> -	if (vm_flags & VM_SHARED)
>> +	if (vm_flags & VM_SHARED) {
>>  		pte = pte_mkclean(pte);
>> -	pte = pte_mkold(pte);
>> +		if (nr_writable > 0 && nr_writable < nr)
>> +			pte = pte_wrprotect(pte);
>> +	} else {
>> +		if (nr_dirty)
>> +			pte = pte_mkdirty(pte);
>> +		if (nr_writable) {
>> +			ptep_set_wrprotects(src_mm, addr, src_pte, nr);
>> +			pte = pte_wrprotect(pte);
>> +		}
>> +	}
>>  
>> -	if (!userfaultfd_wp(dst_vma))
>> +	pte = pte_mkold(pte);
>> +	pte = pte_clear_soft_dirty(pte);
>> +	if (!enforce_uffd_wp)
>>  		pte = pte_clear_uffd_wp(pte);
>>  
>> -	set_pte_at(dst_vma->vm_mm, addr, dst_pte, pte);
>> -	return 0;
>> +	set_ptes(dst_vma->vm_mm, addr, dst_pte, pte, nr);
>> +	return nr;
> 
> I don't have any further comments and you have addressed my previous
> ones so feel free to add:
> 
> Reviewed-by: Alistair Popple
> 
> However whilst I think the above CoW sequence looks correct it would be
> nice if someone else could take a look as well.

Thanks for the RB! David has taken a look at the CoW part and helped develop
the logic, so I'm pretty confident in it.

However, David also sent me some microbenchmarks for fork, DONTNEED, munmap,
etc. for order-0 and PTE-mapped THP (2M). I'm seeing a few performance
regressions with those, which I'm currently trying to resolve.

At the moment it's looking like I'll have to expose some function to allow the
core code to skip forward a number of ptes so that in the contpte-mapped case,
the core code only does ptep_get() once per contpte block. As a result there
will be some churn here.

I'm currently working out some bugs and hope to post an updated series with
perf numbers for those microbenchmarks by the end of the week, all being well.
> 
>>  }
>>  
>>  static inline struct folio *page_copy_prealloc(struct mm_struct *src_mm,
>> @@ -1021,6 +1115,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>>  	int rss[NR_MM_COUNTERS];
>>  	swp_entry_t entry = (swp_entry_t){0};
>>  	struct folio *prealloc = NULL;
>> +	int nr_ptes;
>>  
>>  again:
>>  	progress = 0;
>> @@ -1051,6 +1146,8 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>>  	arch_enter_lazy_mmu_mode();
>>  
>>  	do {
>> +		nr_ptes = 1;
>> +
>>  		/*
>>  		 * We are holding two locks at this point - either of them
>>  		 * could generate latencies in another task on another CPU.
>> @@ -1086,16 +1183,21 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>>  			 * the now present pte.
>>  			 */
>>  			WARN_ON_ONCE(ret != -ENOENT);
>> +			ret = 0;
>>  		}
>> -		/* copy_present_pte() will clear `*prealloc' if consumed */
>> -		ret = copy_present_pte(dst_vma, src_vma, dst_pte, src_pte,
>> -				       addr, rss, &prealloc);
>> +		/* copy_present_ptes() will clear `*prealloc' if consumed */
>> +		nr_ptes = copy_present_ptes(dst_vma, src_vma, dst_pte, src_pte,
>> +					    addr, end, rss, &prealloc);
>> +
>>  		/*
>>  		 * If we need a pre-allocated page for this pte, drop the
>>  		 * locks, allocate, and try again.
>>  		 */
>> -		if (unlikely(ret == -EAGAIN))
>> +		if (unlikely(nr_ptes == -EAGAIN)) {
>> +			ret = -EAGAIN;
>>  			break;
>> +		}
>> +
>>  		if (unlikely(prealloc)) {
>>  			/*
>>  			 * pre-alloc page cannot be reused by next time so as
>> @@ -1106,8 +1208,9 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>>  			folio_put(prealloc);
>>  			prealloc = NULL;
>>  		}
>> -		progress += 8;
>> -	} while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);
>> +		progress += 8 * nr_ptes;
>> +	} while (dst_pte += nr_ptes, src_pte += nr_ptes,
>> +		 addr += PAGE_SIZE * nr_ptes, addr != end);
>>  
>>  	arch_leave_lazy_mmu_mode();
>>  	pte_unmap_unlock(orig_src_pte, src_ptl);
> 