From: David Hildenbrand
Organization: Red Hat
Date: Tue, 14 Feb 2023 18:39:21 +0100
Subject: Re: [PATCH v4 00/14] Introduce Copy-On-Write to Page Table
To: Yang Shi
Cc: Chih-En Lin, Pasha Tatashin, Andrew Morton, Qi Zheng,
    "Matthew Wilcox (Oracle)", Christophe Leroy, John Hubbard, Nadav Amit,
    Barry Song, Steven Rostedt, Masami Hiramatsu, Peter Zijlstra,
    Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland,
    Alexander Shishkin, Jiri Olsa, Namhyung Kim, Peter Xu,
    Vlastimil Babka, Zach O'Keefe, Yun Zhou, Hugh Dickins,
    Suren Baghdasaryan, Yu Zhao, Juergen Gross, Tong Tiangen, Liu Shixin,
    Anshuman Khandual, Li kunyu, Minchan Kim, Miaohe Lin, Gautam Menghani,
    Catalin Marinas, Mark Brown, Will Deacon, Vincenzo Frascino,
    Thomas Gleixner, "Eric W. Biederman", Andy Lutomirski,
    Sebastian Andrzej Siewior, "Liam R. Howlett", Fenghua Yu,
    Andrei Vagin, Barret Rhoden, Michal Hocko, "Jason A. Donenfeld",
    Alexey Gladkov, linux-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    Dinglan Peng, Pedro Fonseca, Jim Huang, Huichun Feng
References: <20230207035139.272707-1-shiyn.lin@gmail.com>
    <62c44d12-933d-ee66-ef50-467cd8d30a58@redhat.com>

On 14.02.23 18:23, Yang Shi wrote:
> On Tue, Feb 14, 2023 at 1:58 AM David Hildenbrand wrote:
>>
>> On 10.02.23 18:20, Chih-En Lin wrote:
>>> On Fri, Feb 10, 2023 at 11:21:16AM -0500, Pasha Tatashin wrote:
>>>>>>> Currently, copy-on-write is only used for the mapped memory; the
>>>>>>> child process still needs to copy the entire page table from the
>>>>>>> parent process during forking. The parent process might take a lot
>>>>>>> of time and memory to copy the page table when the parent has a
>>>>>>> big page table allocated. For example, the memory usage of a
>>>>>>> process after forking with 1 GB mapped memory is as follows:
>>>>>>
>>>>>> For some reason, I was not able to reproduce performance
>>>>>> improvements with a simple fork() performance measurement program.
>>>>>> The results that I saw are the following:
>>>>>>
>>>>>> Base:
>>>>>> Fork latency per gigabyte: 0.004416 seconds
>>>>>> Fork latency per gigabyte: 0.004382 seconds
>>>>>> Fork latency per gigabyte: 0.004442 seconds
>>>>>> COW kernel:
>>>>>> Fork latency per gigabyte: 0.004524 seconds
>>>>>> Fork latency per gigabyte: 0.004764 seconds
>>>>>> Fork latency per gigabyte: 0.004547 seconds
>>>>>>
>>>>>> AMD EPYC 7B12 64-Core Processor
>>>>>> Base:
>>>>>> Fork latency per gigabyte: 0.003923 seconds
>>>>>> Fork latency per gigabyte: 0.003909 seconds
>>>>>> Fork latency per gigabyte: 0.003955 seconds
>>>>>> COW kernel:
>>>>>> Fork latency per gigabyte: 0.004221 seconds
>>>>>> Fork latency per gigabyte: 0.003882 seconds
>>>>>> Fork latency per gigabyte: 0.003854 seconds
>>>>>>
>>>>>> Given that the page table for the child is not copied, I was
>>>>>> expecting the performance to be better with the COW kernel, and
>>>>>> also not to depend on the size of the parent.
>>>>>
>>>>> Yes, the child won't duplicate the page table, but fork will still
>>>>> traverse all the page table entries to do the accounting.
>>>>> And, since this patch extends COW to the PTE table level, it's no
>>>>> longer mapped-page (page table entry) grained, so we have to
>>>>> guarantee that all the mapped pages are available to do COW mapping
>>>>> in such a page table.
>>>>> This kind of checking also costs some time.
>>>>> As a result, because of the accounting and the checking, the COW PTE
>>>>> fork still depends on the size of the parent, so the improvement
>>>>> might not be significant.
>>>>
>>>> The current version of the series does not provide any performance
>>>> improvements for fork(). I would recommend removing claims from the
>>>> cover letter about better fork() performance, as this may be
>>>> misleading for those looking for a way to speed up forking. In my
>>>
>>> From v3 to v4, I changed the implementation of the COW fork() part to
>>> do the accounting and checking. At the time, I also removed most of
>>> the descriptions about better fork() performance. Maybe that's not
>>> enough and it's still somewhat misleading. I will fix this in the
>>> next version. Thanks.
>>>
>>>> case, I was looking to speed up Redis OSS, which relies on fork() to
>>>> create consistent snapshots for driving replicas/backups. The O(N)
>>>> per-page operation causes fork() to be slow, so I was hoping that
>>>> this series, which does not duplicate the VA during fork(), would
>>>> make the operation much quicker.
>>>
>>> Indeed, at first, I tried to avoid the O(N) per-page operation by
>>> deferring the accounting and the swap stuff to the page fault. But,
>>> as I mentioned, it's not suitable for the mainline.
>>>
>>> Honestly, for improving fork(), I have an idea to skip the per-page
>>> operation without breaking the logic. However, this would introduce a
>>> complicated mechanism and may have overhead for other features. It
>>> might not be worth it. It's hard to strike a balance between an
>>> over-complicated mechanism with (probably) better performance and
>>> data consistency with the page status. So, I would focus on the safe
>>> and stable approach first.
>>
>> Yes, it is most probably possible, but complexity, robustness and
>> maintainability have to be considered as well.
>>
>> Thanks for implementing this approach (only deduplication without
>> other optimizations) and evaluating it accordingly.
>> It's certainly "cleaner", such that we only have to mess with
>> unsharing and not with other accounting/pinning/mapcount thingies. But
>> it also highlights how intrusive even this basic deduplication
>> approach already is -- and that most benefits of the original approach
>> require even more complexity on top.
>>
>> I am not quite sure if the benefit is worth the price (I am not the
>> one to decide and I would like to hear other opinions).
>>
>> My quick thoughts after skimming over the core parts of this series:
>>
>> (1) forgetting to break COW on a PTE in some pgtable walker feels
>>     quite likely (meaning that it might be fairly error-prone), and
>>     forgetting to break COW on a PTE table means accidentally
>>     modifying the shared table.
>> (2) break_cow_pte() can fail, which means that we can now fail some
>>     operations (possibly silently halfway through). For example,
>>     looking at your change_pte_range() change, I suspect it's wrong.
>> (3) handle_cow_pte_fault() looks quite complicated and needs quite
>>     some double-checking: we temporarily clear the PMD, to reset it
>>     afterwards. I am not sure that is correct. For example, what stops
>>     another page fault stumbling over that pmd_none() and allocating
>>     an empty page table? Maybe there are some locking details missing,
>>     or they are so subtle that we had better document them. I recall
>>     that THP played quite some tricks to make such cases work ...
>>
>>>
>>>>> Actually, in RFC v1 and v2, we proposed a version that skipped that
>>>>> work, and we got a significant improvement. You can see the numbers
>>>>> in the RFC v2 cover letter [1]:
>>>>> "In short, with 512 MB mapped memory, COW PTE decreases latency by
>>>>> 93% for normal fork"
>>>>
>>>> I suspect the 93% improvement (when the mapcount was not updated)
>>>> was only for VAs with 4K pages. With 2M mappings this series did not
>>>> provide any benefit, is this correct?
>>>
>>> Yes. In this case, the COW PTE performance is similar to the normal
>>> fork().
>>
>>
>> The thing with THP is that during fork(), we always allocate a backup
>> PTE table, to be able to PTE-map the THP whenever we have to.
>> Otherwise we'd eventually have to fail some operations we don't want
>> to fail -- similar to the case where break_cow_pte() could now fail
>> due to -ENOMEM although we really don't want to fail (e.g.,
>> change_pte_range()).
>>
>> I always considered that wasteful, because in many scenarios, we'll
>> never ever split a THP and possibly waste memory.
>
> When you say "split THP", do you mean splitting the compound page into
> base pages? IIUC the backup PTE table page is used to guarantee that
> the PMD split (just converting a PMD-mapped THP to PTE-mapped, without
> splitting the compound page) succeeds. You may already have noticed
> that there is no return value for PMD split.

Yes, as I raised in my other reply.

>
> The PMD split may be called quite often, for example, MADV_DONTNEED,
> mbind, mlock, and even in memory reclamation context (THP swap).

Yes, but with a single MADV_DONTNEED call you cannot PTE-map more than
2 THPs (all other overlapped THPs will get zapped). Same with most
other operations. There are corner cases, though. I recall that
s390x/kvm wants to break all THPs in a given VMA range. But that
operation could safely fail if we can't do that. Certainly needs some
investigation; that's most probably why it hasn't been done yet.
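To make that concrete, here is a hypothetical userspace sketch
(untested, not part of this series): MADV_DONTNEED over a range that
only partially covers the first and the last THP, so at most those two
ever have to be PTE-mapped.

#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>

#define THP_SIZE	(2UL << 20)
#define NR_THPS		8

int main(void)
{
	size_t len = NR_THPS * THP_SIZE;
	char *raw, *p;

	/* over-allocate so the region can be aligned to a THP boundary */
	raw = mmap(NULL, len + THP_SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED)
		return 1;
	p = (char *)(((unsigned long)raw + THP_SIZE - 1) & ~(THP_SIZE - 1));

	madvise(p, len, MADV_HUGEPAGE);		/* ask for THPs */
	memset(p, 1, len);			/* fault everything in */

	/*
	 * Zap from the middle of the first THP to the middle of the last
	 * one: at most those two boundary THPs need a PMD split (and thus
	 * a PTE table); the six fully covered THPs in between are simply
	 * zapped as whole huge pages.
	 */
	madvise(p + THP_SIZE / 2, len - THP_SIZE, MADV_DONTNEED);
	return 0;
}

The backup PTE tables allocated for the fully covered THPs never get
used for PTE-mapping here, which is the waste being discussed.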
>
>>
>> Optimizing that for THP (e.g., don't always allocate the backup PTE
>> table, have some global allocation backup pool for splits + refill
>> when close-to-empty) might provide similar fork() improvements, both
>> in speed and memory consumption, when it comes to anonymous memory.
>
> It might work. But it may be much more complicated than what you
> thought when handling multiple parallel PMD splits.

I consider the whole PTE-table linking to THPs complicated enough to
eventually replace it by something differently complicated that wastes
less memory ;)

--
Thanks,

David / dhildenb