From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yang Shi <shy828301@gmail.com>
Date: Tue, 14 Feb 2023 09:23:08 -0800
Subject: Re: [PATCH v4 00/14] Introduce Copy-On-Write to Page Table
In-Reply-To: <62c44d12-933d-ee66-ef50-467cd8d30a58@redhat.com>
References: <20230207035139.272707-1-shiyn.lin@gmail.com> <62c44d12-933d-ee66-ef50-467cd8d30a58@redhat.com>
To: David Hildenbrand
Cc: Chih-En Lin, Pasha Tatashin, Andrew Morton, Qi Zheng, "Matthew Wilcox (Oracle)", Christophe Leroy, John Hubbard, Nadav Amit, Barry Song, Steven Rostedt, Masami Hiramatsu, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Peter Xu, Vlastimil Babka, "Zach O'Keefe", Yun Zhou, Hugh Dickins, Suren Baghdasaryan, Yu Zhao, Juergen Gross, Tong Tiangen, Liu Shixin, Anshuman Khandual, Li kunyu, Minchan Kim, Miaohe Lin, Gautam Menghani, Catalin Marinas, Mark Brown, Will Deacon, Vincenzo Frascino, Thomas Gleixner, "Eric W. Biederman", Andy Lutomirski, Sebastian Andrzej Siewior, "Liam R. Howlett", Fenghua Yu, Andrei Vagin, Barret Rhoden, Michal Hocko, "Jason A. Donenfeld", Alexey Gladkov, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, Dinglan Peng, Pedro Fonseca, Jim Huang, Huichun Feng
Content-Type: text/plain; charset="UTF-8"

On Tue, Feb 14, 2023 at 1:58 AM David Hildenbrand wrote:
>
> On 10.02.23 18:20, Chih-En Lin wrote:
> > On Fri, Feb 10, 2023 at 11:21:16AM -0500, Pasha Tatashin wrote:
> >>>>> Currently, copy-on-write is only used for the mapped memory; the child
> >>>>> process still needs to copy the entire page table from the parent
> >>>>> process during forking. The parent process might take a lot of time and
> >>>>> memory to copy the page table when the parent has a big page table
> >>>>> allocated. For example, the memory usage of a process after forking with
> >>>>> 1 GB mapped memory is as follows:
> >>>>
> >>>> For some reason, I was not able to reproduce performance improvements
> >>>> with a simple fork() performance measurement program. The results that
> >>>> I saw are the following:
> >>>>
> >>>> Base:
> >>>> Fork latency per gigabyte: 0.004416 seconds
> >>>> Fork latency per gigabyte: 0.004382 seconds
> >>>> Fork latency per gigabyte: 0.004442 seconds
> >>>> COW kernel:
> >>>> Fork latency per gigabyte: 0.004524 seconds
> >>>> Fork latency per gigabyte: 0.004764 seconds
> >>>> Fork latency per gigabyte: 0.004547 seconds
> >>>>
> >>>> AMD EPYC 7B12 64-Core Processor
> >>>> Base:
> >>>> Fork latency per gigabyte: 0.003923 seconds
> >>>> Fork latency per gigabyte: 0.003909 seconds
> >>>> Fork latency per gigabyte: 0.003955 seconds
> >>>> COW kernel:
> >>>> Fork latency per gigabyte: 0.004221 seconds
> >>>> Fork latency per gigabyte: 0.003882 seconds
> >>>> Fork latency per gigabyte: 0.003854 seconds
> >>>>
> >>>> Given that the page table for the child is not copied, I was expecting the
> >>>> performance to be better with the COW kernel, and also not to depend on
> >>>> the size of the parent.
> >>>
> >>> Yes, the child won't duplicate the page table, but fork will still
> >>> traverse all the page table entries to do the accounting.
> >>> And, since this patch extends the COW to the PTE table level, it's not
> >>> mapped-page (page table entry) grained anymore, so we have to
> >>> guarantee that all the mapped pages are available to do COW mapping in
> >>> such a page table.
> >>> This kind of checking also costs some time.
> >>> As a result, because of the accounting and the checking, the COW PTE fork
> >>> still depends on the size of the parent, so the improvement might not
> >>> be significant.
> >>
> >> The current version of the series does not provide any performance
> >> improvements for fork(). I would recommend removing claims from the
> >> cover letter about better fork() performance, as this may be
> >> misleading for those looking for a way to speed up forking. In my
> >
> > From v3 to v4, I changed the implementation of the COW fork() part to do
> > the accounting and checking. At that time, I also removed most of the
> > descriptions about the better fork() performance. Maybe it's not enough
> > and is still somewhat misleading. I will fix this in the next version.
> > Thanks.
> >
> >> case, I was looking to speed up Redis OSS, which relies on fork() to
> >> create consistent snapshots for driving replicas/backups. The O(N)
> >> per-page operation causes fork() to be slow, so I was hoping that this
> >> series, which does not duplicate the VA during fork(), would make the
> >> operation much quicker.
> >
> > Indeed, at first, I tried to avoid the O(N) per-page operation by
> > deferring the accounting and the swap stuff to the page fault. But,
> > as I mentioned, it's not suitable for the mainline.
> >
> > Honestly, for improving fork(), I have an idea to skip the per-page
> > operation without breaking the logic. However, this would introduce a
> > complicated mechanism and may add overhead for other features. It
> > might not be worth it.
> > It's hard to strike a balance between an
> > over-complicated mechanism with (probably) better performance and data
> > consistency with the page status. So, I would focus on the safe and
> > stable approach at first.
>
> Yes, it is most probably possible, but complexity, robustness and
> maintainability have to be considered as well.
>
> Thanks for implementing this approach (only deduplication without other
> optimizations) and evaluating it accordingly. It's certainly "cleaner",
> such that we only have to mess with unsharing and not with other
> accounting/pinning/mapcount thingies. But it also highlights how
> intrusive even this basic deduplication approach already is -- and that
> most benefits of the original approach require even more complexity on top.
>
> I am not quite sure if the benefit is worth the price (I am not the one to
> decide and I would like to hear other opinions).
>
> My quick thoughts after skimming over the core parts of this series:
>
> (1) Forgetting to break COW on a PTE in some pgtable walker feels quite
> likely (meaning that it might be fairly error-prone), as does forgetting
> to break COW on a PTE table and accidentally modifying the shared
> table.
> (2) break_cow_pte() can fail, which means that we can fail some
> operations (possibly silently halfway through) now. For example,
> looking at your change_pte_range() change, I suspect it's wrong.
> (3) handle_cow_pte_fault() looks quite complicated and needs quite some
> double-checking: we temporarily clear the PMD, to reset it
> afterwards. I am not sure that is correct. For example, what
> stops another page fault stumbling over that pmd_none() and
> allocating an empty page table? Maybe there are some locking details
> missing, or they are so subtle that we had better document them. I
> recall that THP played quite some tricks to make such cases work ...
>
>
> >>> Actually, at the RFC v1 and v2, we proposed the version of skipping
> >>> those works, and we got a significant improvement. You can see the
> >>> numbers in the RFC v2 cover letter [1]:
> >>> "In short, with 512 MB mapped memory, COW PTE decreases latency by 93%
> >>> for normal fork"
> >>
> >> I suspect the 93% improvement (when the mapcount was not updated) was
> >> only for VAs with 4K pages. With 2M mappings this series did not
> >> provide any benefit, is this correct?
> >
> > Yes. In this case, the COW PTE performance is similar to the normal
> > fork().
>
> The thing with THP is that, during fork(), we always allocate a backup
> PTE table, to be able to PTE-map the THP whenever we have to. Otherwise
> we'd have to eventually fail some operations we don't want to fail --
> similar to the case where break_cow_pte() could fail now due to -ENOMEM
> although we really don't want to fail (e.g., change_pte_range()).
>
> I always considered that wasteful, because in many scenarios, we'll
> never ever split a THP and possibly waste memory.

When you say "split THP", do you mean splitting the compound page into base
pages? IIUC, the backup PTE table page is used to guarantee that the PMD split
(just converting a PMD-mapped THP to PTE-mapped, without splitting the
compound page) succeeds. You may have already noticed that the PMD split has
no return value. The PMD split may be called quite often, for example, by
MADV_DONTNEED, mbind, mlock, and even in the memory reclamation context
(THP swap).

> Optimizing that for THP (e.g., don't always allocate the backup PTE table,
> have some global allocation backup pool for splits + refill when
> close-to-empty) might provide similar fork() improvements, both in speed
> and memory consumption when it comes to anonymous memory.

It might work. But it may be much more complicated than you might think when
handling multiple parallel PMD splits.

> --
> Thanks,
>
> David / dhildenb
>