Date: Wed, 21 Feb 2024 10:20:58 +0800
From: Peng Zhang
To: David Hildenbrand
Cc: maple-tree@lists.infradead.org, linux-mm@kvack.org,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, brauner@kernel.org,
 michael.christie@oracle.com, npiggin@gmail.com, Peng Zhang,
 corbet@lwn.net, Liam.Howlett@oracle.com, willy@infradead.org,
 surenb@google.com, mjguzik@gmail.com, mathieu.desnoyers@efficios.com,
 peterz@infradead.org, oliver.sang@intel.com, akpm@linux-foundation.org,
 mst@redhat.com
Subject: Re: [PATCH v7 10/10] fork: Use __mt_dup() to duplicate maple tree in dup_mmap()
References: <20231027033845.90608-1-zhangpeng.00@bytedance.com>
 <20231027033845.90608-11-zhangpeng.00@bytedance.com>
 <6058742c-26e5-4600-85ad-0a21d8fd2e42@redhat.com>
On 2024/2/21 01:31, David Hildenbrand wrote:
> On 20.02.24 18:24, David Hildenbrand wrote:
>> On 27.10.23 05:38, Peng Zhang wrote:
>>> In dup_mmap(), using __mt_dup() to duplicate the old maple tree and
>>> then directly replacing the entries of VMAs in the new maple tree
>>> can result in better performance. __mt_dup() uses DFS pre-order to
>>> duplicate the maple tree, so it is efficient.
>>>
>>> The average time complexity of __mt_dup() is O(n), where n is the
>>> number of VMAs. The proof of the time complexity is provided in the
>>> commit log that introduces __mt_dup(). After duplicating the maple
>>> tree, each element is traversed and replaced (ignoring the rare
>>> cases of deletion). Since this is only a replacement operation for
>>> each element, the replacement pass is also O(n).
>>>
>>> Analyzing the exact time complexity of the previous algorithm is
>>> challenging because each insertion can involve appending to a node,
>>> pushing data to adjacent nodes, or even splitting nodes. The
>>> frequency of each action is difficult to calculate. The worst case
>>> for a single insertion is a split at every level of the tree. If we
>>> treat every insertion as this worst case, the upper bound of the
>>> time complexity is O(n*log(n)), although this is a loose bound.
>>> However, based on the test data, the actual time complexity appears
>>> to be close to O(n).
>>>
>>> As the entire maple tree is duplicated using __mt_dup(), if
>>> dup_mmap() fails, there will be a portion of VMAs that have not
>>> been duplicated in the maple tree. To handle this, we mark the
>>> failure point with XA_ZERO_ENTRY. In exit_mmap(), when this marker
>>> is encountered, releasing stops there, since the VMAs beyond that
>>> point were never duplicated.
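
To make the failure handling above concrete, here is a rough sketch of
the idea. It is illustrative only: the function names and the loop
shape are simplified for this email and are not the exact patch code,
though XA_ZERO_ENTRY, xa_is_zero() and the mas_* calls are the real
APIs from linux/xarray.h and linux/maple_tree.h.

#include <linux/maple_tree.h>
#include <linux/xarray.h>

/* Sketch: on dup_mmap() failure, overwrite the failed range with the
 * marker entry so the teardown path knows where duplication stopped.
 */
static void mark_dup_failure(struct ma_state *mas,
			     unsigned long start, unsigned long end)
{
	mas_set_range(mas, start, end - 1);
	mas_store(mas, XA_ZERO_ENTRY);
}

/* Sketch: the teardown loop (e.g. in exit_mmap()) then stops as soon
 * as it sees the marker, releasing only the duplicated VMAs.
 */
static void release_duplicated(struct ma_state *mas)
{
	struct vm_area_struct *vma;

	mas_for_each(mas, vma, ULONG_MAX) {
		if (xa_is_zero(vma))
			break;	/* nothing beyond this point was duplicated */
		/* ... release the duplicated VMA as usual ... */
	}
}
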
>>>
>>> There is a "spawn" test in byte-unixbench[1], which can be used to
>>> test the performance of fork(). I modified it slightly to make it
>>> work with different numbers of VMAs.
>>>
>>> Below are the test results. The first row shows the number of VMAs.
>>> The second and third rows show the number of fork() calls per ten
>>> seconds for next-20231006 and for this patchset, respectively; the
>>> fourth row shows the relative improvement. The results were
>>> obtained with CPU binding to avoid scheduler load balancing, which
>>> could cause unstable numbers. There are still some fluctuations in
>>> the results, but at least they are better than the original
>>> performance.
>>>
>>> 21     121   221    421    821    1621   3221   6421   12821  25621  51221
>>> 112100 76261 54227  34035  20195  11112  6017   3161   1606   802    393
>>> 114558 83067 65008  45824  28751  16072  8922   4747   2436   1233   599
>>> 2.19%  8.92% 19.88% 34.64% 42.37% 44.64% 48.28% 50.17% 51.68% 53.74% 52.42%
>>>
>>> [1] https://github.com/kdlucas/byte-unixbench/tree/master
>>>
>>> Signed-off-by: Peng Zhang
>>> Suggested-by: Liam R. Howlett
>>> Reviewed-by: Liam R. Howlett
>>> ---
>>>   include/linux/mm.h | 11 +++++++++++
>>>   kernel/fork.c      | 40 +++++++++++++++++++++++++++++-----------
>>>   mm/internal.h      | 11 -----------
>>>   mm/memory.c        |  7 ++++++-
>>>   mm/mmap.c          |  9 ++++++---
>>>   5 files changed, 52 insertions(+), 26 deletions(-)
>>>
>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index 14d5aaff96d0..e9111ec5808c 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -996,6 +996,17 @@ static inline int vma_iter_bulk_alloc(struct vma_iterator *vmi,
>>>  	return mas_expected_entries(&vmi->mas, count);
>>>  }
>>>
>>> +static inline int vma_iter_clear_gfp(struct vma_iterator *vmi,
>>> +			unsigned long start, unsigned long end, gfp_t gfp)
>>> +{
>>> +	__mas_set_range(&vmi->mas, start, end - 1);
>>> +	mas_store_gfp(&vmi->mas, NULL, gfp);
>>> +	if (unlikely(mas_is_err(&vmi->mas)))
>>> +		return -ENOMEM;
>>> +
>>> +	return 0;
>>> +}
>>> +
>>>  /* Free any unused preallocations */
>>>  static inline void vma_iter_free(struct vma_iterator *vmi)
>>>  {
>>> diff --git a/kernel/fork.c b/kernel/fork.c
>>> index 1e6c656e0857..1552ee66517b 100644
>>> --- a/kernel/fork.c
>>> +++ b/kernel/fork.c
>>> @@ -650,7 +650,6 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
>>>  	int retval;
>>>  	unsigned long charge = 0;
>>>  	LIST_HEAD(uf);
>>> -	VMA_ITERATOR(old_vmi, oldmm, 0);
>>>  	VMA_ITERATOR(vmi, mm, 0);
>>>
>>>  	uprobe_start_dup_mmap();
>>> @@ -678,16 +677,22 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
>>>  		goto out;
>>>  	khugepaged_fork(mm, oldmm);
>>>
>>> -	retval = vma_iter_bulk_alloc(&vmi, oldmm->map_count);
>>> -	if (retval)
>>> +	/* Use __mt_dup() to efficiently build an identical maple tree. */
>>> +	retval = __mt_dup(&oldmm->mm_mt, &mm->mm_mt, GFP_KERNEL);
>>> +	if (unlikely(retval))
>>>  		goto out;
>>>
>>>  	mt_clear_in_rcu(vmi.mas.tree);
>>> -	for_each_vma(old_vmi, mpnt) {
>>> +	for_each_vma(vmi, mpnt) {
>>>  		struct file *file;
>>>
>>>  		vma_start_write(mpnt);
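
A side note for anyone reading the diff out of context:
vma_iter_clear_gfp() above is not new logic. Judging by the diffstat,
it is moved from mm/internal.h into include/linux/mm.h, presumably so
kernel/fork.c can use it as well. A hypothetical caller clears a VMA
range like this (the surrounding names are illustrative only):

	VMA_ITERATOR(vmi, mm, 0);

	/* Wipe [start, end) from mm's maple tree; may allocate tree nodes. */
	if (vma_iter_clear_gfp(&vmi, start, end, GFP_KERNEL))
		return -ENOMEM;	/* mas_store_gfp() failed to allocate */
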
>>
>> We used to call vma_start_write() on the *old* VMA, to prevent any
>> kind of page faults in the old MM while we are duplicating PTEs (and
>> COW-sharing pages).
>>
>> See
>>
>> commit fb49c455323ff8319a123dd312be9082c49a23a5
>> Author: Suren Baghdasaryan
>> Date:   Sat Jul 8 12:12:12 2023 -0700
>>
>>     fork: lock VMAs of the parent process when forking
>>
>>     When forking a child process, the parent write-protects anonymous
>>     pages and COW-shares them with the child being forked using
>>     copy_present_pte().
>>
>>     We must not take any concurrent page faults on the source vma's
>>     as they are being processed, as we expect both the vma and the
>>     pte's behind it to be stable. For example, the anon_vma_fork()
>>     expects the parent's vma->anon_vma to not change during the vma
>>     copy.
>>
>> Unless I am missing something, we now call vma_start_write() on the
>> *new* VMA?
>>
>> If that is the case, this is broken and needs fixing; likely by
>> going over all VMAs in the old_mm and calling vma_start_write().
>>
>> But maybe there is some magic going on that I am missing :)
>
> ... likely the magic is that the new tree links the same VMAs (we are
> not duplicating the VMAs before vm_area_dup()), so we are indeed
> locking the VMAs in the old_mm (that is temporarily linked into the
> new MM).

Thanks for the reminder. Yes, the VMAs in the tree built via __mt_dup()
are the same objects as those in the old tree, so there won't be a
problem here.

> If that's the case, all good :)
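
To spell the invariant out once more: right after __mt_dup(), the new
tree still holds pointers to the parent's vm_area_struct objects, and
each iteration of the copy loop replaces one of them with the child's
private copy. A heavily condensed view of the loop (error handling and
PTE copying omitted; treat it as a sketch, not the exact patch code):

	for_each_vma(vmi, mpnt) {
		vma_start_write(mpnt);		/* mpnt is still the parent's VMA */
		tmp = vm_area_dup(mpnt);	/* create the child's private copy */
		/* ... set up tmp: anon_vma, file, flags, etc. ... */
		vma_iter_bulk_store(&vmi, tmp);	/* swap the parent's entry for tmp */
	}

So by the time an entry is replaced, the parent's VMA has already been
write-locked, which is what the old code achieved with the separate
old_vmi iterator.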