Message-ID: <2868f2d5-1abe-7af6-4196-ee53cfae76a9@bytedance.com>
Date: Fri, 15 Sep 2023 18:51:04 +0800
Subject: Re: [PATCH v2 6/6] fork: Use __mt_dup() to duplicate maple tree in dup_mmap()
From: Peng Zhang
To: "Liam R. Howlett", Peng Zhang, corbet@lwn.net, akpm@linux-foundation.org,
 willy@infradead.org, brauner@kernel.org, surenb@google.com,
 michael.christie@oracle.com, peterz@infradead.org,
 mathieu.desnoyers@efficios.com, npiggin@gmail.com, avagin@gmail.com,
 linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-fsdevel@vger.kernel.org
References: <20230830125654.21257-1-zhangpeng.00@bytedance.com> <20230830125654.21257-7-zhangpeng.00@bytedance.com> <20230907201414.dagnqxfnu7f7qzxd@revolver>
In-Reply-To: <20230907201414.dagnqxfnu7f7qzxd@revolver>
On 2023/9/8 04:14, Liam R. Howlett wrote:
> * Peng Zhang [230830 08:58]:
>> Use __mt_dup() to duplicate the old maple tree in dup_mmap(), and then
>> directly modify the entries of VMAs in the new maple tree, which gives
>> better performance. The optimization effect is proportional to the
>> number of VMAs.
>>
>> There is a "spawn" test in byte-unixbench[1] which can be used to
>> measure the performance of fork(). I modified it slightly to make it
>> work with different numbers of VMAs.
>>
>> Below are the test numbers. There are 21 VMAs by default. The first row
>> indicates the number of added VMAs. The following two lines are the
>> number of fork() calls every 10 seconds. These numbers differ from the
>> test results in v1 because this time the benchmark is bound to a CPU,
>> which makes the numbers more stable.
>>
>> Increment of VMAs:   0      100     200     400     800     1600    3200    6400
>> 6.5.0-next-20230829: 111878 75531   53683   35282   20741   11317   6110    3158
>> Apply this patchset: 114531 85420   64541   44592   28660   16371   9038    4831
>>                      +2.37% +13.09% +20.23% +26.39% +38.18% +44.66% +47.92% +52.98%
>
> Thanks!
>
> Can you include 21 in this table since it's the default?
>
>>
>> [1] https://github.com/kdlucas/byte-unixbench/tree/master
>>
>> Signed-off-by: Peng Zhang
>> ---
>>  kernel/fork.c | 34 ++++++++++++++++++++++++++--------
>>  mm/mmap.c     | 14 ++++++++++++--
>>  2 files changed, 38 insertions(+), 10 deletions(-)
>>
>> diff --git a/kernel/fork.c b/kernel/fork.c
>> index 3b6d20dfb9a8..e6299adefbd8 100644
>> --- a/kernel/fork.c
>> +++ b/kernel/fork.c
>> @@ -650,7 +650,6 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
>>  	int retval;
>>  	unsigned long charge = 0;
>>  	LIST_HEAD(uf);
>> -	VMA_ITERATOR(old_vmi, oldmm, 0);
>>  	VMA_ITERATOR(vmi, mm, 0);
>>
>>  	uprobe_start_dup_mmap();
>> @@ -678,17 +677,39 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
>>  		goto out;
>>  	khugepaged_fork(mm, oldmm);
>>
>> -	retval = vma_iter_bulk_alloc(&vmi, oldmm->map_count);
>> -	if (retval)
>> +	/* Use __mt_dup() to efficiently build an identical maple tree. */
>> +	retval = __mt_dup(&oldmm->mm_mt, &mm->mm_mt, GFP_NOWAIT | __GFP_NOWARN);
>
> Apparently the flags should be GFP_KERNEL here so that compaction can
> run.
>
>> +	if (unlikely(retval))
>>  		goto out;
>>
>>  	mt_clear_in_rcu(vmi.mas.tree);
>> -	for_each_vma(old_vmi, mpnt) {
>> +	for_each_vma(vmi, mpnt) {
>>  		struct file *file;
>>
>>  		vma_start_write(mpnt);
>>  		if (mpnt->vm_flags & VM_DONTCOPY) {
>>  			vm_stat_account(mm, mpnt->vm_flags, -vma_pages(mpnt));
>> +
>> +			/*
>> +			 * Since the new tree is exactly the same as the old one,
>> +			 * we need to remove the unneeded VMAs.
>> +			 */
>> +			mas_store(&vmi.mas, NULL);
>> +
>> +			/*
>> +			 * Even removing an entry may require memory allocation,
>> +			 * and if removal fails, we use XA_ZERO_ENTRY to mark
>> +			 * from which VMA it failed. The case of encountering
>> +			 * XA_ZERO_ENTRY will be handled in exit_mmap().
>> +			 */
>> +			if (unlikely(mas_is_err(&vmi.mas))) {
>> +				retval = xa_err(vmi.mas.node);
>> +				mas_reset(&vmi.mas);
>> +				if (mas_find(&vmi.mas, ULONG_MAX))
>> +					mas_store(&vmi.mas, XA_ZERO_ENTRY);
>> +				goto loop_out;
>> +			}
>> +
>
> Storing NULL may need extra space as you noted, so we need to be careful
> what happens if we don't have that space. We should have a testcase to
> test this scenario.
>
> mas_store_gfp() should be used with GFP_KERNEL. The VMAs use GFP_KERNEL
> in this function, see vm_area_dup().
>
> Don't use the exit_mmap() path to undo a failed fork. You've added
> checks and complications to the exit path for all tasks in the very
> unlikely event that we run out of memory when we hit a very unlikely
> VM_DONTCOPY flag.
>
> I see the issue with having a portion of the tree with new VMAs that are
> accounted and a portion of the tree that has old VMAs that should not be
> looked at. It was clever to use the XA_ZERO_ENTRY as a stop point, but
> we cannot add that complication to the exit path, and then there is the
> OOM race to worry about (maybe; I am not sure, since this MM isn't
> active yet).

I encountered some errors after implementing the scheme you mentioned
below. It would also clutter fork.c and mmap.c, as some internal
functions would need to be made global.

I thought of another way to put everything into the maple tree. In
non-RCU mode, we can remove the last half of the tree without allocating
any memory. This requires modifications to the internal implementation
of mas_store(). Then we can remove the second half of the tree like
this:

	mas.index = 0;
	mas.last = ULONG_MAX;
	mas_store(&mas, NULL);

At least in non-RCU mode we can do this, since we only need to merge
some nodes, or move some items to adjacent nodes. However, this will
increase the workload significantly.
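For concreteness, the wide store described above would look roughly like
the sketch below. This is written against the kernel-internal maple tree
API and is not runnable standalone; mt_wipe_tail is a hypothetical
helper name, and the sketch presumes the *proposed* allocation-free
behaviour of mas_store() in non-RCU mode, which does not exist today.

```c
/*
 * Hedged sketch: wipe every entry from `from` to the end of the tree
 * with one wide NULL store. This relies on the proposed property that,
 * in non-RCU mode, mas_store() can perform such a removal purely by
 * merging nodes or shifting items into neighbours, without allocating.
 */
static void mt_wipe_tail(struct ma_state *mas, unsigned long from)
{
	mas_set_range(mas, from, ULONG_MAX);
	mas_store(mas, NULL);
}
```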
>
> Using what is done in exit_mmap() and do_vmi_align_munmap() as a
> prototype, we can do something like the *untested* code below:
>
> 	if (unlikely(mas_is_err(&vmi.mas))) {
> 		unsigned long max = vmi.index;
>
> 		retval = xa_err(vmi.mas.node);
> 		mas_set(&vmi.mas, 0);
> 		tmp = mas_find(&vmi.mas, ULONG_MAX);
> 		if (tmp) { /* Not the first VMA failed */
> 			unsigned long nr_accounted = 0;
>
> 			unmap_region(mm, &vmi.mas, vma, NULL, mpnt, 0, max,
> 				     max, true);
> 			do {
> 				if (vma->vm_flags & VM_ACCOUNT)
> 					nr_accounted += vma_pages(vma);
> 				remove_vma(vma, true);
> 				cond_resched();
> 				vma = mas_find(&vmi.mas, max - 1);
> 			} while (vma != NULL);
>
> 			vm_unacct_memory(nr_accounted);
> 		}
> 		__mt_destroy(&mm->mm_mt);
> 		goto loop_out;
> 	}
>
> Once exit_mmap() is called, the check for OOM (no vma) will catch that
> nothing is left to do.
>
> It might be worth making an inline function to do this to keep the fork
> code clean. We should test this by detecting a specific task name and
> returning a failure at a given interval:
>
> 	if (!strcmp(current->comm, "fork_test")) {
> 		...
> 	}
>
>
>>  			continue;
>>  		}
>>  		charge = 0;
>> @@ -750,8 +771,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
>>  		hugetlb_dup_vma_private(tmp);
>>
>>  		/* Link the vma into the MT */
>> -		if (vma_iter_bulk_store(&vmi, tmp))
>> -			goto fail_nomem_vmi_store;
>> +		mas_store(&vmi.mas, tmp);
>>
>>  		mm->map_count++;
>>  		if (!(tmp->vm_flags & VM_WIPEONFORK))
>> @@ -778,8 +798,6 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
>>  	uprobe_end_dup_mmap();
>>  	return retval;
>>
>> -fail_nomem_vmi_store:
>> -	unlink_anon_vmas(tmp);
>>  fail_nomem_anon_vma_fork:
>>  	mpol_put(vma_policy(tmp));
>>  fail_nomem_policy:
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index b56a7f0c9f85..dfc6881be81c 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -3196,7 +3196,11 @@ void exit_mmap(struct mm_struct *mm)
>>  	arch_exit_mmap(mm);
>>
>>  	vma = mas_find(&mas, ULONG_MAX);
>> -	if (!vma) {
>> +	/*
>> +	 * If dup_mmap() fails to remove a VMA marked VM_DONTCOPY,
>> +	 * xa_is_zero(vma) may be true.
>> +	 */
>> +	if (!vma || xa_is_zero(vma)) {
>>  		/* Can happen if dup_mmap() received an OOM */
>>  		mmap_read_unlock(mm);
>>  		return;
>> @@ -3234,7 +3238,13 @@ void exit_mmap(struct mm_struct *mm)
>>  		remove_vma(vma, true);
>>  		count++;
>>  		cond_resched();
>> -	} while ((vma = mas_find(&mas, ULONG_MAX)) != NULL);
>> +		vma = mas_find(&mas, ULONG_MAX);
>> +		/*
>> +		 * If xa_is_zero(vma) is true, it means that subsequent VMAs
>> +		 * do not need to be removed. Can happen if dup_mmap() fails
>> +		 * to remove a VMA marked VM_DONTCOPY.
>> +		 */
>> +	} while (vma != NULL && !xa_is_zero(vma));
>>
>>  	BUG_ON(count != mm->map_count);
>>
>> --
>> 2.20.1
>>
>
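As a footnote on the testing suggestion above: the interval-based
failure check can be mocked up in userspace along these lines. Here
maybe_fail and all parameter names are illustrative, not kernel code;
only the current->comm / "fork_test" comparison comes from the
suggestion itself.

```c
#include <string.h>

/*
 * Fail every `interval`-th call, the way the suggested current->comm
 * check would inject allocation failures into dup_mmap() for a task
 * named "fork_test". Returns 1 on an injected failure, 0 otherwise.
 */
int maybe_fail(const char *comm, int *calls, int interval)
{
	/* Only inject failures for the designated test task. */
	if (strcmp(comm, "fork_test") != 0)
		return 0;
	/* Fail on every `interval`-th call. */
	return (++*calls % interval) == 0;
}
```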