From: Joel Fernandes <joel@joelfernandes.org>
Date: Mon, 19 Jun 2023 11:55:08 -0400
Subject: Re: [PATCH v4 1/7] mm/mremap: Optimize the start addresses in move_page_tables()
To: Lorenzo Stoakes
Cc: linux-kernel@vger.kernel.org, Linus Torvalds, linux-kselftest@vger.kernel.org, linux-mm@kvack.org, Shuah Khan, Vlastimil Babka, Michal Hocko, Kirill A Shutemov, "Liam R. Howlett", "Paul E. McKenney", Suren Baghdasaryan, Kalesh Singh, Lokesh Gidra, Vineeth Pillai
References: <20230531220807.2048037-1-joel@joelfernandes.org> <20230531220807.2048037-2-joel@joelfernandes.org>
Hi Lorenzo,

Thanks for the review! I replied below:

On 6/17/23 18:49, Lorenzo Stoakes wrote:
> On Wed, May 31, 2023 at 10:08:01PM +0000, Joel Fernandes (Google) wrote:
>> Recently, we see reports [1] of a warning that triggers due to
>> move_page_tables() doing a downward and overlapping move on a
>> mutually-aligned offset within a PMD. By mutual alignment, I
>> mean the source and destination addresses of the mremap are at
>> the same offset within a PMD.
>>
>> This mutual alignment along with the fact that the move is downward is
>> sufficient to cause a warning related to having an allocated PMD that
>> does not have PTEs in it.
>>
>> This warning will only trigger when there is mutual alignment in the
>> move operation. A solution, as suggested by Linus Torvalds [2], is to
>> initiate the copy process at the PMD level whenever such alignment is
>> present. Implementing this approach will not only prevent the warning
>> from being triggered, but it will also optimize the operation as this
>> method should enhance the speed of the copy process whenever there's a
[...]
>> Suggested-by: Linus Torvalds
>> Signed-off-by: Joel Fernandes (Google)
>> ---
>>  mm/mremap.c | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 61 insertions(+)
>>
>> diff --git a/mm/mremap.c b/mm/mremap.c
>> index 411a85682b58..bf355e4d6bd4 100644
>> --- a/mm/mremap.c
>> +++ b/mm/mremap.c
>> @@ -478,6 +478,51 @@ static bool move_pgt_entry(enum pgt_entry entry, struct
>>  	return moved;
>>  }
>>
>> +/*
>> + * A helper to check if a previous mapping exists. Required for
>> + * move_page_tables() and realign_addr() to determine if a previous mapping
>> + * exists before we can do realignment optimizations.
>> + */
>> +static bool can_align_down(struct vm_area_struct *vma, unsigned long addr_to_align,
>> +			   unsigned long mask)
>> +{
>> +	unsigned long addr_masked = addr_to_align & mask;
>> +	struct vm_area_struct *prev = NULL, *cur = NULL;
>> +
>> +	/*
>> +	 * If @addr_to_align of either source or destination is not the beginning
>> +	 * of the corresponding VMA, we can't align down or we will destroy part
>> +	 * of the current mapping.
>> +	 */
>> +	if (vma->vm_start != addr_to_align)
>> +		return false;
>
> See below, I think we can eliminate this check.
>
>> +
>> +	/*
>> +	 * Find the VMA before @vma to see if it subsumes the masked address.
>> +	 * The mmap write lock is held here so the lookup is safe.
>> +	 */
>> +	cur = find_vma_prev(vma->vm_mm, vma->vm_start, &prev);
>> +	if (WARN_ON_ONCE(cur != vma))
>> +		return false;
>> +
>> +	return !prev || prev->vm_end <= addr_masked;
>
> This is a bit clunky, and I don't think we need the WARN_ON_ONCE() check if
> we're under the mmap_lock.
>
> How about something like:-
>
>	return find_vma_intersection(vma->vm_mm, addr_masked, vma->vm_start) == NULL;
>
> Which explicitly asserts that the range in [addr_masked, vma->vm_start) is
> empty.
>
> But actually, we should be able to go further and replace the previous
> check with:-
>
>	return find_vma_intersection(vma->vm_mm, addr_masked, addr_to_align) == NULL;
>
> Which will fail if addr_to_align is offset within the VMA.

Your suggestion would mean a full VMA search starting from the root, which would not be a nice thing if we have, say, thousands of VMAs. Actually, Liam told me to use find_vma_prev() because, given a VMA, the maple tree does not have to work very hard to find the previous VMA in the common case. From conversing with him, there is a chance we may have to go one step up in the tree if we hit the edge of a node, but that is not supposed to be the common case.
In the pre-maple-tree code, the previous VMA could be obtained directly through a "previous VMA" pointer; however, that pointer was removed with the maple tree changes, and, given a VMA, going to the previous one via the maple tree is just as fast (as I am told).

Considering this, I would keep the code as-is, and perhaps you/we could consider replacing it with another API in a subsequent patch, since it does the job for this patch.

>> +	unsigned long *new_addr, struct vm_area_struct *new_vma,
>> +	unsigned long mask)
>> +{
>> +	bool mutually_aligned = (*old_addr & ~mask) == (*new_addr & ~mask);
>> +
>> +	if ((*old_addr & ~mask) && mutually_aligned
>
> I may be misunderstanding something here, but doesn't the first condition
> here disallow for offset into PMD == 0?

Why? Because in such a situation the alignment is already done and there is nothing to align. The patch wants to align down to the PMD boundary, and we would not want to waste CPU cycles when there is nothing to do.

>> +	    && can_align_down(old_vma, *old_addr, mask)
>> +	    && can_align_down(new_vma, *new_addr, mask)) {
>> +		*old_addr = *old_addr & mask;
>> +		*new_addr = *new_addr & mask;
>> +	}
>> +}
>> +
>>  unsigned long move_page_tables(struct vm_area_struct *vma,
>>  	unsigned long old_addr, struct vm_area_struct *new_vma,
>>  	unsigned long new_addr, unsigned long len,
>> @@ -493,6 +538,15 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
>>
>>  	old_end = old_addr + len;
>>
>> +	/*
>> +	 * If possible, realign addresses to PMD boundary for faster copy.
>> +	 * Don't align for intra-VMA moves as we may destroy existing mappings.
>> +	 */
>> +	if ((vma != new_vma)
>
> Nit but these parens aren't needed.

Sure, I can drop the parens.

> Also if we're deferring the decision as to whether we realign to this
> function, why are we doing this check here and not there?

Hmm, well, the function name is realign_addr(), so I kept some of the initial checks outside of it where we should "obviously" not realign.
I could do what you are suggesting and change it to try_realign_addr() or something, and move those checks in there. That would be a bit better.

> It feels like it'd be neater to keep all the conditions (including the
> length one) together in one place.
>
>> +	    && (len >= PMD_SIZE - (old_addr & ~PMD_MASK))) {

Well, yeah, maybe. I'll look into it, thanks.

> You don't mention this condition in the above comment (if we have this
> altogether as part of the realign function we could comment separately
> there)

OK, sounds good -- I will add a comment with some of the explanation above.

> - so we only go ahead and do this optimisation if the length of the remap
> is such that the entirety of old_addr -> end of its PMD (and thus the same
> for new_addr) is copied?

Yes, correct. And in the future that could also be optimized (if, say, there is no subsequent mapping, we could copy the tail PMD as well -- however, one step at a time and all that).

> I may be missing something/being naive here, but can't we just do a similar
> check to the one done for space _below_ the VMA to see if [end, (end of
> PMD)) is equally empty?

We can, but silencing the warning that was triggering does not really need that. I am happy to do that in a later patch if needed, or you can. ;-) But I'd like to keep the risk low, since this was itself hard enough to get right.

>> +		realign_addr(&old_addr, vma, &new_addr, new_vma, PMD_MASK);
>> +	}
>> +
>>  	if (is_vm_hugetlb_page(vma))
>>  		return move_hugetlb_page_tables(vma, new_vma, old_addr,
>>  						new_addr, len);
>> @@ -565,6 +619,13 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
>>
>>  	mmu_notifier_invalidate_range_end(&range);
>>
>> +	/*
>> +	 * Prevent negative return values when {old,new}_addr was realigned
>> +	 * but we broke out of the above loop for the first PMD itself.
>> +	 */
>> +	if (len + old_addr < old_end)
>> +		return 0;
>> +
>
> I find this a little iffy, I mean I see that if you align [old,new]_addr to
> PMD, then from then on you're relying on the fact that the loop is just
> going from old_addr (now aligned) -> old_end and thus has the correct
> length.
>
> Can't we just fix this issue by correcting len? If you take my review above
> which checks len in [maybe_]realign_addr(), you could take that as a
> pointer and equally update that.
>
> Then you can drop this check.

The drawback of adjusting len is that it changes what callers of move_page_tables() were previously expecting. I think we should look at the return value of move_page_tables() together with len, not just len independently. len is what the caller requested; "len + old_addr - old_end" is how much was actually copied, and that is the return value. If everything was copied, old_addr == old_end and len is returned unchanged.

The callers of move_page_tables(), like move_vma(), should not care whether we copied a full PMD or not. In fact, telling them anything like that may cause problems with the interpretation of the return value, I think. They asked us to copy len; did we copy it? Hell yeah.

Note that after the first loop iteration's PMD copy, old_addr is at the PMD boundary, and from then on the function behaves exactly as it did without this patch: we end up doing PMD copies just like before. So this patch does not really change anything from before. The cases are:

1. If we realign and copy, yes, we copied a PMD, but really it was to satisfy the requested length. In this situation, "len + old_addr - old_end" is accurate, just like before: we copied whatever the user requested. Yes, we copied a little more, but who cares? We copied into a mapping that did not exist anyway. It would be absurd for us to return a len greater than the requested len, IMO.

2. If there are no errors (e.g., the first PMD copy did not fail), "len + old_addr - old_end" is identical to what it was without this patch, as it should be. That is true whether we realigned or not.

3. If we realigned and the first PMD copy failed (an unlikely error), that is where there is a problem: we would end up returning a negative value, because (old_end - old_addr) will be greater than len in such a situation, however unlikely. That is what Linus found and suggested correcting.

>>  	return len + old_addr - old_end;	/* how much done */
>>  }
>
> Also I am concerned in the hugetlb case -> len is passed to
> move_hugetlb_page_tables() which is now strictly incorrect, I wonder if
> this could cause an issue?
>
> Correcting len seems the neat way of addressing this.

That's a good point. I am wondering if we can just change that from:

	if (is_vm_hugetlb_page(vma))
		return move_hugetlb_page_tables(vma, new_vma, old_addr,
						new_addr, len);

to:

	if (is_vm_hugetlb_page(vma))
		return move_hugetlb_page_tables(vma, new_vma, old_addr,
						new_addr, old_addr - new_addr);

Or, another option is to turn it off for hugetlb by just moving:

	if (len >= PMD_SIZE - (old_addr & ~PMD_MASK))
		realign_addr(...);

to after:

	if (is_vm_hugetlb_page(vma))
		return move_hugetlb_page_tables(...);

thanks,

 - Joel