Subject: Re: [PATCH v4 12/17] mm: remember exclusively mapped anonymous pages with PG_anon_exclusive
From: Miaohe Lin <linmiaohe@huawei.com>
To: David Hildenbrand
Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes, Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz, Mike Rapoport, Yang Shi, "Kirill A. Shutemov", Matthew Wilcox, Vlastimil Babka, Jann Horn, Michal Hocko, Nadav Amit, Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu, Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara, Liang Zhang, Pedro Gomes, Oded Gabbay
Date: Tue, 6 Dec 2022 19:28:45 +0800
Message-ID: <954b0bd3-bf7d-5c4b-5d76-8ac13b5ee8ac@huawei.com>
References: <20220428083441.37290-1-david@redhat.com> <20220428083441.37290-13-david@redhat.com> <90dd6a93-4500-e0de-2bf0-bf522c311b0c@huawei.com> <3c7fd5da-b3f8-5562-45a9-f83d7dbcdd7d@redhat.com>

On 2022/12/6 17:40, David Hildenbrand wrote:
> On 06.12.22 10:37, Miaohe Lin wrote:
>> On 2022/12/6 16:43, David Hildenbrand wrote:
>>>>>
>>>>
>>>> Hi David, sorry for the late response and a possible inconsequential question. :)
>>>
>>> Better late than never! Thanks for the review, independently of when it happens :)
>>>>
>>>>
>>>>
>>>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>>>> index 7a71ed679853..5add8bbd47cd 100644
>>>>> --- a/mm/hugetlb.c
>>>>> +++ b/mm/hugetlb.c
>>>>> @@ -4772,7 +4772,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
>>>>>                        is_hugetlb_entry_hwpoisoned(entry))) {
>>>>>                swp_entry_t swp_entry = pte_to_swp_entry(entry);
>>>>>
>>>>> -            if (is_writable_migration_entry(swp_entry) && cow) {
>>>>> +            if (!is_readable_migration_entry(swp_entry) && cow) {
>>>>>                    /*
>>>>>                     * COW mappings require pages in both
>>>>>                     * parent and child to be set to read.
>>>>> @@ -5172,6 +5172,8 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
>>>>>            set_huge_ptep_writable(vma, haddr, ptep);
>>>>>            return 0;
>>>>>        }
>>>>> +    VM_BUG_ON_PAGE(PageAnon(old_page) && PageAnonExclusive(old_page),
>>>>> +               old_page);
>>>>>
>>>>>        /*
>>>>>         * If the process that created a MAP_PRIVATE mapping is about to
>>>>> @@ -6169,12 +6171,17 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
>>>>>            }
>>>>>            if (unlikely(is_hugetlb_entry_migration(pte))) {
>>>>>                swp_entry_t entry = pte_to_swp_entry(pte);
>>>>> +            struct page *page = pfn_swap_entry_to_page(entry);
>>>>>
>>>>> -            if (is_writable_migration_entry(entry)) {
>>>>> +            if (!is_readable_migration_entry(entry)) {
>>>>
>>>> In hugetlb_change_protection(), is_writable_migration_entry() is changed to !is_readable_migration_entry(),
>>>> but
>>>>
>>>>>                    pte_t newpte;
>>>>>
>>>>> -                entry = make_readable_migration_entry(
>>>>> -                            swp_offset(entry));
>>>>> +                if (PageAnon(page))
>>>>> +                    entry = make_readable_exclusive_migration_entry(
>>>>> +                                swp_offset(entry));
>>>>> +                else
>>>>> +                    entry = make_readable_migration_entry(
>>>>> +                                swp_offset(entry));
>>>>>                    newpte = swp_entry_to_pte(entry);
>>>>>                    set_huge_swap_pte_at(mm, address, ptep,
>>>>>                                 newpte, huge_page_size(h));
>>>>
>>>>
>>>>
>>>>> diff --git a/mm/mprotect.c b/mm/mprotect.c
>>>>> index b69ce7a7b2b7..56060acdabd3 100644
>>>>> --- a/mm/mprotect.c
>>>>> +++ b/mm/mprotect.c
>>>>> @@ -152,6 +152,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>>>>>                pages++;
>>>>>            } else if (is_swap_pte(oldpte)) {
>>>>>                swp_entry_t entry = pte_to_swp_entry(oldpte);
>>>>> +            struct page *page = pfn_swap_entry_to_page(entry);
>>>>>                pte_t newpte;
>>>>>
>>>>>                if (is_writable_migration_entry(entry)) {
>>>>
>>>> In change_pte_range(), is_writable_migration_entry() is not changed to !is_readable_migration_entry().
>>>
>>> Yes, and also in change_huge_pmd(), is_writable_migration_entry() stays unchanged.
>>>
>>>> Is this done intentionally? Could you tell me why there's such a difference? I'm confused. It's very
>>>> kind of you if you can answer my puzzle.
>>>
>>> For change protection, the only relevant part is to convert writable -> readable or writable -> readable_exclusive.
>>>
>>> If an entry is already readable or readable_exclusive, there is nothing to do. The only issues would be when turning a readable one into a readable_exclusive one or a readable_exclusive one into a readable one.
>>>
>>>
>>> In hugetlb_change_protection(), the "!is_readable_migration_entry" could in fact be turned into a "is_writable_migration_entry()". Right now, it would convert writable -> readable or writable -> readable_exclusive AND readable -> readable AND readable_exclusive -> readable_exclusive, which isn't necessary but also shouldn't hurt either.
>>
>> Many thanks for your explanation. It's really helpful. :)
>>
>>>
>>>
>>> So yeah, it's not consistent but shouldn't be problematic. Do you see an issue with that?
>>
>> No, I don't see any issue with that.
>> I just wonder whether we can change "!is_readable_migration_entry" to "is_writable_migration_entry()" to make the code
>> more consistent and avoid possible future confusion. Also, we can then remove this harmless but unnecessary migration entry conversion. But this should
>> be a separate cleanup patch anyway.
>
> Want to send a patch? :)

Queued in my todo list. ;)

Thanks!
Miaohe Lin
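
For reference, a rough and untested sketch of the cleanup queued above (switching the check in hugetlb_change_protection() to is_writable_migration_entry() so it matches change_pte_range() and change_huge_pmd()) might look like the following. The context lines are taken from the hunk quoted earlier and the hunk offsets are omitted; the eventual patch may well differ:

--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ ... @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 		if (unlikely(is_hugetlb_entry_migration(pte))) {
 			swp_entry_t entry = pte_to_swp_entry(pte);
 			struct page *page = pfn_swap_entry_to_page(entry);

-			if (!is_readable_migration_entry(entry)) {
+			/*
+			 * Only writable entries need to be converted; readable and
+			 * readable_exclusive entries already have the permissions we
+			 * want, so skip the no-op conversion for them.
+			 */
+			if (is_writable_migration_entry(entry)) {
 				pte_t newpte;

Functionally this behaves the same for the writable entries that actually need converting; as David notes above, the only difference is that the redundant readable -> readable and readable_exclusive -> readable_exclusive conversions are skipped.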