From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <5084ff1c-ebb3-f918-6a60-bacabf550a88@redhat.com>
Date: Sat, 24 Dec 2022 18:01:15 +0100
From: David Hildenbrand
Organization: Red Hat
Subject: Re: [PATCH v1 2/2] mm/mprotect: drop pgprot_t parameter from change_protection()
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, Peter Xu, Hugh Dickins, Andrea Arcangeli, Nadav Amit
References: <20221223155616.297723-1-david@redhat.com> <20221223155616.297723-3-david@redhat.com>
In-Reply-To: <20221223155616.297723-3-david@redhat.com>
On 23.12.22 16:56, David Hildenbrand wrote:
> Being able to provide a custom protection opens the door for
> inconsistencies and BUGs: for example, accidentally allowing for more
> permissions than desired by other mechanisms (e.g., softdirty tracking).
> vma->vm_page_prot should be the single source of truth.
>
> Only PROT_NUMA is special: there is no way we can erroneously allow
> for more permissions when removing all permissions. Special-case using
> the MM_CP_PROT_NUMA flag.
>
> Signed-off-by: David Hildenbrand
> ---
>  include/linux/mm.h |  3 +--
>  mm/mempolicy.c     |  3 +--
>  mm/mprotect.c      | 14 +++++++++++---
>  mm/userfaultfd.c   |  3 +--
>  4 files changed, 14 insertions(+), 9 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index f3f196e4d66d..b8be8c33ca20 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2148,8 +2148,7 @@ bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
>  			     pte_t pte);
>  extern unsigned long change_protection(struct mmu_gather *tlb,
>  			      struct vm_area_struct *vma, unsigned long start,
> -			      unsigned long end, pgprot_t newprot,
> -			      unsigned long cp_flags);
> +			      unsigned long end, unsigned long cp_flags);
>  extern int mprotect_fixup(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  			  struct vm_area_struct **pprev, unsigned long start,
>  			  unsigned long end, unsigned long newflags);
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 61aa9aedb728..c3f02703a710 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -635,8 +635,7 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
>
>  	tlb_gather_mmu(&tlb, vma->vm_mm);
>
> -	nr_updated = change_protection(&tlb, vma, addr, end, PAGE_NONE,
> -				       MM_CP_PROT_NUMA);
> +	nr_updated = change_protection(&tlb, vma, addr, end, MM_CP_PROT_NUMA);
>  	if (nr_updated)
>  		count_vm_numa_events(NUMA_PTE_UPDATES, nr_updated);
>
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 908df12caa26..569cefa668a6 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -503,13 +503,21 @@ static unsigned long change_protection_range(struct mmu_gather *tlb,
>
>  unsigned long change_protection(struct mmu_gather *tlb,
>  		struct vm_area_struct *vma, unsigned long start,
> -		unsigned long end, pgprot_t newprot,
> -		unsigned long cp_flags)
> +		unsigned long end, unsigned long cp_flags)
>  {
> +	pgprot_t newprot = vma->vm_page_prot;
>  	unsigned long pages;
>
>  	BUG_ON((cp_flags & MM_CP_UFFD_WP_ALL) == MM_CP_UFFD_WP_ALL);
>
> +	/*
> +	 * Ordinary protection updates (mprotect, uffd-wp, softdirty tracking)
> +	 * are expected to reflect their requirements via VMA flags such that
> +	 * vma_set_page_prot() will adjust vma->vm_page_prot accordingly.
> +	 */
> +	if (cp_flags & MM_CP_PROT_NUMA)
> +		newprot = PAGE_NONE;
> +
>  	if (is_vm_hugetlb_page(vma))
>  		pages = hugetlb_change_protection(vma, start, end, newprot,
>  						  cp_flags);
> @@ -638,7 +646,7 @@ mprotect_fixup(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  		mm_cp_flags |= MM_CP_TRY_CHANGE_WRITABLE;
>  	vma_set_page_prot(vma);
>
> -	change_protection(tlb, vma, start, end, vma->vm_page_prot, mm_cp_flags);
> +	change_protection(tlb, vma, start, end, mm_cp_flags);
>
>  	/*
>  	 * Private VM_LOCKED VMA becoming writable: trigger COW to avoid major
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index 351e8d6b398b..be7ee9d82e72 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -744,8 +744,7 @@ void uffd_wp_range(struct mm_struct *dst_mm, struct vm_area_struct *dst_vma,
>  	if (vma_wants_manual_pte_write_upgrade(dst_vma))
>  		mm_cp_flags |= MM_CP_TRY_CHANGE_WRITABLE;
>  	tlb_gather_mmu(&tlb, dst_mm);
> -	change_protection(&tlb, dst_vma, start, start + len, vma->vm_page_prot,
> -			  mm_cp_flags);
> +	change_protection(&tlb, dst_vma, start, start + len, mm_cp_flags);
>  	tlb_finish_mmu(&tlb);
>  }
>

The following fixes compilation errors when PAGE_NONE is not defined:


From a164d6cf728e353294aa9e65b8ead5241c800421 Mon Sep 17 00:00:00 2001
From: David Hildenbrand
Date: Sat, 24 Dec 2022 15:01:18 +0100
Subject: [PATCH] fixup: mm/mprotect: drop pgprot_t parameter from change_protection()

PAGE_NONE might not be defined without CONFIG_NUMA_BALANCING.

Signed-off-by: David Hildenbrand
---
 mm/mprotect.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index 569cefa668a6..809832954898 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -510,6 +510,7 @@ unsigned long change_protection(struct mmu_gather *tlb,
 
 	BUG_ON((cp_flags & MM_CP_UFFD_WP_ALL) == MM_CP_UFFD_WP_ALL);
 
+#ifdef CONFIG_NUMA_BALANCING
 	/*
 	 * Ordinary protection updates (mprotect, uffd-wp, softdirty tracking)
 	 * are expected to reflect their requirements via VMA flags such that
@@ -517,6 +518,9 @@ unsigned long change_protection(struct mmu_gather *tlb,
 	 */
 	if (cp_flags & MM_CP_PROT_NUMA)
 		newprot = PAGE_NONE;
+#else
+	WARN_ON_ONCE(cp_flags & MM_CP_PROT_NUMA);
+#endif
 
 	if (is_vm_hugetlb_page(vma))
 		pages = hugetlb_change_protection(vma, start, end, newprot,
-- 
2.38.1

-- 
Thanks,

David / dhildenb