From mboxrd@z Thu Jan  1 00:00:00 1970
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Brian Geffon, Pavel Emelyanov, Mike Kravetz, David Hildenbrand,
    peterx@redhat.com, Martin Cracauer, Andrea Arcangeli, Mel Gorman,
    Bobby Powers, Mike Rapoport, "Kirill A . Shutemov", Maya Gokhale,
    Johannes Weiner, Marty McFadden, Denis Plotnikov, Hugh Dickins,
    "Dr . David Alan Gilbert", Jerome Glisse
Subject: [PATCH v6 06/19] mm: merge parameters for change_protection()
Date: Thu, 20 Feb 2020 11:30:59 -0500
Message-Id: <20200220163112.11409-7-peterx@redhat.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20200220163112.11409-1-peterx@redhat.com>
References: <20200220163112.11409-1-peterx@redhat.com>

change_protection() is used by both the NUMA balancing and the
mprotect() code, and it takes one dedicated parameter for each of
these callers (dirty_accountable and prot_numa).  Further, both
parameters are threaded through the whole call chain:

  - change_protection_range()
  - change_p4d_range()
  - change_pud_range()
  - change_pmd_range()
  - ...

Introduce a single flags argument for change_protection() and all of
these helpers to replace the two parameters, so that we no longer pass
multiple parameters down every level of the chain.  More importantly,
this greatly simplifies introducing any new parameter to
change_protection(): the follow-up patches will add one for
userfaultfd write protection.  A minimal sketch of the pattern follows
below.

No functional change at all.
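For reference, here is a minimal, self-contained sketch (plain
userspace C, not kernel code) of the refactoring pattern the diff
below applies.  The MM_CP_* flag names match the patch; change_range(),
the simplified signatures, and the printf() reporting are illustrative
stand-ins only:

#include <stdbool.h>
#include <stdio.h>

/* Flag names match the patch; everything else is a stand-in. */
#define MM_CP_DIRTY_ACCT	(1UL << 0)	/* allow dirty bit accounting */
#define MM_CP_PROT_NUMA		(1UL << 1)	/* change is for NUMA hints */

/* Leaf helper: unpack the bitmap once, where the bits are consumed. */
static unsigned long change_range(unsigned long cp_flags)
{
	bool dirty_accountable = cp_flags & MM_CP_DIRTY_ACCT;
	bool prot_numa = cp_flags & MM_CP_PROT_NUMA;

	printf("dirty_accountable=%d prot_numa=%d\n",
	       dirty_accountable, prot_numa);
	return 0;
}

/* Intermediate levels now forward one argument instead of two. */
static unsigned long change_protection(unsigned long cp_flags)
{
	return change_range(cp_flags);
}

int main(void)
{
	change_protection(MM_CP_PROT_NUMA);	/* the NUMA hinting caller */
	change_protection(MM_CP_DIRTY_ACCT);	/* the mprotect() caller */
	return 0;
}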
Reviewed-by: Jerome Glisse
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/huge_mm.h |  2 +-
 include/linux/mm.h      | 14 +++++++++++++-
 mm/huge_memory.c        |  3 ++-
 mm/mempolicy.c          |  2 +-
 mm/mprotect.c           | 29 ++++++++++++++++-------------
 5 files changed, 33 insertions(+), 17 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5aca3d1bdb32..92220ec66862 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -46,7 +46,7 @@ extern bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 			 pmd_t *old_pmd, pmd_t *new_pmd);
 extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			unsigned long addr, pgprot_t newprot,
-			int prot_numa);
+			unsigned long cp_flags);
 vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write);
 vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write);
 enum transparent_hugepage_flag {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 51a886d50758..547c7415ff92 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1660,9 +1660,21 @@ extern unsigned long move_page_tables(struct vm_area_struct *vma,
 		unsigned long old_addr, struct vm_area_struct *new_vma,
 		unsigned long new_addr, unsigned long len,
 		bool need_rmap_locks);
+
+/*
+ * Flags used by change_protection().  For now we make it a bitmap
+ * so that we can pass in multiple flags just like parameters.
+ * However, for now all callers use only one of the flags at a
+ * time.
+ */
+/* Whether we should allow dirty bit accounting */
+#define  MM_CP_DIRTY_ACCT                  (1UL << 0)
+/* Whether this protection change is for NUMA hints */
+#define  MM_CP_PROT_NUMA                   (1UL << 1)
+
 extern unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
 			      unsigned long end, pgprot_t newprot,
-			      int dirty_accountable, int prot_numa);
+			      unsigned long cp_flags);
 extern int mprotect_fixup(struct vm_area_struct *vma,
 			  struct vm_area_struct **pprev, unsigned long start,
 			  unsigned long end, unsigned long newflags);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b08b199f9a11..2b01765bee92 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1934,13 +1934,14 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
  *  - HPAGE_PMD_NR is protections changed and TLB flush necessary
  */
 int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
-		unsigned long addr, pgprot_t newprot, int prot_numa)
+		unsigned long addr, pgprot_t newprot, unsigned long cp_flags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	spinlock_t *ptl;
 	pmd_t entry;
 	bool preserve_write;
 	int ret;
+	bool prot_numa = cp_flags & MM_CP_PROT_NUMA;
 
 	ptl = __pmd_trans_huge_lock(pmd, vma);
 	if (!ptl)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 977c641f78cf..2ea6c4c0579a 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -598,7 +598,7 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
 {
 	int nr_updated;
 
-	nr_updated = change_protection(vma, addr, end, PAGE_NONE, 0, 1);
+	nr_updated = change_protection(vma, addr, end, PAGE_NONE, MM_CP_PROT_NUMA);
 	if (nr_updated)
 		count_vm_numa_events(NUMA_PTE_UPDATES, nr_updated);
 
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 7a8e84f86831..1565058ebcfc 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -37,12 +37,14 @@
 
 static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long addr, unsigned long end, pgprot_t newprot,
-		int dirty_accountable, int prot_numa)
+		unsigned long cp_flags)
 {
 	pte_t *pte, oldpte;
 	spinlock_t *ptl;
 	unsigned long pages = 0;
 	int target_node = NUMA_NO_NODE;
+	bool dirty_accountable = cp_flags & MM_CP_DIRTY_ACCT;
+	bool prot_numa = cp_flags & MM_CP_PROT_NUMA;
 
 	/*
 	 * Can be called with only the mmap_sem for reading by
@@ -163,7 +165,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 
 static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 		pud_t *pud, unsigned long addr, unsigned long end,
-		pgprot_t newprot, int dirty_accountable, int prot_numa)
+		pgprot_t newprot, unsigned long cp_flags)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -195,7 +197,7 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 				__split_huge_pmd(vma, pmd, addr, false, NULL);
 			} else {
 				int nr_ptes = change_huge_pmd(vma, pmd, addr,
-						newprot, prot_numa);
+							      newprot, cp_flags);
 
 				if (nr_ptes) {
 					if (nr_ptes == HPAGE_PMD_NR) {
@@ -210,7 +212,7 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 			/* fall through, the trans huge pmd just split */
 		}
 		this_pages = change_pte_range(vma, pmd, addr, next, newprot,
-				 dirty_accountable, prot_numa);
+					      cp_flags);
 		pages += this_pages;
 next:
 		cond_resched();
@@ -226,7 +228,7 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 
 static inline unsigned long change_pud_range(struct vm_area_struct *vma,
 		p4d_t *p4d, unsigned long addr, unsigned long end,
-		pgprot_t newprot, int dirty_accountable, int prot_numa)
+		pgprot_t newprot, unsigned long cp_flags)
 {
 	pud_t *pud;
 	unsigned long next;
@@ -238,7 +240,7 @@ static inline unsigned long change_pud_range(struct vm_area_struct *vma,
 		if (pud_none_or_clear_bad(pud))
 			continue;
 		pages += change_pmd_range(vma, pud, addr, next, newprot,
-				 dirty_accountable, prot_numa);
+					  cp_flags);
 	} while (pud++, addr = next, addr != end);
 
 	return pages;
@@ -246,7 +248,7 @@ static inline unsigned long change_pud_range(struct vm_area_struct *vma,
 
 static inline unsigned long change_p4d_range(struct vm_area_struct *vma,
 		pgd_t *pgd, unsigned long addr, unsigned long end,
-		pgprot_t newprot, int dirty_accountable, int prot_numa)
+		pgprot_t newprot, unsigned long cp_flags)
 {
 	p4d_t *p4d;
 	unsigned long next;
@@ -258,7 +260,7 @@ static inline unsigned long change_p4d_range(struct vm_area_struct *vma,
 		if (p4d_none_or_clear_bad(p4d))
 			continue;
 		pages += change_pud_range(vma, p4d, addr, next, newprot,
-				 dirty_accountable, prot_numa);
+					 cp_flags);
 	} while (p4d++, addr = next, addr != end);
 
 	return pages;
@@ -266,7 +268,7 @@ static inline unsigned long change_p4d_range(struct vm_area_struct *vma,
 
 static unsigned long change_protection_range(struct vm_area_struct *vma,
 		unsigned long addr, unsigned long end, pgprot_t newprot,
-		int dirty_accountable, int prot_numa)
+		unsigned long cp_flags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	pgd_t *pgd;
@@ -283,7 +285,7 @@ static unsigned long change_protection_range(struct vm_area_struct *vma,
 		if (pgd_none_or_clear_bad(pgd))
 			continue;
 		pages += change_p4d_range(vma, pgd, addr, next, newprot,
-				 dirty_accountable, prot_numa);
+					 cp_flags);
 	} while (pgd++, addr = next, addr != end);
 
 	/* Only flush the TLB if we actually modified any entries: */
@@ -296,14 +298,15 @@ static unsigned long change_protection_range(struct vm_area_struct *vma,
 
 unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
 		       unsigned long end, pgprot_t newprot,
-		       int dirty_accountable, int prot_numa)
+		       unsigned long cp_flags)
 {
 	unsigned long pages;
 
 	if (is_vm_hugetlb_page(vma))
 		pages = hugetlb_change_protection(vma, start, end, newprot);
 	else
-		pages = change_protection_range(vma, start, end, newprot, dirty_accountable, prot_numa);
+		pages = change_protection_range(vma, start, end, newprot,
+						cp_flags);
 
 	return pages;
 }
@@ -425,7 +428,7 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
 	vma_set_page_prot(vma);
 
 	change_protection(vma, start, end, vma->vm_page_prot,
-			  dirty_accountable, 0);
+			  dirty_accountable ? MM_CP_DIRTY_ACCT : 0);
 
 	/*
 	 * Private VM_LOCKED VMA becoming writable: trigger COW to avoid major
-- 
2.24.1
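To make concrete the claim that new parameters become easy to add, the
earlier sketch can be extended with a third flag.  MM_CP_UFFD_WP is a
made-up name here, standing in for whatever the follow-up userfaultfd
write-protect patches actually define; the point is that only the
#define and the leaf consumer change, while every signature in the
call chain stays the same:

#include <stdbool.h>
#include <stdio.h>

#define MM_CP_DIRTY_ACCT	(1UL << 0)	/* as in the patch */
#define MM_CP_PROT_NUMA		(1UL << 1)	/* as in the patch */
/* Hypothetical new flag: only this #define and the leaf below change. */
#define MM_CP_UFFD_WP		(1UL << 2)

static unsigned long change_range(unsigned long cp_flags)
{
	bool prot_numa = cp_flags & MM_CP_PROT_NUMA;
	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;	/* new consumer */

	printf("prot_numa=%d uffd_wp=%d\n", prot_numa, uffd_wp);
	return 0;
}

int main(void)
{
	/* Flags can also be combined without growing any signature. */
	return (int)change_range(MM_CP_PROT_NUMA | MM_CP_UFFD_WP);
}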