From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, Andrew Morton
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 3/4] mm: Remove munlock_vma_page()
Date: Mon, 16 Jan 2023 19:28:26 +0000
Message-Id: <20230116192827.2146732-4-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230116192827.2146732-1-willy@infradead.org>
References: <20230116192827.2146732-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
All callers now have a folio and can call munlock_vma_folio().  Update
the documentation to refer to munlock_vma_folio().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 Documentation/mm/unevictable-lru.rst |  4 ++--
 kernel/events/uprobes.c              |  1 -
 mm/internal.h                        |  8 --------
 mm/rmap.c                            | 12 ++++++------
 4 files changed, 8 insertions(+), 17 deletions(-)

diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index 45aadfefb810..9afceabf26f7 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -486,7 +486,7 @@ Before the unevictable/mlock changes, mlocking did not mark the pages in any
 way, so unmapping them required no processing.
 
 For each PTE (or PMD) being unmapped from a VMA, page_remove_rmap() calls
-munlock_vma_page(), which calls munlock_page() when the VMA is VM_LOCKED
+munlock_vma_folio(), which calls munlock_folio() when the VMA is VM_LOCKED
 (unless it was a PTE mapping of a part of a transparent huge page).
 
 munlock_page() uses the mlock pagevec to batch up work to be done under
@@ -510,7 +510,7 @@ which had been Copied-On-Write from the file pages now being truncated.
 
 Mlocked pages can be munlocked and deleted in this way: like with munmap(),
 for each PTE (or PMD) being unmapped from a VMA, page_remove_rmap() calls
-munlock_vma_page(), which calls munlock_page() when the VMA is VM_LOCKED
+munlock_vma_folio(), which calls munlock_folio() when the VMA is VM_LOCKED
 (unless it was a PTE mapping of a part of a transparent huge page).
 
 However, if there is a racing munlock(), since mlock_vma_pages_range() starts
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 29f36d2ae129..1a3904e0179c 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -22,7 +22,6 @@
 #include <linux/swap.h>		/* folio_free_swap */
 #include <linux/ptrace.h>	/* user_enable_single_step */
 #include <linux/kdebug.h>	/* notifier mechanism */
-#include "../../mm/internal.h" /* munlock_vma_page */
 #include <linux/percpu-rwsem.h>
 #include <linux/task_work.h>
 #include <linux/shmem_fs.h>
diff --git a/mm/internal.h b/mm/internal.h
index 0b74105ea363..ce462bf145b4 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -548,7 +548,6 @@ static inline void mlock_vma_folio(struct folio *folio,
 }
 
 void munlock_folio(struct folio *folio);
-
 static inline void munlock_vma_folio(struct folio *folio,
 		struct vm_area_struct *vma, bool compound)
 {
@@ -557,11 +556,6 @@ static inline void munlock_vma_folio(struct folio *folio,
 		munlock_folio(folio);
 }
 
-static inline void munlock_vma_page(struct page *page,
-		struct vm_area_struct *vma, bool compound)
-{
-	munlock_vma_folio(page_folio(page), vma, compound);
-}
 void mlock_new_folio(struct folio *folio);
 bool need_mlock_drain(int cpu);
 void mlock_drain_local(void);
@@ -650,8 +644,6 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
 }
 #else /* !CONFIG_MMU */
 static inline void unmap_mapping_folio(struct folio *folio) { }
-static inline void munlock_vma_page(struct page *page,
-		struct vm_area_struct *vma, bool compound) { }
 static inline void mlock_new_folio(struct folio *folio) { }
 static inline bool need_mlock_drain(int cpu) { return false; }
 static inline void mlock_drain_local(void) { }
diff --git a/mm/rmap.c b/mm/rmap.c
index 1934f9dc9758..948ca17a96ad 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1432,14 +1432,14 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
 	}
 
 	/*
-	 * It would be tidy to reset PageAnon mapping when fully unmapped,
-	 * but that might overwrite a racing page_add_anon_rmap
-	 * which increments mapcount after us but sets mapping
-	 * before us: so leave the reset to free_pages_prepare,
-	 * and remember that it's only reliable while mapped.
+	 * It would be tidy to reset folio_test_anon mapping when fully
+	 * unmapped, but that might overwrite a racing page_add_anon_rmap
+	 * which increments mapcount after us but sets mapping before us:
+	 * so leave the reset to free_pages_prepare, and remember that
+	 * it's only reliable while mapped.
 	 */
 
-	munlock_vma_page(page, vma, compound);
+	munlock_vma_folio(folio, vma, compound);
 }
 
 /*
-- 
2.35.1