From: Daniel Axtens
To: Andrew Morton, Dan Carpenter
Cc: linux-mm@kvack.org
Subject: Re: [bug report] mm-add-apply_to_existing_pages-helper-fix
In-Reply-To: <20191212170336.a8f7b29be837ec265126dd51@linux-foundation.org>
References: <20191212092949.efpedkoshbehydfx@kili.mountain>
 <20191212170336.a8f7b29be837ec265126dd51@linux-foundation.org>
Date: Fri, 13 Dec 2019 13:04:33 +1100
Message-ID: <87pngtaslq.fsf@dja-thinkpad.axtens.net>

Hi Andrew,

> Daniel, have you had a chance to runtime test these various fiddles?

Yes, I've been testing with -fix and -fix-fix since they hit next. They
work fine for me on x86 and powerpc.

Regards,
Daniel

>
> From: Andrew Morton
> Subject: mm-add-apply_to_existing_pages-helper-fix
>
> reduce code duplication
>
> Cc: Daniel Axtens
> Cc: Alexander Potapenko
> Cc: Andrey Ryabinin
> Cc: Dmitry Vyukov
> Cc: Qian Cai
> Cc: Uladzislau Rezki (Sony)
> Signed-off-by: Andrew Morton
> ---
>
>  mm/memory.c | 43 +++++++++++++++++--------------------------
>  1 file changed, 17 insertions(+), 26 deletions(-)
>
> --- a/mm/memory.c~mm-add-apply_to_existing_pages-helper-fix
> +++ a/mm/memory.c
> @@ -2141,12 +2141,9 @@ static int apply_to_p4d_range(struct mm_
>  	return err;
>  }
>
> -/*
> - * Scan a region of virtual memory, filling in page tables as necessary
> - * and calling a provided function on each leaf page table.
> - */
> -int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
> -			unsigned long size, pte_fn_t fn, void *data)
> +static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
> +				 unsigned long size, pte_fn_t fn,
> +				 void *data, bool create)
>  {
>  	pgd_t *pgd;
>  	unsigned long next;
> @@ -2159,13 +2156,25 @@ int apply_to_page_range(struct mm_struct
>  	pgd = pgd_offset(mm, addr);
>  	do {
>  		next = pgd_addr_end(addr, end);
> -		err = apply_to_p4d_range(mm, pgd, addr, next, fn, data, true);
> +		if (!create && pgd_none_or_clear_bad(pgd))
> +			continue;
> +		err = apply_to_p4d_range(mm, pgd, addr, next, fn, data, create);
>  		if (err)
>  			break;
>  	} while (pgd++, addr = next, addr != end);
>
>  	return err;
>  }
> +
> +/*
> + * Scan a region of virtual memory, filling in page tables as necessary
> + * and calling a provided function on each leaf page table.
> + */
> +int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
> +			unsigned long size, pte_fn_t fn, void *data)
> +{
> +	return __apply_to_page_range(mm, addr, size, fn, data, true);
> +}
>  EXPORT_SYMBOL_GPL(apply_to_page_range);
>
>  /*
> @@ -2178,25 +2187,7 @@ EXPORT_SYMBOL_GPL(apply_to_page_range);
>  int apply_to_existing_pages(struct mm_struct *mm, unsigned long addr,
>  			    unsigned long size, pte_fn_t fn, void *data)
>  {
> -	pgd_t *pgd;
> -	unsigned long next;
> -	unsigned long end = addr + size;
> -	int err = 0;
> -
> -	if (WARN_ON(addr >= end))
> -		return -EINVAL;
> -
> -	pgd = pgd_offset(mm, addr);
> -	do {
> -		next = pgd_addr_end(addr, end);
> -		if (pgd_none_or_clear_bad(pgd))
> -			continue;
> -		err = apply_to_p4d_range(mm, pgd, addr, next, fn, data, false);
> -		if (err)
> -			break;
> -	} while (pgd++, addr = next, addr != end);
> -
> -	return err;
> +	return __apply_to_page_range(mm, addr, size, fn, data, false);
>  }
>  EXPORT_SYMBOL_GPL(apply_to_existing_pages);
>
> _
>
>
> From: Andrew Morton
> Subject: mm-add-apply_to_existing_pages-helper-fix-fix
>
> s/apply_to_existing_pages/apply_to_existing_page_range/
>
> Cc: Daniel Axtens
> Cc: Alexander Potapenko
> Cc: Andrey Ryabinin
> Cc: Dmitry Vyukov
> Cc: Qian Cai
> Cc: Uladzislau Rezki (Sony)
> Signed-off-by: Andrew Morton
> ---
>
>  include/linux/mm.h | 6 +++---
>  mm/memory.c        | 6 +++---
>  2 files changed, 6 insertions(+), 6 deletions(-)
>
> --- a/include/linux/mm.h~mm-add-apply_to_existing_pages-helper-fix-fix
> +++ a/include/linux/mm.h
> @@ -2621,9 +2621,9 @@ static inline int vm_fault_to_errno(vm_f
>  typedef int (*pte_fn_t)(pte_t *pte, unsigned long addr, void *data);
>  extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
>  			       unsigned long size, pte_fn_t fn, void *data);
> -extern int apply_to_existing_pages(struct mm_struct *mm, unsigned long address,
> -				   unsigned long size, pte_fn_t fn,
> -				   void *data);
> +extern int apply_to_existing_page_range(struct mm_struct *mm,
> +				   unsigned long address, unsigned long size,
> +				   pte_fn_t fn, void *data);
>
>  #ifdef CONFIG_PAGE_POISONING
>  extern bool page_poisoning_enabled(void);
> --- a/mm/memory.c~mm-add-apply_to_existing_pages-helper-fix-fix
> +++ a/mm/memory.c
> @@ -2184,12 +2184,12 @@ EXPORT_SYMBOL_GPL(apply_to_page_range);
>   * Unlike apply_to_page_range, this does _not_ fill in page tables
>   * where they are absent.
>   */
> -int apply_to_existing_pages(struct mm_struct *mm, unsigned long addr,
> -			    unsigned long size, pte_fn_t fn, void *data)
> +int apply_to_existing_page_range(struct mm_struct *mm, unsigned long addr,
> +				 unsigned long size, pte_fn_t fn, void *data)
>  {
>  	return __apply_to_page_range(mm, addr, size, fn, data, false);
>  }
> -EXPORT_SYMBOL_GPL(apply_to_existing_pages);
> +EXPORT_SYMBOL_GPL(apply_to_existing_page_range);
>
>  /*
>   * handle_pte_fault chooses page fault handler according to an entry which was
> _
>
> From: Andrew Morton
> Subject: mm-add-apply_to_existing_pages-helper-fix-fix-fix
>
> initialize __apply_to_page_range::err
>
> Cc: Alexander Potapenko
> Cc: Andrey Ryabinin
> Cc: Daniel Axtens
> Cc: Dmitry Vyukov
> Cc: Qian Cai
> Cc: Uladzislau Rezki (Sony)
> Signed-off-by: Andrew Morton
> ---
>
>  mm/memory.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/mm/memory.c~mm-add-apply_to_existing_pages-helper-fix-fix-fix
> +++ a/mm/memory.c
> @@ -2148,7 +2148,7 @@ static int __apply_to_page_range(struct
>  	pgd_t *pgd;
>  	unsigned long next;
>  	unsigned long end = addr + size;
> -	int err;
> +	int err = 0;
>
>  	if (WARN_ON(addr >= end))
>  		return -EINVAL;
> _
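
Not part of the patches above, but for anyone reading along, a rough,
untested sketch of how a caller might use the renamed helper. The callback
count_present_pte() and the wrapper count_mapped() are made-up names for
illustration; the signatures follow the pte_fn_t typedef and the
apply_to_existing_page_range() declaration quoted in the -fix-fix hunk, and
init_mm is only used as an example mm.

#include <linux/mm.h>

/* Hypothetical pte_fn_t callback: count the PTEs that are present. */
static int count_present_pte(pte_t *pte, unsigned long addr, void *data)
{
	unsigned long *count = data;

	if (!pte_none(*pte))
		(*count)++;

	return 0;	/* a non-zero return would stop the walk early */
}

/* Hypothetical caller: count mapped pages in a kernel address range. */
static unsigned long count_mapped(unsigned long addr, unsigned long size)
{
	unsigned long present = 0;

	/*
	 * Unlike apply_to_page_range(), this only visits page tables that
	 * already exist (create == false), so absent ranges are skipped
	 * rather than allocated.
	 */
	apply_to_existing_page_range(&init_mm, addr, size,
				     count_present_pte, &present);

	return present;
}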