Date: Sun, 3 Oct 2021 14:10:19 +0200
From: Peter Zijlstra
To: Nadav Amit
Cc: Andrew Morton, LKML, Linux-MM, Peter Xu, Nadav Amit, Andrea Arcangeli,
	Andrew Cooper, Andy Lutomirski, Dave Hansen, Thomas Gleixner,
	Will Deacon, Yu Zhao, Nick Piggin, x86@kernel.org
Subject: Re: [PATCH 1/2] mm/mprotect: use mmu_gather
Message-ID: <20211003121019.GF4323@worktop.programming.kicks-ass.net>
References: <20210925205423.168858-1-namit@vmware.com>
 <20210925205423.168858-2-namit@vmware.com>
In-Reply-To: <20210925205423.168858-2-namit@vmware.com>

On Sat, Sep 25, 2021 at 01:54:22PM -0700, Nadav Amit wrote:

> @@ -338,25 +344,25 @@ static unsigned long change_protection_range(struct vm_area_struct *vma,
>  	struct mm_struct *mm = vma->vm_mm;
>  	pgd_t *pgd;
>  	unsigned long next;
> -	unsigned long start = addr;
>  	unsigned long pages = 0;
> +	struct mmu_gather tlb;
>  
>  	BUG_ON(addr >= end);
>  	pgd = pgd_offset(mm, addr);
>  	flush_cache_range(vma, addr, end);
>  	inc_tlb_flush_pending(mm);

That seems unbalanced...

> +	tlb_gather_mmu(&tlb, mm);
> +	tlb_start_vma(&tlb, vma);
>  	do {
>  		next = pgd_addr_end(addr, end);
>  		if (pgd_none_or_clear_bad(pgd))
>  			continue;
> -		pages += change_p4d_range(vma, pgd, addr, next, newprot,
> +		pages += change_p4d_range(&tlb, vma, pgd, addr, next, newprot,
>  					  cp_flags);
>  	} while (pgd++, addr = next, addr != end);
>  
> -	/* Only flush the TLB if we actually modified any entries: */
> -	if (pages)
> -		flush_tlb_range(vma, start, end);
> -	dec_tlb_flush_pending(mm);

... seeing you do remove the extra decrement.

> +	tlb_end_vma(&tlb, vma);
> +	tlb_finish_mmu(&tlb);
> 
> 	return pages;
> }
> -- 
> 2.25.1
> 
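
FWIW, a minimal sketch of what a balanced version of that function could
look like, with the leftover inc_tlb_flush_pending() dropped. This assumes
tlb_gather_mmu()/tlb_finish_mmu() already do the matching
inc_tlb_flush_pending()/dec_tlb_flush_pending() pair internally, so the
explicit increment above would otherwise never be paired with a decrement.
The prototype outside the quoted hunk is reconstructed from the hunk
header; this is only an illustration of the balancing point, not the
posted patch:

/*
 * Sketch only (not the posted patch): change_protection_range() with the
 * leftover inc_tlb_flush_pending() removed, on the assumption that
 * tlb_gather_mmu()/tlb_finish_mmu() bracket the flush-pending count
 * themselves, leaving no manual accounting in this function.
 */
static unsigned long change_protection_range(struct vm_area_struct *vma,
		unsigned long addr, unsigned long end, pgprot_t newprot,
		unsigned long cp_flags)
{
	struct mm_struct *mm = vma->vm_mm;
	struct mmu_gather tlb;
	unsigned long pages = 0;
	unsigned long next;
	pgd_t *pgd;

	BUG_ON(addr >= end);
	pgd = pgd_offset(mm, addr);
	flush_cache_range(vma, addr, end);

	/*
	 * The gather takes over both the pending-flush accounting and the
	 * deferred TLB invalidation for whatever the loop actually touches.
	 */
	tlb_gather_mmu(&tlb, mm);
	tlb_start_vma(&tlb, vma);
	do {
		next = pgd_addr_end(addr, end);
		if (pgd_none_or_clear_bad(pgd))
			continue;
		pages += change_p4d_range(&tlb, vma, pgd, addr, next, newprot,
					  cp_flags);
	} while (pgd++, addr = next, addr != end);
	tlb_end_vma(&tlb, vma);
	tlb_finish_mmu(&tlb);

	return pages;
}

With the gather bracketing the loop, the old "only flush if we changed
something" special case also falls away naturally, since the gather only
flushes ranges that were actually recorded.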