Date: Wed, 30 Sep 2020 12:40:20 +0200
From: Jan Kara
To: Matthew Wilcox
Cc: Jan Kara, linux-mm@kvack.org, Andrew Morton, Hugh Dickins,
	William Kucharski, Johannes Weiner, Yang Shi, Dave Chinner,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 05/12] mm: Add and use find_lock_entries
Message-ID: <20200930104020.GR10896@quack2.suse.cz>
References: <20200914130042.11442-1-willy@infradead.org>
 <20200914130042.11442-6-willy@infradead.org>
 <20200929085855.GD10896@quack2.suse.cz>
 <20200929124806.GC20115@casper.infradead.org>
In-Reply-To: <20200929124806.GC20115@casper.infradead.org>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue 29-09-20 13:48:06, Matthew Wilcox wrote:
> On Tue, Sep 29, 2020 at 10:58:55AM +0200, Jan Kara wrote:
> > On Mon 14-09-20 14:00:35, Matthew Wilcox (Oracle) wrote:
> > > We have three functions (shmem_undo_range(), truncate_inode_pages_range()
> > > and invalidate_mapping_pages()) which want exactly this function, so
> > > add it to filemap.c.
> > >
> > > Signed-off-by: Matthew Wilcox (Oracle)
> > > diff --git a/mm/shmem.c b/mm/shmem.c
> > ...
> > > index b65263d9bb67..a73ce8ce28e3 100644
> > > --- a/mm/shmem.c
> > > +++ b/mm/shmem.c
> > > @@ -905,12 +905,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
> > >
> > >  	pagevec_init(&pvec);
> > >  	index = start;
> > > -	while (index < end) {
> > > -		pvec.nr = find_get_entries(mapping, index,
> > > -			min(end - index, (pgoff_t)PAGEVEC_SIZE),
> > > -			pvec.pages, indices);
> > > -		if (!pvec.nr)
> > > -			break;
> > > +	while (index < end && find_lock_entries(mapping, index, end - 1,
> > > +			&pvec, indices)) {
> > >  		for (i = 0; i < pagevec_count(&pvec); i++) {
> > >  			struct page *page = pvec.pages[i];
> > >
> > > @@ -925,18 +921,10 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
> > >  					index, page);
> > >  				continue;
> > >  			}
> > > +			index += thp_nr_pages(page) - 1;
> > >
> > > -			VM_BUG_ON_PAGE(page_to_pgoff(page) != index, page);
> > > -
> > > -			if (!trylock_page(page))
> > > -				continue;
> > > -
> > > -			if ((!unfalloc || !PageUptodate(page)) &&
> > > -			    page_mapping(page) == mapping) {
> > > -				VM_BUG_ON_PAGE(PageWriteback(page), page);
> > > -				if (shmem_punch_compound(page, start, end))
> > > -					truncate_inode_page(mapping, page);
> > > -			}
> > > +			if (!unfalloc || !PageUptodate(page))
> > > +				truncate_inode_page(mapping, page);
> >
> > Is dropping shmem_punch_compound() really safe? AFAICS it can also call
> > split_huge_page() which will try to split THP to be able to truncate it.
> > That being said there's another loop in shmem_undo_range() which will try
> > again so what you did might make a difference with performance but not much
> > else. But still it would be good to at least comment about this in the
> > changelog...
>
> OK, I need to provide better argumentation in the changelog.
>
> shmem_punch_compound() handles partial THPs. By the end of this series,
> we handle the partial pages in the next part of the function ... the
> part where we're handling partial PAGE_SIZE pages.
> At this point in
> the series, it's safe to remove the shmem_punch_compound() call because
> the new find_lock_entries() loop will only return THPs that lie entirely
> within the range.

Yes, plus transitioning the first loop in shmem_undo_range() to
find_lock_entries(), which skips partial THPs, is safe at this point in the
series because the second loop in shmem_undo_range() still uses
find_get_entries() and shmem_punch_compound() and so properly treats
partial THPs.

Anyway, I'm now convinced the patch is fine, so feel free to add:

Reviewed-by: Jan Kara

after expanding the changelog.

								Honza
--
Jan Kara
SUSE Labs, CR