Date: Tue, 29 Sep 2020 10:28:28 +0200
From: Jan Kara
To: "Matthew Wilcox (Oracle)"
Cc: linux-mm@kvack.org, Andrew Morton, Hugh Dickins, William Kucharski,
    Johannes Weiner, Jan Kara, Yang Shi, Dave Chinner,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 02/12] mm/shmem: Use pagevec_lookup in shmem_unlock_mapping
Message-ID: <20200929082828.GB10896@quack2.suse.cz>
References: <20200914130042.11442-1-willy@infradead.org>
 <20200914130042.11442-3-willy@infradead.org>
In-Reply-To: <20200914130042.11442-3-willy@infradead.org>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Mon 14-09-20 14:00:32, Matthew Wilcox (Oracle) wrote:
> The comment shows that the reason for using find_get_entries() is now
> stale; find_get_pages() will not return 0 if it hits a consecutive run
> of swap entries, and I don't believe it has since 2011. pagevec_lookup()
> is a simpler function to use than find_get_pages(), so use it instead.
>
> Signed-off-by: Matthew Wilcox (Oracle)

Looks good to me. BTW, I think I've already reviewed this...
You can add:

Reviewed-by: Jan Kara

								Honza

> ---
>  mm/shmem.c | 11 +----------
>  1 file changed, 1 insertion(+), 10 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 58bc9e326d0d..108931a6cc43 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -840,7 +840,6 @@ unsigned long shmem_swap_usage(struct vm_area_struct *vma)
>  void shmem_unlock_mapping(struct address_space *mapping)
>  {
>  	struct pagevec pvec;
> -	pgoff_t indices[PAGEVEC_SIZE];
>  	pgoff_t index = 0;
>
>  	pagevec_init(&pvec);
> @@ -848,16 +847,8 @@ void shmem_unlock_mapping(struct address_space *mapping)
>  	 * Minor point, but we might as well stop if someone else SHM_LOCKs it.
>  	 */
>  	while (!mapping_unevictable(mapping)) {
> -		/*
> -		 * Avoid pagevec_lookup(): find_get_pages() returns 0 as if it
> -		 * has finished, if it hits a row of PAGEVEC_SIZE swap entries.
> -		 */
> -		pvec.nr = find_get_entries(mapping, index,
> -					   PAGEVEC_SIZE, pvec.pages, indices);
> -		if (!pvec.nr)
> +		if (!pagevec_lookup(&pvec, mapping, &index))
> 			break;
> -		index = indices[pvec.nr - 1] + 1;
> -		pagevec_remove_exceptionals(&pvec);
> 		check_move_unevictable_pages(&pvec);
> 		pagevec_release(&pvec);
> 		cond_resched();
> --
> 2.28.0
>

-- 
Jan Kara
SUSE Labs, CR