Date: Mon, 4 Oct 2021 13:49:37 +0100
From: Matthew Wilcox
To: Mel Gorman
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC] mm: Optimise put_pages_list()
References: <20210930163258.3114404-1-willy@infradead.org> <20211004091037.GM3959@techsingularity.net>
In-Reply-To: <20211004091037.GM3959@techsingularity.net>

On Mon, Oct 04, 2021 at 10:10:37AM +0100, Mel Gorman wrote:
> On Thu, Sep 30, 2021 at 05:32:58PM +0100, Matthew Wilcox (Oracle) wrote:
> > Instead of calling put_page() one page at a time, pop pages off
> > the list if there are other refcounts and pass the remainder
> > to free_unref_page_list(). This should be a speed improvement,
> > but I have no measurements to support that. It's also not very
> > widely used today, so I can't say I've really tested it. I'm only
> > bothering with this patch because I'd like the IOMMU code to use it:
> > https://lore.kernel.org/lkml/20210930162043.3111119-1-willy@infradead.org/
> >
> > Signed-off-by: Matthew Wilcox (Oracle)
>
> I see your motivation, but you need to check that all users of
> put_pages_list() (current and future) handle destroy_compound_page()
> properly, or handle it within put_pages_list(). For example, the
> release_pages() user of free_unref_page_list() calls
> __put_compound_page() directly before freeing. put_pages_list() as it
> stands will call destroy_compound_page(), but free_unref_page_list()
> does not destroy compound pages in free_pages_prepare().

Quite right.  I was really only thinking about order-zero pages because
there aren't any users of compound pages that call this.  But of course,
we should be robust against future callers.  So the obvious thing to do
is to copy what release_pages() does.  (Note that a still-referenced
page must be skipped with `continue` so that a compound page whose
refcount did not drop to zero is neither unlinked twice nor freed early):

+++ b/mm/swap.c
@@ -144,6 +144,11 @@ void put_pages_list(struct list_head *pages)

 	list_for_each_entry_safe(page, next, pages, lru) {
-		if (!put_page_testzero(page))
+		if (!put_page_testzero(page)) {
 			list_del(&page->lru);
+			continue;
+		}
+		if (PageCompound(page)) {
+			list_del(&page->lru);
+			__put_compound_page(page);
+		}
 	}

 	free_unref_page_list(pages);

But would it be better to have free_unref_page_list() handle compound
pages itself?  (Unlinking before __put_compound_page() so we don't touch
page->lru after the page has gone back to the allocator):

+++ b/mm/page_alloc.c
@@ -3427,6 +3427,11 @@ void free_unref_page_list(struct list_head *list)

 	/* Prepare pages for freeing */
 	list_for_each_entry_safe(page, next, list, lru) {
+		if (PageCompound(page)) {
+			list_del(&page->lru);
+			__put_compound_page(page);
+			continue;
+		}
 		pfn = page_to_pfn(page);
 		if (!free_unref_page_prepare(page, pfn, 0)) {
 			list_del(&page->lru);

(and delete the special handling from release_pages() in the same patch)