From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [PATCH 2/4] mm/gup: decrement head page once for group of subpages
To: Joao Martins <joao.m.martins@oracle.com>, linux-mm@kvack.org
Cc: Andrew Morton, Jason Gunthorpe, Doug Ledford, Matthew Wilcox
References: <20210203220025.8568-1-joao.m.martins@oracle.com>
 <20210203220025.8568-3-joao.m.martins@oracle.com>
From: John Hubbard <jhubbard@nvidia.com>
Date: Wed, 3 Feb 2021 15:28:18 -0800
In-Reply-To: <20210203220025.8568-3-joao.m.martins@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 7bit

On 2/3/21 2:00 PM, Joao Martins wrote:
> Rather than decrementing the head page refcount one by one, we
> walk the page array and check which pages belong to the same
> compound_head. Later on, we decrement the calculated amount
> of references in a single write to the head page. To that end,
> the switch to for_each_compound_head() does most of the work.
>
> set_page_dirty() needs no adjustment, as it's a nop for
> non-dirty head pages and it doesn't operate on tail pages.
>
> This considerably improves unpinning of pages with THP and
> hugetlbfs:
>
> - THP
>   gup_test -t -m 16384 -r 10 [-L|-a] -S -n 512 -w
>   PIN_LONGTERM_BENCHMARK (put values): ~87.6k us -> ~23.2k us
>
> - 16G with 1G huge page size
>   gup_test -f /mnt/huge/file -m 16384 -r 10 [-L|-a] -S -n 512 -w
>   PIN_LONGTERM_BENCHMARK (put values): ~87.6k us -> ~27.5k us
>
> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
> ---
>  mm/gup.c | 29 +++++++++++------------------
>  1 file changed, 11 insertions(+), 18 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 4f88dcef39f2..971a24b4b73f 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -270,20 +270,15 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
>  				  bool make_dirty)
>  {
>  	unsigned long index;
> -
> -	/*
> -	 * TODO: this can be optimized for huge pages: if a series of pages is
> -	 * physically contiguous and part of the same compound page, then a
> -	 * single operation to the head page should suffice.
> -	 */

Great to see this TODO (and the related one below) finally done!

Everything looks correct here. (For readers who want to see the batching
idea in isolation, a small standalone sketch is appended after the quoted
patch.)

Reviewed-by: John Hubbard <jhubbard@nvidia.com>

thanks,
--
John Hubbard
NVIDIA

> +	struct page *head;
> +	unsigned int ntails;
>
>  	if (!make_dirty) {
>  		unpin_user_pages(pages, npages);
>  		return;
>  	}
>
> -	for (index = 0; index < npages; index++) {
> -		struct page *page = compound_head(pages[index]);
> +	for_each_compound_head(index, pages, npages, head, ntails) {
>  		/*
>  		 * Checking PageDirty at this point may race with
>  		 * clear_page_dirty_for_io(), but that's OK. Two key
> @@ -304,9 +299,9 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
>  		 * written back, so it gets written back again in the
>  		 * next writeback cycle. This is harmless.
>  		 */
> -		if (!PageDirty(page))
> -			set_page_dirty_lock(page);
> -		unpin_user_page(page);
> +		if (!PageDirty(head))
> +			set_page_dirty_lock(head);
> +		put_compound_head(head, ntails, FOLL_PIN);
>  	}
>  }
>  EXPORT_SYMBOL(unpin_user_pages_dirty_lock);
>
> @@ -323,6 +318,8 @@ EXPORT_SYMBOL(unpin_user_pages_dirty_lock);
>  void unpin_user_pages(struct page **pages, unsigned long npages)
>  {
>  	unsigned long index;
> +	struct page *head;
> +	unsigned int ntails;
>
>  	/*
>  	 * If this WARN_ON() fires, then the system *might* be leaking pages (by
> @@ -331,13 +328,9 @@ void unpin_user_pages(struct page **pages, unsigned long npages)
>  	 */
>  	if (WARN_ON(IS_ERR_VALUE(npages)))
>  		return;
> -	/*
> -	 * TODO: this can be optimized for huge pages: if a series of pages is
> -	 * physically contiguous and part of the same compound page, then a
> -	 * single operation to the head page should suffice.
> -	 */
> -	for (index = 0; index < npages; index++)
> -		unpin_user_page(pages[index]);
> +
> +	for_each_compound_head(index, pages, npages, head, ntails)
> +		put_compound_head(head, ntails, FOLL_PIN);
>  }
>  EXPORT_SYMBOL(unpin_user_pages);
>
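
P.S. Here is a minimal userspace C sketch of the batching idea behind
for_each_compound_head() / put_compound_head(). It is not the kernel
implementation; the struct, field, and function names are illustrative
stand-ins. It only shows how consecutive array entries that share one
compound head get folded into a single refcount update:

/*
 * Sketch only: walk an array of "pages", batch consecutive entries
 * that share the same head, and apply one refcount update of size
 * ntails per head instead of ntails separate decrements.
 */
#include <stdio.h>

struct fake_page {
	struct fake_page *head;   /* compound head; points to self if head */
	int refcount;
};

static void put_head(struct fake_page *head, unsigned int ntails)
{
	head->refcount -= ntails;  /* one write instead of ntails writes */
}

int main(void)
{
	struct fake_page heads[2] = { { &heads[0], 8 }, { &heads[1], 8 } };
	/* 4 subpages of head 0, then 2 of head 1, as GUP might return */
	struct fake_page *pages[6] = {
		&heads[0], &heads[0], &heads[0], &heads[0],
		&heads[1], &heads[1],
	};
	unsigned long npages = 6, i = 0;

	while (i < npages) {
		struct fake_page *head = pages[i]->head;
		unsigned int ntails = 1;

		/* extend the batch while later entries share this head */
		while (i + ntails < npages && pages[i + ntails]->head == head)
			ntails++;

		put_head(head, ntails);
		i += ntails;
	}

	printf("refcounts: %d %d\n", heads[0].refcount, heads[1].refcount);
	return 0;
}

Compiled as-is, it prints "refcounts: 4 6": each head is decremented once
by its batch size, rather than once per subpage, which is where the
benchmark improvement in the commit message comes from.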