Subject: Re: [RFC PATCH 5/6] mm: truncate: split thp to a non-zero order if possible.
From: Ralph Campbell
To: Zi Yan, linux-mm@kvack.org, Matthew Wilcox
CC: Kirill A. Shutemov, Roman Gushchin, Andrew Morton, Yang Shi, Michal Hocko, John Hubbard, David Nellans
Date: Thu, 12 Nov 2020 14:08:56 -0800
In-Reply-To: <20201111204008.21332-6-zi.yan@sent.com>
References: <20201111204008.21332-1-zi.yan@sent.com> <20201111204008.21332-6-zi.yan@sent.com>

On 11/11/20 12:40 PM, Zi Yan wrote:
> From: Zi Yan
>
> To minimize the number of pages after a truncation, when truncating a
> THP, we do not need to split it all the way down to order-0. The THP has
> at most three parts: the part before offset, the part to be truncated,
> and the part left at the end. Use the non-zero minimum of them to decide
> what order we split the THP to.
>
> Signed-off-by: Zi Yan
> ---
>  mm/truncate.c | 22 ++++++++++++++++++++--
>  1 file changed, 20 insertions(+), 2 deletions(-)
>
> diff --git a/mm/truncate.c b/mm/truncate.c
> index 20bd17538ec2..6d8e3c6115bc 100644
> --- a/mm/truncate.c
> +++ b/mm/truncate.c
> @@ -237,7 +237,7 @@ int truncate_inode_page(struct address_space *mapping, struct page *page)
>  bool truncate_inode_partial_page(struct page *page, loff_t start, loff_t end)
>  {
>  	loff_t pos = page_offset(page);
> -	unsigned int offset, length;
> +	unsigned int offset, length, left, min_subpage_size = PAGE_SIZE;

Maybe use "remaining" instead of "left" since I think of the latter as
the length of the left side (offset).
>  	if (pos < start)
>  		offset = start - pos;
> @@ -248,6 +248,7 @@ bool truncate_inode_partial_page(struct page *page, loff_t start, loff_t end)
>  		length = length - offset;
>  	else
>  		length = end + 1 - pos - offset;
> +	left = thp_size(page) - offset - length;
>
>  	wait_on_page_writeback(page);
>  	if (length == thp_size(page)) {
> @@ -267,7 +268,24 @@ bool truncate_inode_partial_page(struct page *page, loff_t start, loff_t end)
>  	do_invalidatepage(page, offset, length);
>  	if (!PageTransHuge(page))
>  		return true;
> -	return split_huge_page(page) == 0;
> +
> +	/*
> +	 * find the non-zero minimum of offset, length, and left and use it to
> +	 * decide the new order of the page after split
> +	 */
> +	if (offset && left)
> +		min_subpage_size = min_t(unsigned int,
> +					 min_t(unsigned int, offset, length),
> +					 left);
> +	else if (!offset)
> +		min_subpage_size = min_t(unsigned int, length, left);
> +	else /* !left */
> +		min_subpage_size = min_t(unsigned int, length, offset);
> +
> +	min_subpage_size = max_t(unsigned int, PAGE_SIZE, min_subpage_size);
> +
> +	return split_huge_page_to_list_to_order(page, NULL,
> +			ilog2(min_subpage_size/PAGE_SIZE)) == 0;
>  }

What if "min_subpage_size" is 1/2 the THP but offset isn't aligned to 1/2?
Splitting the page in half wouldn't result in a page that could be freed,
but maybe splitting to 1/4 would (assuming the THP is at least 8x PAGE_SIZE).