From: "Matthew Wilcox (Oracle)"
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH v2 16/26] mm/shmem: Use a folio in shmem_unused_huge_shrink
Date: Wed, 4 May 2022 19:28:47 +0100
Message-Id: <20220504182857.4013401-17-willy@infradead.org>
In-Reply-To: <20220504182857.4013401-1-willy@infradead.org>
References: <20220504182857.4013401-1-willy@infradead.org>

When calling split_huge_page() we usually have to find the precise page,
but that's not necessary here because we only need to unlock and put the
folio afterwards.  Saves 231 bytes of text (20% of this function).
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/shmem.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 85c23696efc6..3461bdec6b38 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -553,7 +553,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 	LIST_HEAD(to_remove);
 	struct inode *inode;
 	struct shmem_inode_info *info;
-	struct page *page;
+	struct folio *folio;
 	unsigned long batch = sc ? sc->nr_to_scan : 128;
 	int split = 0;
 
@@ -597,6 +597,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 
 	list_for_each_safe(pos, next, &list) {
 		int ret;
+		pgoff_t index;
 
 		info = list_entry(pos, struct shmem_inode_info, shrinklist);
 		inode = &info->vfs_inode;
@@ -604,14 +605,14 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 		if (nr_to_split && split >= nr_to_split)
 			goto move_back;
 
-		page = find_get_page(inode->i_mapping,
-				(inode->i_size & HPAGE_PMD_MASK) >> PAGE_SHIFT);
-		if (!page)
+		index = (inode->i_size & HPAGE_PMD_MASK) >> PAGE_SHIFT;
+		folio = filemap_get_folio(inode->i_mapping, index);
+		if (!folio)
 			goto drop;
 
 		/* No huge page at the end of the file: nothing to split */
-		if (!PageTransHuge(page)) {
-			put_page(page);
+		if (!folio_test_large(folio)) {
+			folio_put(folio);
 			goto drop;
 		}
 
@@ -622,14 +623,14 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 		 * Waiting for the lock may lead to deadlock in the
 		 * reclaim path.
		 */
-		if (!trylock_page(page)) {
-			put_page(page);
+		if (!folio_trylock(folio)) {
+			folio_put(folio);
 			goto move_back;
 		}
 
-		ret = split_huge_page(page);
-		unlock_page(page);
-		put_page(page);
+		ret = split_huge_page(&folio->page);
+		folio_unlock(folio);
+		folio_put(folio);
 
 		/* If split failed move the inode on the list back to shrinklist */
 		if (ret)
-- 
2.34.1
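For readers less familiar with the folio API, a minimal sketch of the lookup-and-split pattern the patch adopts, pulled out of the shrinker loop, is shown below. The helper name example_split_tail_folio and its return-code choices are hypothetical, and the headers named in the comment are assumptions about the kernel build context; the diff above is the authoritative change.

```c
/*
 * Illustrative sketch only, not part of the patch. Assumes a kernel
 * context with <linux/pagemap.h> and <linux/huge_mm.h> available.
 */
static int example_split_tail_folio(struct inode *inode)
{
	/* Index of the page straddling the PMD-aligned end of the file. */
	pgoff_t index = (inode->i_size & HPAGE_PMD_MASK) >> PAGE_SHIFT;
	struct folio *folio;
	int ret;

	/* Any folio covering this index will do; no precise page is needed. */
	folio = filemap_get_folio(inode->i_mapping, index);
	if (!folio)
		return 0;	/* nothing cached at the end of the file */

	/* A small folio has no huge page to split. */
	if (!folio_test_large(folio)) {
		folio_put(folio);
		return 0;
	}

	/* Trylock only: sleeping on the lock could deadlock in reclaim. */
	if (!folio_trylock(folio)) {
		folio_put(folio);
		return -EBUSY;
	}

	/*
	 * split_huge_page() still takes a struct page; passing the head page
	 * is fine here because all we do afterwards is unlock and put the
	 * folio, which is the point the commit message makes.
	 */
	ret = split_huge_page(&folio->page);
	folio_unlock(folio);
	folio_put(folio);

	return ret;
}
```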