From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH v2 08/57] shmem: Convert shmem_replace_page() to use folios throughout
Date: Fri, 2 Sep 2022 20:46:04 +0100
Message-Id: <20220902194653.1739778-9-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220902194653.1739778-1-willy@infradead.org>
References: <20220902194653.1739778-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce folio_set_swap_entry() to abstract how both folio->private
and swp_entry_t work.  Use swap_address_space() directly instead of
indirecting through folio_mapping().  Include an assertion that the
old folio is not large as we only allocate a single-page folio to
replace it.  Use folio_put_refs() instead of calling folio_put() twice.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/swap.h |  5 ++++
 mm/shmem.c           | 67 +++++++++++++++++++++----------------------
 2 files changed, 37 insertions(+), 35 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 333d5588dc2d..afcb76bbd141 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -351,6 +351,11 @@ static inline swp_entry_t folio_swap_entry(struct folio *folio)
 	return entry;
 }
 
+static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
+{
+	folio->private = (void *)entry.val;
+}
+
 /* linux/mm/workingset.c */
 void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
 void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg);
diff --git a/mm/shmem.c b/mm/shmem.c
index 9e851fc87601..4113f1b9d4a8 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1560,12 +1560,6 @@ static struct folio *shmem_alloc_folio(gfp_t gfp,
 	return folio;
 }
 
-static struct page *shmem_alloc_page(gfp_t gfp,
-			struct shmem_inode_info *info, pgoff_t index)
-{
-	return &shmem_alloc_folio(gfp, info, index)->page;
-}
-
 static struct folio *shmem_alloc_and_acct_folio(gfp_t gfp, struct inode *inode,
 		pgoff_t index, bool huge)
 {
@@ -1617,51 +1611,49 @@ static bool shmem_should_replace_folio(struct folio *folio, gfp_t gfp)
 static int shmem_replace_page(struct page **pagep, gfp_t gfp,
 				struct shmem_inode_info *info, pgoff_t index)
 {
-	struct page *oldpage, *newpage;
 	struct folio *old, *new;
 	struct address_space *swap_mapping;
 	swp_entry_t entry;
 	pgoff_t swap_index;
 	int error;
 
-	oldpage = *pagep;
-	entry.val = page_private(oldpage);
+	old = page_folio(*pagep);
+	entry = folio_swap_entry(old);
 	swap_index = swp_offset(entry);
-	swap_mapping = page_mapping(oldpage);
+	swap_mapping = swap_address_space(entry);
 
 	/*
 	 * We have arrived here because our zones are constrained, so don't
 	 * limit chance of success by further cpuset and node constraints.
 	 */
 	gfp &= ~GFP_CONSTRAINT_MASK;
-	newpage = shmem_alloc_page(gfp, info, index);
-	if (!newpage)
+	VM_BUG_ON_FOLIO(folio_test_large(old), old);
+	new = shmem_alloc_folio(gfp, info, index);
+	if (!new)
 		return -ENOMEM;
 
-	get_page(newpage);
-	copy_highpage(newpage, oldpage);
-	flush_dcache_page(newpage);
+	folio_get(new);
+	folio_copy(new, old);
+	flush_dcache_folio(new);
 
-	__SetPageLocked(newpage);
-	__SetPageSwapBacked(newpage);
-	SetPageUptodate(newpage);
-	set_page_private(newpage, entry.val);
-	SetPageSwapCache(newpage);
+	__folio_set_locked(new);
+	__folio_set_swapbacked(new);
+	folio_mark_uptodate(new);
+	folio_set_swap_entry(new, entry);
+	folio_set_swapcache(new);
 
 	/*
 	 * Our caller will very soon move newpage out of swapcache, but it's
 	 * a nice clean interface for us to replace oldpage by newpage there.
 	 */
 	xa_lock_irq(&swap_mapping->i_pages);
-	error = shmem_replace_entry(swap_mapping, swap_index, oldpage, newpage);
+	error = shmem_replace_entry(swap_mapping, swap_index, old, new);
 	if (!error) {
-		old = page_folio(oldpage);
-		new = page_folio(newpage);
 		mem_cgroup_migrate(old, new);
-		__inc_lruvec_page_state(newpage, NR_FILE_PAGES);
-		__inc_lruvec_page_state(newpage, NR_SHMEM);
-		__dec_lruvec_page_state(oldpage, NR_FILE_PAGES);
-		__dec_lruvec_page_state(oldpage, NR_SHMEM);
+		__lruvec_stat_mod_folio(new, NR_FILE_PAGES, 1);
+		__lruvec_stat_mod_folio(new, NR_SHMEM, 1);
+		__lruvec_stat_mod_folio(old, NR_FILE_PAGES, -1);
+		__lruvec_stat_mod_folio(old, NR_SHMEM, -1);
 	}
 	xa_unlock_irq(&swap_mapping->i_pages);
 
@@ -1671,18 +1663,17 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
 		 * both PageSwapCache and page_private after getting page lock;
 		 * but be defensive.  Reverse old to newpage for clear and free.
 		 */
-		oldpage = newpage;
+		old = new;
 	} else {
-		lru_cache_add(newpage);
-		*pagep = newpage;
+		folio_add_lru(new);
+		*pagep = &new->page;
 	}
 
-	ClearPageSwapCache(oldpage);
-	set_page_private(oldpage, 0);
+	folio_clear_swapcache(old);
+	old->private = NULL;
 
-	unlock_page(oldpage);
-	put_page(oldpage);
-	put_page(oldpage);
+	folio_unlock(old);
+	folio_put_refs(old, 2);
 
 	return error;
 }
@@ -2383,6 +2374,12 @@ static struct inode *shmem_get_inode(struct super_block *sb, struct inode *dir,
 }
 
 #ifdef CONFIG_USERFAULTFD
+static struct page *shmem_alloc_page(gfp_t gfp,
+			struct shmem_inode_info *info, pgoff_t index)
+{
+	return &shmem_alloc_folio(gfp, info, index)->page;
+}
+
 int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 			   pmd_t *dst_pmd,
 			   struct vm_area_struct *dst_vma,
-- 
2.35.1
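
For readers new to the folio API, the two ideas the commit message leans on
can be illustrated with a minimal userspace sketch.  The struct and helpers
below are simplified stand-ins, not the kernel definitions (the real
folio_set_swap_entry() is the one added to include/linux/swap.h above):
folio_set_swap_entry() hides the detail that a swapcache folio keeps its
swp_entry_t in folio->private, and folio_put_refs() drops several references
in one step rather than calling folio_put() repeatedly, which is why the two
put_page(oldpage) calls collapse into folio_put_refs(old, 2).

/*
 * Minimal sketch, NOT kernel code: simplified stand-ins for struct folio,
 * swp_entry_t and the two helpers discussed above, only to show what they
 * abstract.
 */
#include <assert.h>
#include <stdio.h>

typedef struct { unsigned long val; } swp_entry_t;

struct folio {
	void *private;	/* the swap entry is stashed here while in the swapcache */
	int refcount;	/* stand-in for the real folio reference count */
};

/* Mirrors the helper added by this patch: hide the ->private encoding. */
static void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
{
	folio->private = (void *)entry.val;
}

static swp_entry_t folio_swap_entry(struct folio *folio)
{
	swp_entry_t entry = { .val = (unsigned long)folio->private };
	return entry;
}

/* Drop @refs references at once instead of @refs separate folio_put() calls. */
static void folio_put_refs(struct folio *folio, int refs)
{
	folio->refcount -= refs;
	if (folio->refcount == 0)
		printf("folio would be freed here\n");
}

int main(void)
{
	struct folio old = { .private = NULL, .refcount = 2 };
	swp_entry_t entry = { .val = 0x1234 };

	folio_set_swap_entry(&old, entry);
	assert(folio_swap_entry(&old).val == 0x1234);

	/* shmem_replace_page() holds two references on the old folio ... */
	folio_put_refs(&old, 2);	/* ... and releases both in one call */
	return 0;
}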