From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH v2 17/57] shmem: Convert shmem_mfill_atomic_pte() to use a folio
Date: Fri, 2 Sep 2022 20:46:13 +0100
Message-Id: <20220902194653.1739778-18-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220902194653.1739778-1-willy@infradead.org>
References: <20220902194653.1739778-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Assert that this is a single-page folio as there are several assumptions
in here that it's exactly PAGE_SIZE bytes large.  Saves several calls to
compound_head() and removes the last caller of shmem_alloc_page().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c | 45 +++++++++++++++++++--------------------------
 1 file changed, 19 insertions(+), 26 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 56cabf9bb947..8754e2b4800a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2374,12 +2374,6 @@ static struct inode *shmem_get_inode(struct super_block *sb, struct inode *dir,
 }
 
 #ifdef CONFIG_USERFAULTFD
-static struct page *shmem_alloc_page(gfp_t gfp,
-			struct shmem_inode_info *info, pgoff_t index)
-{
-	return &shmem_alloc_folio(gfp, info, index)->page;
-}
-
 int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 			   pmd_t *dst_pmd,
 			   struct vm_area_struct *dst_vma,
@@ -2395,7 +2389,6 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
 	void *page_kaddr;
 	struct folio *folio;
-	struct page *page;
 	int ret;
 	pgoff_t max_off;
 
@@ -2414,53 +2407,53 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 
 	if (!*pagep) {
 		ret = -ENOMEM;
-		page = shmem_alloc_page(gfp, info, pgoff);
-		if (!page)
+		folio = shmem_alloc_folio(gfp, info, pgoff);
+		if (!folio)
 			goto out_unacct_blocks;
 
 		if (!zeropage) {	/* COPY */
-			page_kaddr = kmap_atomic(page);
+			page_kaddr = kmap_local_folio(folio, 0);
 			ret = copy_from_user(page_kaddr,
 					     (const void __user *)src_addr,
 					     PAGE_SIZE);
-			kunmap_atomic(page_kaddr);
+			kunmap_local(page_kaddr);
 
 			/* fallback to copy_from_user outside mmap_lock */
 			if (unlikely(ret)) {
-				*pagep = page;
+				*pagep = &folio->page;
 				ret = -ENOENT;
 				/* don't free the page */
 				goto out_unacct_blocks;
 			}
 
-			flush_dcache_page(page);
+			flush_dcache_folio(folio);
 		} else {		/* ZEROPAGE */
-			clear_user_highpage(page, dst_addr);
+			clear_user_highpage(&folio->page, dst_addr);
 		}
 	} else {
-		page = *pagep;
+		folio = page_folio(*pagep);
+		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
 		*pagep = NULL;
 	}
 
-	VM_BUG_ON(PageLocked(page));
-	VM_BUG_ON(PageSwapBacked(page));
-	__SetPageLocked(page);
-	__SetPageSwapBacked(page);
-	__SetPageUptodate(page);
+	VM_BUG_ON(folio_test_locked(folio));
+	VM_BUG_ON(folio_test_swapbacked(folio));
+	__folio_set_locked(folio);
+	__folio_set_swapbacked(folio);
+	__folio_mark_uptodate(folio);
 
 	ret = -EFAULT;
 	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
 	if (unlikely(pgoff >= max_off))
 		goto out_release;
 
-	folio = page_folio(page);
 	ret = shmem_add_to_page_cache(folio, mapping, pgoff, NULL,
 				      gfp & GFP_RECLAIM_MASK, dst_mm);
 	if (ret)
 		goto out_release;
 
 	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
-				       page, true, wp_copy);
+				       &folio->page, true, wp_copy);
 	if (ret)
 		goto out_delete_from_cache;
 
@@ -2470,13 +2463,13 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	shmem_recalc_inode(inode);
 	spin_unlock_irq(&info->lock);
 
-	unlock_page(page);
+	folio_unlock(folio);
 	return 0;
 out_delete_from_cache:
-	delete_from_page_cache(page);
+	filemap_remove_folio(folio);
 out_release:
-	unlock_page(page);
-	put_page(page);
+	folio_unlock(folio);
+	folio_put(folio);
 out_unacct_blocks:
 	shmem_inode_unacct_blocks(inode, 1);
 	return ret;
-- 
2.35.1
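
For reference, the COPY branch above boils down to the following single-page-folio
pattern. This is an illustrative sketch only, not part of the patch;
shmem_copy_folio_from_user() is a hypothetical helper name, while the folio APIs it
uses (kmap_local_folio(), kunmap_local(), flush_dcache_folio(), VM_BUG_ON_FOLIO())
are the ones the patch switches to.

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/uaccess.h>

/*
 * Hypothetical helper, for illustration only: copy PAGE_SIZE bytes of
 * user data into an order-0 folio, mirroring the COPY branch above.
 */
static int shmem_copy_folio_from_user(struct folio *folio,
				      const void __user *src_addr)
{
	void *kaddr;
	int ret;

	/* The conversion assumes a single-page folio throughout. */
	VM_BUG_ON_FOLIO(folio_test_large(folio), folio);

	kaddr = kmap_local_folio(folio, 0);	/* was kmap_atomic(page) */
	ret = copy_from_user(kaddr, src_addr, PAGE_SIZE);
	kunmap_local(kaddr);			/* was kunmap_atomic() */
	if (ret)
		return -EFAULT;

	flush_dcache_folio(folio);		/* was flush_dcache_page(page) */
	return 0;
}

The patch itself returns -ENOENT on a partial copy so the caller can retry the
copy_from_user() outside mmap_lock; the sketch collapses that to -EFAULT for brevity.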