From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, hughd@google.com
Subject: [PATCH 09/59] shmem: Convert shmem_writepage() to use a folio throughout
Date: Mon, 8 Aug 2022 20:33:37 +0100
Message-Id: <20220808193430.3378317-10-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220808193430.3378317-1-willy@infradead.org>
References: <20220808193430.3378317-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Even though we will split any large folio that comes in, write the code
to handle large folios so as not to leave a trap for whoever tries to
handle large folios in the swap cache.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
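A note for readers picking up this patch mid-series, kept below the
"---" cut so git-am discards it: this is the large-folio branch as the
first hunk leaves it, with comments added here (they are not in the
patch) spelling out the two subtle steps. It assumes, as holds at this
point in the series, that a successful split_huge_page() leaves 'page'
as the locked head of a new single-page folio.

	if (folio_test_large(folio)) {
		/*
		 * Dirty the whole folio before splitting so that every
		 * subpage created by the split inherits the dirty bit;
		 * per the "Ensure the subpages are still dirty" comment,
		 * i915's shmem_writeback() needs split, still-dirty
		 * pages when swapping.
		 */
		folio_test_set_dirty(folio);
		if (split_huge_page(page) < 0)
			goto redirty;
		/*
		 * The split destroyed the old folio; 'page' is now a
		 * small folio of its own, so re-derive the folio
		 * pointer before touching folio state again.
		 */
		folio = page_folio(page);
		/*
		 * Only this one page proceeds to swap_writepage();
		 * restore the clean state reclaim set up before it
		 * called ->writepage().
		 */
		folio_clear_dirty(folio);
	}

The same page-to-folio generalization appears in the !uptodate path:
clear_highpage() + flush_dcache_page() on one page become
folio_zero_range(folio, 0, folio_size(folio)) + flush_dcache_folio(),
which zero the whole folio and so stay correct if a large folio ever
survives to that point.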
 mm/shmem.c | 47 ++++++++++++++++++++++++-----------------------
 1 file changed, 24 insertions(+), 23 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index de267a23248d..bcd3644eb9c3 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1328,17 +1328,18 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	 * "force", drivers/gpu/drm/i915/gem/i915_gem_shmem.c gets huge pages,
 	 * and its shmem_writeback() needs them to be split when swapping.
 	 */
-	if (PageTransCompound(page)) {
+	if (folio_test_large(folio)) {
 		/* Ensure the subpages are still dirty */
-		SetPageDirty(page);
+		folio_test_set_dirty(folio);
 		if (split_huge_page(page) < 0)
 			goto redirty;
-		ClearPageDirty(page);
+		folio = page_folio(page);
+		folio_clear_dirty(folio);
 	}
 
-	BUG_ON(!PageLocked(page));
-	mapping = page->mapping;
-	index = page->index;
+	BUG_ON(!folio_test_locked(folio));
+	mapping = folio->mapping;
+	index = folio->index;
 	inode = mapping->host;
 	info = SHMEM_I(inode);
 	if (info->flags & VM_LOCKED)
@@ -1361,15 +1362,15 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	/*
 	 * This is somewhat ridiculous, but without plumbing a SWAP_MAP_FALLOC
 	 * value into swapfile.c, the only way we can correctly account for a
-	 * fallocated page arriving here is now to initialize it and write it.
+	 * fallocated folio arriving here is now to initialize it and write it.
 	 *
-	 * That's okay for a page already fallocated earlier, but if we have
+	 * That's okay for a folio already fallocated earlier, but if we have
 	 * not yet completed the fallocation, then (a) we want to keep track
-	 * of this page in case we have to undo it, and (b) it may not be a
+	 * of this folio in case we have to undo it, and (b) it may not be a
 	 * good idea to continue anyway, once we're pushing into swap. So
-	 * reactivate the page, and let shmem_fallocate() quit when too many.
+	 * reactivate the folio, and let shmem_fallocate() quit when too many.
 	 */
-	if (!PageUptodate(page)) {
+	if (!folio_test_uptodate(folio)) {
 		if (inode->i_private) {
 			struct shmem_falloc *shmem_falloc;
 			spin_lock(&inode->i_lock);
@@ -1385,9 +1386,9 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 			if (shmem_falloc)
 				goto redirty;
 		}
-		clear_highpage(page);
-		flush_dcache_page(page);
-		SetPageUptodate(page);
+		folio_zero_range(folio, 0, folio_size(folio));
+		flush_dcache_folio(folio);
+		folio_mark_uptodate(folio);
 	}
 
 	swap = folio_alloc_swap(folio);
@@ -1396,7 +1397,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 
 	/*
 	 * Add inode to shmem_unuse()'s list of swapped-out inodes,
-	 * if it's not already there. Do it now before the page is
+	 * if it's not already there. Do it now before the folio is
 	 * moved to swap cache, when its pagelock no longer protects
 	 * the inode from eviction. But don't unlock the mutex until
 	 * we've incremented swapped, because shmem_unuse_inode() will
@@ -1406,7 +1407,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	if (list_empty(&info->swaplist))
 		list_add(&info->swaplist, &shmem_swaplist);
 
-	if (add_to_swap_cache(page, swap,
+	if (add_to_swap_cache(&folio->page, swap,
 			__GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN,
 			NULL) == 0) {
 		spin_lock_irq(&info->lock);
@@ -1415,21 +1416,21 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 		spin_unlock_irq(&info->lock);
 
 		swap_shmem_alloc(swap);
-		shmem_delete_from_page_cache(page, swp_to_radix_entry(swap));
+		shmem_delete_from_page_cache(&folio->page, swp_to_radix_entry(swap));
 
 		mutex_unlock(&shmem_swaplist_mutex);
-		BUG_ON(page_mapped(page));
-		swap_writepage(page, wbc);
+		BUG_ON(folio_mapped(folio));
+		swap_writepage(&folio->page, wbc);
 		return 0;
 	}
 
 	mutex_unlock(&shmem_swaplist_mutex);
-	put_swap_page(page, swap);
+	put_swap_page(&folio->page, swap);
 redirty:
-	set_page_dirty(page);
+	folio_mark_dirty(folio);
 	if (wbc->for_reclaim)
-		return AOP_WRITEPAGE_ACTIVATE;	/* Return with page locked */
-	unlock_page(page);
+		return AOP_WRITEPAGE_ACTIVATE;	/* Return with folio locked */
+	folio_unlock(folio);
 	return 0;
 }
 
-- 
2.35.1