From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
intel-gfx@lists.freedesktop.org, linux-afs@lists.infradead.org,
linux-nfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
netdev@vger.kernel.org
Subject: [PATCH 08/13] i915: Convert i915_gpu_error to use a folio_batch
Date: Wed, 21 Jun 2023 17:45:52 +0100 [thread overview]
Message-ID: <20230621164557.3510324-9-willy@infradead.org> (raw)
In-Reply-To: <20230621164557.3510324-1-willy@infradead.org>

Remove one of the last remaining users of pagevec.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 drivers/gpu/drm/i915/i915_gpu_error.c | 50 +++++++++++++--------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index ec368e700235..0c38bfb60c9a 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -187,64 +187,64 @@ i915_error_printer(struct drm_i915_error_state_buf *e)
 }
 
 /* single threaded page allocator with a reserved stash for emergencies */
-static void pool_fini(struct pagevec *pv)
+static void pool_fini(struct folio_batch *fbatch)
 {
-	pagevec_release(pv);
+	folio_batch_release(fbatch);
 }
 
-static int pool_refill(struct pagevec *pv, gfp_t gfp)
+static int pool_refill(struct folio_batch *fbatch, gfp_t gfp)
 {
-	while (pagevec_space(pv)) {
-		struct page *p;
+	while (folio_batch_space(fbatch)) {
+		struct folio *folio;
 
-		p = alloc_page(gfp);
-		if (!p)
+		folio = folio_alloc(gfp, 0);
+		if (!folio)
 			return -ENOMEM;
 
-		pagevec_add(pv, p);
+		folio_batch_add(fbatch, folio);
 	}
 
 	return 0;
 }
 
-static int pool_init(struct pagevec *pv, gfp_t gfp)
+static int pool_init(struct folio_batch *fbatch, gfp_t gfp)
 {
 	int err;
 
-	pagevec_init(pv);
+	folio_batch_init(fbatch);
 
-	err = pool_refill(pv, gfp);
+	err = pool_refill(fbatch, gfp);
 	if (err)
-		pool_fini(pv);
+		pool_fini(fbatch);
 
 	return err;
 }
 
-static void *pool_alloc(struct pagevec *pv, gfp_t gfp)
+static void *pool_alloc(struct folio_batch *fbatch, gfp_t gfp)
 {
-	struct page *p;
+	struct folio *folio;
 
-	p = alloc_page(gfp);
-	if (!p && pagevec_count(pv))
-		p = pv->pages[--pv->nr];
+	folio = folio_alloc(gfp, 0);
+	if (!folio && folio_batch_count(fbatch))
+		folio = fbatch->folios[--fbatch->nr];
 
-	return p ? page_address(p) : NULL;
+	return folio ? folio_address(folio) : NULL;
 }
 
-static void pool_free(struct pagevec *pv, void *addr)
+static void pool_free(struct folio_batch *fbatch, void *addr)
 {
-	struct page *p = virt_to_page(addr);
+	struct folio *folio = virt_to_folio(addr);
 
-	if (pagevec_space(pv))
-		pagevec_add(pv, p);
+	if (folio_batch_space(fbatch))
+		folio_batch_add(fbatch, folio);
 	else
-		__free_page(p);
+		folio_put(folio);
 }
 
 #ifdef CONFIG_DRM_I915_COMPRESS_ERROR
 
 struct i915_vma_compress {
-	struct pagevec pool;
+	struct folio_batch pool;
 	struct z_stream_s zstream;
 	void *tmp;
 };
@@ -381,7 +381,7 @@ static void err_compression_marker(struct drm_i915_error_state_buf *m)
 #else
 
 struct i915_vma_compress {
-	struct pagevec pool;
+	struct folio_batch pool;
 };
 
 static bool compress_init(struct i915_vma_compress *c)
--
2.39.2
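
For readers less familiar with the folio_batch API, the sketch below (not
part of the patch; emergency_pool_demo() and its comments are editorial,
and it assumes a sleepable kernel context) strings together the same calls
the converted pool helpers use, covering the whole lifecycle: init, refill
the reserved stash, allocate with a fallback to the stash, hand a buffer
back, and release.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/pagevec.h>	/* struct folio_batch and its helpers */

int emergency_pool_demo(void)
{
	struct folio_batch pool;
	struct folio *folio;
	void *addr;

	folio_batch_init(&pool);	/* empty batch, nothing stashed yet */

	/* Refill: stash order-0 folios until the batch is full. */
	while (folio_batch_space(&pool)) {
		folio = folio_alloc(GFP_KERNEL, 0);
		if (!folio) {
			folio_batch_release(&pool);	/* put whatever was stashed */
			return -ENOMEM;
		}
		folio_batch_add(&pool, folio);
	}

	/* Allocate: prefer a fresh folio, fall back to the stash. */
	folio = folio_alloc(GFP_ATOMIC, 0);
	if (!folio && folio_batch_count(&pool))
		folio = pool.folios[--pool.nr];
	addr = folio ? folio_address(folio) : NULL;

	/* Free: return the folio to the stash if there is room, else drop it. */
	if (addr) {
		folio = virt_to_folio(addr);
		if (folio_batch_space(&pool))
			folio_batch_add(&pool, folio);
		else
			folio_put(folio);
	}

	folio_batch_release(&pool);	/* puts every folio still in the batch */
	return 0;
}

The one behavioural nuance worth noting is that folio_batch_release() both
puts the stashed folios and resets the batch, which is why the converted
pool_fini() needs nothing beyond that single call.
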
Thread overview: 20+ messages
2023-06-21 16:45 [PATCH 00/13] Remove pagevecs Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 01/13] afs: Convert pagevec to folio_batch in afs_extend_writeback() Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 02/13] mm: Add __folio_batch_release() Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 03/13] scatterlist: Add sg_set_folio() Matthew Wilcox (Oracle)
2023-07-30 11:01 ` Zhu Yanjun
2023-07-30 11:18 ` Matthew Wilcox
2023-07-30 13:57 ` Zhu Yanjun
2023-07-30 21:42 ` Matthew Wilcox
2023-08-18 7:05 ` Zhu Yanjun
2023-06-21 16:45 ` [PATCH 04/13] i915: Convert shmem_sg_free_table() to use a folio_batch Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 05/13] drm: Convert drm_gem_put_pages() " Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 06/13] mm: Remove check_move_unevictable_pages() Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 07/13] pagevec: Rename fbatch_count() Matthew Wilcox (Oracle)
2023-06-21 16:45 ` Matthew Wilcox (Oracle) [this message]
2023-06-21 16:45 ` [PATCH 09/13] net: Convert sunrpc from pagevec to folio_batch Matthew Wilcox (Oracle)
2023-06-21 17:50 ` Chuck Lever
2023-06-21 16:45 ` [PATCH 10/13] mm: Remove struct pagevec Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 11/13] mm: Rename invalidate_mapping_pagevec to mapping_try_invalidate Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 12/13] mm: Remove references to pagevec Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 13/13] mm: Remove unnecessary pagevec includes Matthew Wilcox (Oracle)