From: Jeremy Fitzhardinge <jeremy@goop.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>,
Linux-MM <linux-mm@kvack.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Nick Piggin <npiggin@kernel.dk>,
Xen-devel <xen-devel@lists.xensource.com>,
Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Subject: [PATCH 2/9] mm: add apply_to_page_range_batch()
Date: Mon, 24 Jan 2011 14:56:00 -0800
Message-ID: <7f635db45f8e921c9203fdfb904d0095b7af6480.1295653400.git.jeremy.fitzhardinge@citrix.com>
In-Reply-To: <cover.1295653400.git.jeremy.fitzhardinge@citrix.com>
From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
apply_to_page_range() calls its callback function once for each pte, which
is pretty inefficient since it will almost always be operating on a batch
of adjacent ptes. apply_to_page_range_batch() calls its callback with
both a pte_t * and a count, so it can operate on multiple ptes at once.

The callback is expected to handle all its ptes, or return an error. For
both apply_to_page_range() and apply_to_page_range_batch(), it is up to
the caller to work out how much progress was made if either fails with
an error.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
include/linux/mm.h | 6 +++++
mm/memory.c | 57 +++++++++++++++++++++++++++++++++++++--------------
2 files changed, 47 insertions(+), 16 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index bb898ec..5a32a8a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1533,6 +1533,12 @@ typedef int (*pte_fn_t)(pte_t *pte, unsigned long addr, void *data);
extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
unsigned long size, pte_fn_t fn, void *data);
+typedef int (*pte_batch_fn_t)(pte_t *pte, unsigned count,
+ unsigned long addr, void *data);
+extern int apply_to_page_range_batch(struct mm_struct *mm,
+ unsigned long address, unsigned long size,
+ pte_batch_fn_t fn, void *data);
+
#ifdef CONFIG_PROC_FS
void vm_stat_account(struct mm_struct *, unsigned long, struct file *, long);
#else
diff --git a/mm/memory.c b/mm/memory.c
index 740470c..496e4e6 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2012,11 +2012,10 @@ EXPORT_SYMBOL(remap_pfn_range);
static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
unsigned long addr, unsigned long end,
- pte_fn_t fn, void *data)
+ pte_batch_fn_t fn, void *data)
{
pte_t *pte;
int err;
- pgtable_t token;
spinlock_t *uninitialized_var(ptl);
pte = (mm == &init_mm) ?
@@ -2028,25 +2027,17 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
BUG_ON(pmd_huge(*pmd));
arch_enter_lazy_mmu_mode();
-
- token = pmd_pgtable(*pmd);
-
- do {
- err = fn(pte++, addr, data);
- if (err)
- break;
- } while (addr += PAGE_SIZE, addr != end);
-
+ err = fn(pte, (end - addr) / PAGE_SIZE, addr, data);
arch_leave_lazy_mmu_mode();
if (mm != &init_mm)
- pte_unmap_unlock(pte-1, ptl);
+ pte_unmap_unlock(pte, ptl);
return err;
}
static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
unsigned long addr, unsigned long end,
- pte_fn_t fn, void *data)
+ pte_batch_fn_t fn, void *data)
{
pmd_t *pmd;
unsigned long next;
@@ -2068,7 +2059,7 @@ static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
static int apply_to_pud_range(struct mm_struct *mm, pgd_t *pgd,
unsigned long addr, unsigned long end,
- pte_fn_t fn, void *data)
+ pte_batch_fn_t fn, void *data)
{
pud_t *pud;
unsigned long next;
@@ -2090,8 +2081,9 @@ static int apply_to_pud_range(struct mm_struct *mm, pgd_t *pgd,
* Scan a region of virtual memory, filling in page tables as necessary
* and calling a provided function on each leaf page table.
*/
-int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
- unsigned long size, pte_fn_t fn, void *data)
+int apply_to_page_range_batch(struct mm_struct *mm,
+ unsigned long addr, unsigned long size,
+ pte_batch_fn_t fn, void *data)
{
pgd_t *pgd;
unsigned long next;
@@ -2109,6 +2101,39 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
return err;
}
+EXPORT_SYMBOL_GPL(apply_to_page_range_batch);
+
+struct pte_single_fn
+{
+ pte_fn_t fn;
+ void *data;
+};
+
+static int apply_pte_batch(pte_t *pte, unsigned count,
+ unsigned long addr, void *data)
+{
+ struct pte_single_fn *single = data;
+ int err = 0;
+
+ while (count--) {
+ err = single->fn(pte, addr, single->data);
+ if (err)
+ break;
+
+ addr += PAGE_SIZE;
+ pte++;
+ }
+
+ return err;
+}
+
+int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
+ unsigned long size, pte_fn_t fn, void *data)
+{
+ struct pte_single_fn single = { .fn = fn, .data = data };
+ return apply_to_page_range_batch(mm, addr, size,
+ apply_pte_batch, &single);
+}
EXPORT_SYMBOL_GPL(apply_to_page_range);
/*
--
1.7.3.4
Thread overview: 14+ messages
2011-01-24 22:55 [PATCH 0/9] Add apply_to_page_range_batch() and use it Jeremy Fitzhardinge
2011-01-24 22:55 ` [PATCH 1/9] mm: remove unused "token" argument from apply_to_page_range callback Jeremy Fitzhardinge
2011-01-24 22:56 ` Jeremy Fitzhardinge [this message]
2011-01-24 22:56 ` [PATCH 3/9] ioremap: use apply_to_page_range_batch() for ioremap_page_range() Jeremy Fitzhardinge
2011-01-24 22:56 ` [PATCH 4/9] vmalloc: use plain pte_clear() for unmaps Jeremy Fitzhardinge
2011-01-24 22:56 ` [PATCH 5/9] vmalloc: use apply_to_page_range_batch() for vunmap_page_range() Jeremy Fitzhardinge
2011-01-24 22:56 ` [PATCH 6/9] vmalloc: use apply_to_page_range_batch() for vmap_page_range_noflush() Jeremy Fitzhardinge
2011-01-24 22:56 ` [PATCH 7/9] vmalloc: use apply_to_page_range_batch() in alloc_vm_area() Jeremy Fitzhardinge
2011-01-24 22:56 ` [PATCH 8/9] xen/mmu: use apply_to_page_range_batch() in xen_remap_domain_mfn_range() Jeremy Fitzhardinge
2011-01-24 22:56 ` [PATCH 9/9] xen/grant-table: use apply_to_page_range_batch() Jeremy Fitzhardinge
2011-01-28 0:18 ` [PATCH 0/9] Add apply_to_page_range_batch() and use it Andrew Morton
-- strict thread matches above, loose matches on Subject: below --
2010-12-15 22:19 Jeremy Fitzhardinge
2010-12-15 22:19 ` [PATCH 2/9] mm: add apply_to_page_range_batch() Jeremy Fitzhardinge
2011-01-10 21:26 ` Konrad Rzeszutek Wilk
2011-01-12 2:15 ` Jeremy Fitzhardinge