* [PATCH]: 3/4 mm/rmap.c cleanup
From: Nikita Danilov @ 2004-11-21 15:44 UTC
To: Linux Kernel Mailing List; +Cc: Andrew Morton, Linux MM Mailing List
mm/rmap.c:page_referenced_one() and mm/rmap.c:try_to_unmap_one() contain
identical code that
- takes mm->page_table_lock;
- drills through page tables;
- checks that correct pte is reached.
Coalesce this into page_check_address().
(Patch is for 2.6.10-rc2)
Signed-off-by: Nikita Danilov <nikita@clusterfs.com>
mm/rmap.c | 95 +++++++++++++++++++++++++++-----------------------------------
1 files changed, 42 insertions(+), 53 deletions(-)
diff -puN mm/rmap.c~rmap-cleanup mm/rmap.c
--- bk-linux/mm/rmap.c~rmap-cleanup 2004-11-21 17:01:03.038470288 +0300
+++ bk-linux-nikita/mm/rmap.c 2004-11-21 17:01:03.041469832 +0300
@@ -250,6 +250,34 @@ unsigned long page_address_in_vma(struct
}
/*
+ * Check that @page is mapped at @address into @mm.
+ *
+ * On success returns with mapped pte and locked mm->page_table_lock.
+ */
+static inline pte_t *page_check_address(struct page *page, struct mm_struct *mm,
+ unsigned long address)
+{
+ pgd_t *pgd;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ spin_lock(&mm->page_table_lock);
+ pgd = pgd_offset(mm, address);
+ if (likely(pgd_present(*pgd))) {
+ pmd = pmd_offset(pgd, address);
+ if (likely(pmd_present(*pmd))) {
+ pte = pte_offset_map(pmd, address);
+ if (likely(pte_present(*pte) &&
+ page_to_pfn(page) == pte_pfn(*pte)))
+ return pte;
+ pte_unmap(pte);
+ }
+ }
+ spin_unlock(&mm->page_table_lock);
+ return ERR_PTR(-ENOENT);
+}
+
+/*
* Subfunctions of page_referenced: page_referenced_one called
* repeatedly from either page_referenced_anon or page_referenced_file.
*/
@@ -258,8 +286,6 @@ static int page_referenced_one(struct pa
{
struct mm_struct *mm = vma->vm_mm;
unsigned long address;
- pgd_t *pgd;
- pmd_t *pmd;
pte_t *pte;
int referenced = 0;
@@ -269,35 +295,18 @@ static int page_referenced_one(struct pa
if (address == -EFAULT)
goto out;
- spin_lock(&mm->page_table_lock);
-
- pgd = pgd_offset(mm, address);
- if (!pgd_present(*pgd))
- goto out_unlock;
-
- pmd = pmd_offset(pgd, address);
- if (!pmd_present(*pmd))
- goto out_unlock;
-
- pte = pte_offset_map(pmd, address);
- if (!pte_present(*pte))
- goto out_unmap;
-
- if (page_to_pfn(page) != pte_pfn(*pte))
- goto out_unmap;
-
- if (ptep_clear_flush_young(vma, address, pte))
- referenced++;
-
- if (mm != current->mm && !ignore_token && has_swap_token(mm))
- referenced++;
+ pte = page_check_address(page, mm, address);
+ if (!IS_ERR(pte)) {
+ if (ptep_clear_flush_young(vma, address, pte))
+ referenced++;
- (*mapcount)--;
+ if (mm != current->mm && !ignore_token && has_swap_token(mm))
+ referenced++;
-out_unmap:
- pte_unmap(pte);
-out_unlock:
- spin_unlock(&mm->page_table_lock);
+ (*mapcount)--;
+ pte_unmap(pte);
+ spin_unlock(&mm->page_table_lock);
+ }
out:
return referenced;
}
@@ -501,8 +510,6 @@ static int try_to_unmap_one(struct page
{
struct mm_struct *mm = vma->vm_mm;
unsigned long address;
- pgd_t *pgd;
- pmd_t *pmd;
pte_t *pte;
pte_t pteval;
int ret = SWAP_AGAIN;
@@ -513,26 +520,9 @@ static int try_to_unmap_one(struct page
if (address == -EFAULT)
goto out;
- /*
- * We need the page_table_lock to protect us from page faults,
- * munmap, fork, etc...
- */
- spin_lock(&mm->page_table_lock);
-
- pgd = pgd_offset(mm, address);
- if (!pgd_present(*pgd))
- goto out_unlock;
-
- pmd = pmd_offset(pgd, address);
- if (!pmd_present(*pmd))
- goto out_unlock;
-
- pte = pte_offset_map(pmd, address);
- if (!pte_present(*pte))
- goto out_unmap;
-
- if (page_to_pfn(page) != pte_pfn(*pte))
- goto out_unmap;
+ pte = page_check_address(page, mm, address);
+ if (IS_ERR(pte))
+
* Re: [PATCH]: 3/4 mm/rmap.c cleanup
From: Nikita Danilov @ 2004-11-21 16:14 UTC
To: Linux Kernel Mailing List; +Cc: Andrew Morton, Linux MM Mailing List
[-- Attachment #1: message body text --]
[-- Type: text/plain, Size: 387 bytes --]
Nikita Danilov <nikita@clusterfs.com> writes:
> Nikita Danilov <nikita@clusterfs.com> writes:
>
>> identical code that
>
> Hmm... hungry grues everywhere. The first lines should have been:
>
> mm/rmap.c:page_referenced_one() and mm/rmap.c:try_to_unmap_one() contain
> identical code that
>
> Patch is also cut. Trying again, this time attached.
This time for sure, I promise.
Nikita.
[-- Attachment #2: rmap-cleanup.patch --]
[-- Type: text/plain, Size: 3998 bytes --]
mm/rmap.c:page_referenced_one() and mm/rmap.c:try_to_unmap_one() contain
identical code that
- takes mm->page_table_lock;
- drills through page tables;
- checks that correct pte is reached.
Coalesce this into page_check_address().
mm/rmap.c | 95 +++++++++++++++++++++++++++-----------------------------------
1 files changed, 42 insertions(+), 53 deletions(-)
diff -puN mm/rmap.c~rmap-cleanup mm/rmap.c
--- bk-linux/mm/rmap.c~rmap-cleanup 2004-11-21 18:59:59.759523776 +0300
+++ bk-linux-nikita/mm/rmap.c 2004-11-21 18:59:59.761523472 +0300
@@ -250,6 +250,34 @@ unsigned long page_address_in_vma(struct
}
/*
+ * Check that @page is mapped at @address into @mm.
+ *
+ * On success returns with mapped pte and locked mm->page_table_lock.
+ */
+static inline pte_t *page_check_address(struct page *page, struct mm_struct *mm,
+ unsigned long address)
+{
+ pgd_t *pgd;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ spin_lock(&mm->page_table_lock);
+ pgd = pgd_offset(mm, address);
+ if (likely(pgd_present(*pgd))) {
+ pmd = pmd_offset(pgd, address);
+ if (likely(pmd_present(*pmd))) {
+ pte = pte_offset_map(pmd, address);
+ if (likely(pte_present(*pte) &&
+ page_to_pfn(page) == pte_pfn(*pte)))
+ return pte;
+ pte_unmap(pte);
+ }
+ }
+ spin_unlock(&mm->page_table_lock);
+ return ERR_PTR(-ENOENT);
+}
+
+/*
* Subfunctions of page_referenced: page_referenced_one called
* repeatedly from either page_referenced_anon or page_referenced_file.
*/
@@ -258,8 +286,6 @@ static int page_referenced_one(struct pa
{
struct mm_struct *mm = vma->vm_mm;
unsigned long address;
- pgd_t *pgd;
- pmd_t *pmd;
pte_t *pte;
int referenced = 0;
@@ -269,35 +295,18 @@ static int page_referenced_one(struct pa
if (address == -EFAULT)
goto out;
- spin_lock(&mm->page_table_lock);
-
- pgd = pgd_offset(mm, address);
- if (!pgd_present(*pgd))
- goto out_unlock;
-
- pmd = pmd_offset(pgd, address);
- if (!pmd_present(*pmd))
- goto out_unlock;
-
- pte = pte_offset_map(pmd, address);
- if (!pte_present(*pte))
- goto out_unmap;
-
- if (page_to_pfn(page) != pte_pfn(*pte))
- goto out_unmap;
-
- if (ptep_clear_flush_young(vma, address, pte))
- referenced++;
-
- if (mm != current->mm && !ignore_token && has_swap_token(mm))
- referenced++;
+ pte = page_check_address(page, mm, address);
+ if (!IS_ERR(pte)) {
+ if (ptep_clear_flush_young(vma, address, pte))
+ referenced++;
- (*mapcount)--;
+ if (mm != current->mm && !ignore_token && has_swap_token(mm))
+ referenced++;
-out_unmap:
- pte_unmap(pte);
-out_unlock:
- spin_unlock(&mm->page_table_lock);
+ (*mapcount)--;
+ pte_unmap(pte);
+ spin_unlock(&mm->page_table_lock);
+ }
out:
return referenced;
}
@@ -501,8 +510,6 @@ static int try_to_unmap_one(struct page
{
struct mm_struct *mm = vma->vm_mm;
unsigned long address;
- pgd_t *pgd;
- pmd_t *pmd;
pte_t *pte;
pte_t pteval;
int ret = SWAP_AGAIN;
@@ -513,26 +520,9 @@ static int try_to_unmap_one(struct page
if (address == -EFAULT)
goto out;
- /*
- * We need the page_table_lock to protect us from page faults,
- * munmap, fork, etc...
- */
- spin_lock(&mm->page_table_lock);
-
- pgd = pgd_offset(mm, address);
- if (!pgd_present(*pgd))
- goto out_unlock;
-
- pmd = pmd_offset(pgd, address);
- if (!pmd_present(*pmd))
- goto out_unlock;
-
- pte = pte_offset_map(pmd, address);
- if (!pte_present(*pte))
- goto out_unmap;
-
- if (page_to_pfn(page) != pte_pfn(*pte))
- goto out_unmap;
+ pte = page_check_address(page, mm, address);
+ if (IS_ERR(pte))
+ goto out;
/*
* If the page is mlock()d, we cannot swap it out.
@@ -598,7 +588,6 @@ static int try_to_unmap_one(struct page
out_unmap:
pte_unmap(pte);
-out_unlock:
spin_unlock(&mm->page_table_lock);
out:
return ret;
@@ -697,7 +686,6 @@ static void try_to_unmap_cluster(unsigne
}
pte_unmap(pte);
-
out_unlock:
spin_unlock(&mm->page_table_lock);
}
@@ -849,3 +837,4 @@ int try_to_unmap(struct page *page)
ret = SWAP_SUCCESS;
return ret;
}
+
_
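(To summarize the calling convention the patch establishes: on success,
page_check_address() returns the mapped pte with mm->page_table_lock held,
and the caller is responsible for pte_unmap() and for dropping the lock
itself; on failure it returns ERR_PTR(-ENOENT) with the lock already
released. A minimal caller sketch, illustrative only; the function name
below is made up, not from the patch:)

	static int example_rmap_walker(struct page *page,
				       struct vm_area_struct *vma,
				       unsigned long address)
	{
		struct mm_struct *mm = vma->vm_mm;
		pte_t *pte;

		pte = page_check_address(page, mm, address);
		if (IS_ERR(pte))
			return 0;	/* not mapped here; lock already dropped */

		/* ... inspect or modify *pte under mm->page_table_lock ... */

		pte_unmap(pte);			/* undo the pte_offset_map() */
		spin_unlock(&mm->page_table_lock);
		return 1;
	}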
* Re: [PATCH]: 3/4 mm/rmap.c cleanup
From: Andrew Morton @ 2004-11-21 21:14 UTC
To: Nikita Danilov; +Cc: Linux-Kernel, AKPM, linux-mm
Nikita Danilov <nikita@clusterfs.com> wrote:
>
> mm/rmap.c:page_referenced_one() and mm/rmap.c:try_to_unmap_one() contain
> identical code that
>
> - takes mm->page_table_lock;
>
> - drills through page tables;
>
> - checks that correct pte is reached.
>
> Coalesce this into page_check_address().
Looks sane, but it comes at a bad time. Please rework and resubmit after
the 4-level pagetable code is merged into Linus's tree, post-2.6.10.
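(For context: the 4-level pagetable work inserts a pud level between pgd
and pmd, so the walk in page_check_address() would gain one step after that
merge. Roughly, as a sketch against the then-upcoming interface, not code
from this thread:)

	pgd = pgd_offset(mm, address);
	if (likely(pgd_present(*pgd))) {
		pud_t *pud = pud_offset(pgd, address);	/* new level */
		if (likely(pud_present(*pud))) {
			pmd = pmd_offset(pud, address);	/* now takes a pud */
			/* pte_offset_map() and the pfn check as before */
		}
	}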
* Re: [PATCH]: 3/4 mm/rmap.c cleanup
From: Hugh Dickins @ 2004-11-22 14:31 UTC
To: Andrew Morton; +Cc: Nikita Danilov, Linux-Kernel, linux-mm
On Sun, 21 Nov 2004, Andrew Morton wrote:
> Nikita Danilov <nikita@clusterfs.com> wrote:
> >
> > mm/rmap.c:page_referenced_one() and mm/rmap.c:try_to_unmap_one() contain
> > identical code that
> >
> > - takes mm->page_table_lock;
> >
> > - drills through page tables;
> >
> > - checks that correct pte is reached.
> >
> > Coalesce this into page_check_address().
>
> Looks sane, but it comes at a bad time. Please rework and resubmit after
> the 4-level pagetable code is merged into Linus's tree, post-2.6.10.
Personally, I prefer the straightforward way it looks without Nikita's
patch. But it is a matter of personal taste, and I may well be in the
minority.
Would be better justified if the common function were not "inline"?
Hugh
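(Concretely, making the helper out of line would be a trivial change on top
of the patch, so the page-table walk is emitted once and shared by both
callers instead of being expanded into each. A sketch, not something posted
in this thread:)

	-static inline pte_t *page_check_address(struct page *page, struct mm_struct *mm,
	-					unsigned long address)
	+static pte_t *page_check_address(struct page *page, struct mm_struct *mm,
	+				 unsigned long address)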