* [PATCH] mm, swap: Fix swapoff with KSM pages
@ 2018-12-26 5:15 Huang Ying
2018-12-26 5:37 ` Huang, Ying
2018-12-28 2:55 ` Andrew Morton
0 siblings, 2 replies; 6+ messages in thread
From: Huang Ying @ 2018-12-26 5:15 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Huang Ying, Rik van Riel,
Johannes Weiner, Minchan Kim, Shaohua Li, Daniel Jordan,
Hugh Dickins
KSM pages may be mapped into multiple VMAs that cannot all be reached
from one anon_vma. So during swapin, a new copy of the page needs to
be generated if a different anon_vma is needed; please refer to the
comments of ksm_might_need_to_copy() for details.
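The copy-on-swapin rule can be sketched as a small userspace C model (structure and field names here are illustrative only, not the kernel's; the real check lives in ksm_might_need_to_copy()):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical userspace model of the swapin decision described above. */
struct anon_vma { int id; };
struct page { struct anon_vma *anon_vma; bool is_ksm; };
struct vma  { struct anon_vma *anon_vma; };

/* A KSM page read back from swap can only be reinserted under the
 * anon_vma it was swapped out with; any other mapping needs a copy. */
static bool needs_copy(const struct page *p, const struct vma *v)
{
	return p->is_ksm && p->anon_vma != v->anon_vma;
}
```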
During swapoff, unuse_vma() uses the anon_vma (if available) to locate
the VMA and virtual address mapped to the page, so not all mappings of
a swapped-out KSM page can be found. Therefore in try_to_unuse(), even
if the swap count of a swap entry isn't zero, the page needs to be
deleted from the swap cache so that, in the next round, a new page can
be allocated and swapped in for the other mappings of the swapped-out
KSM page.
But this conflicts with THP swap support, where a THP can be deleted
from the swap cache only after the swap count of every swap entry in
the huge swap cluster backing the THP has reached 0. So try_to_unuse()
was changed in commit e07098294adf ("mm, THP, swap: support to reclaim
swap space for THP swapped out") to check for that before deleting a
page from the swap cache, but this broke KSM swapoff too.
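The cluster-drain rule for THPs can be sketched as a userspace model (the kernel helper doing this check is swap_page_trans_huge_swapped(); the function below is an illustrative stand-in, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Model of the rule above: a THP may leave the swap cache only once
 * every swap entry in its huge swap cluster has dropped to count 0. */
static bool cluster_still_swapped(const unsigned char *swap_counts, size_t nr)
{
	for (size_t i = 0; i < nr; i++)
		if (swap_counts[i] != 0)
			return true;   /* at least one entry still referenced */
	return false;                  /* safe to delete the THP from swap cache */
}
```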
Fortunately, KSM is for normal pages only, so the original behavior
for KSM pages can be restored easily by checking PageTransCompound().
That is how this patch works.
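The before/after behavior of the check can be modeled in a few lines of userspace C (field names are illustrative, not kernel structures; both predicates assume PageSwapCache() and the page_private() check have already passed):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the try_to_unuse() delete-from-swap-cache check. */
struct cached_page {
	bool is_thp;        /* PageTransCompound() */
	bool still_swapped; /* page_swapped() / swap_page_trans_huge_swapped() */
};

/* After commit e07098294adf: any still-swapped page is kept, which
 * wrongly pins swapped-out KSM pages in the swap cache. */
static bool delete_before_fix(const struct cached_page *p)
{
	return !p->still_swapped;
}

/* With this patch: only THPs wait for their whole cluster to drain;
 * normal pages (including KSM pages) are deleted as before. */
static bool delete_after_fix(const struct cached_page *p)
{
	return !p->is_thp || !p->still_swapped;
}
```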
Fixes: e07098294adf ("mm, THP, swap: support to reclaim swap space for THP swapped out")
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Reported-and-Tested-and-Acked-by: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Shaohua Li <shli@kernel.org>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
---
mm/swapfile.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 8688ae65ef58..20d3c0f47a5f 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2197,7 +2197,8 @@ int try_to_unuse(unsigned int type, bool frontswap,
*/
if (PageSwapCache(page) &&
likely(page_private(page) == entry.val) &&
- !page_swapped(page))
+ (!PageTransCompound(page) ||
+ !swap_page_trans_huge_swapped(si, entry)))
delete_from_swap_cache(compound_head(page));
/*
--
2.19.2
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH] mm, swap: Fix swapoff with KSM pages
2018-12-26 5:15 [PATCH] mm, swap: Fix swapoff with KSM pages Huang Ying
@ 2018-12-26 5:37 ` Huang, Ying
2018-12-26 5:37 ` Huang, Ying
2018-12-28 2:55 ` Andrew Morton
1 sibling, 1 reply; 6+ messages in thread
From: Huang, Ying @ 2018-12-26 5:37 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Rik van Riel, Johannes Weiner,
Minchan Kim, Shaohua Li, Daniel Jordan, Hugh Dickins
Hi, Andrew,
This patch is based on Linus' tree instead of the head of the mmotm
tree because it fixes a bug there.
The bug was introduced by commit e07098294adf ("mm, THP, swap: support
to reclaim swap space for THP swapped out"), which was merged in
v4.14-rc1, so I think we should backport the fix to stable kernels from
v4.14 onward. But Hugh thinks it may be rare for KSM pages to be in the
swap device at swapoff time, which would explain why nobody has
reported the bug so far.
Best Regards,
Huang, Ying
Huang Ying <ying.huang@intel.com> writes:
> KSM pages may be mapped into multiple VMAs that cannot all be reached
> from one anon_vma. So during swapin, a new copy of the page needs to
> be generated if a different anon_vma is needed; please refer to the
> comments of ksm_might_need_to_copy() for details.
>
> During swapoff, unuse_vma() uses the anon_vma (if available) to locate
> the VMA and virtual address mapped to the page, so not all mappings of
> a swapped-out KSM page can be found. Therefore in try_to_unuse(), even
> if the swap count of a swap entry isn't zero, the page needs to be
> deleted from the swap cache so that, in the next round, a new page can
> be allocated and swapped in for the other mappings of the swapped-out
> KSM page.
>
> But this conflicts with THP swap support, where a THP can be deleted
> from the swap cache only after the swap count of every swap entry in
> the huge swap cluster backing the THP has reached 0. So try_to_unuse()
> was changed in commit e07098294adf ("mm, THP, swap: support to reclaim
> swap space for THP swapped out") to check for that before deleting a
> page from the swap cache, but this broke KSM swapoff too.
>
> Fortunately, KSM is for normal pages only, so the original behavior
> for KSM pages can be restored easily by checking PageTransCompound().
> That is how this patch works.
>
> Fixes: e07098294adf ("mm, THP, swap: support to reclaim swap space for THP swapped out")
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Reported-and-Tested-and-Acked-by: Hugh Dickins <hughd@google.com>
> Cc: Rik van Riel <riel@redhat.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Shaohua Li <shli@kernel.org>
> Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
> ---
> mm/swapfile.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 8688ae65ef58..20d3c0f47a5f 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -2197,7 +2197,8 @@ int try_to_unuse(unsigned int type, bool frontswap,
> */
> if (PageSwapCache(page) &&
> likely(page_private(page) == entry.val) &&
> - !page_swapped(page))
> + (!PageTransCompound(page) ||
> + !swap_page_trans_huge_swapped(si, entry)))
> delete_from_swap_cache(compound_head(page));
>
> /*
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH] mm, swap: Fix swapoff with KSM pages
2018-12-26 5:15 [PATCH] mm, swap: Fix swapoff with KSM pages Huang Ying
2018-12-26 5:37 ` Huang, Ying
@ 2018-12-28 2:55 ` Andrew Morton
2018-12-28 16:56 ` Vineeth Pillai
1 sibling, 1 reply; 6+ messages in thread
From: Andrew Morton @ 2018-12-28 2:55 UTC (permalink / raw)
To: Huang Ying
Cc: linux-mm, linux-kernel, Rik van Riel, Johannes Weiner,
Minchan Kim, Shaohua Li, Daniel Jordan, Hugh Dickins,
Vineeth Remanan Pillai, Kelley Nielsen
On Wed, 26 Dec 2018 13:15:22 +0800 Huang Ying <ying.huang@intel.com> wrote:
> KSM pages may be mapped into multiple VMAs that cannot all be reached
> from one anon_vma. So during swapin, a new copy of the page needs to
> be generated if a different anon_vma is needed; please refer to the
> comments of ksm_might_need_to_copy() for details.
>
> During swapoff, unuse_vma() uses the anon_vma (if available) to locate
> the VMA and virtual address mapped to the page, so not all mappings of
> a swapped-out KSM page can be found. Therefore in try_to_unuse(), even
> if the swap count of a swap entry isn't zero, the page needs to be
> deleted from the swap cache so that, in the next round, a new page can
> be allocated and swapped in for the other mappings of the swapped-out
> KSM page.
>
> But this conflicts with THP swap support, where a THP can be deleted
> from the swap cache only after the swap count of every swap entry in
> the huge swap cluster backing the THP has reached 0. So try_to_unuse()
> was changed in commit e07098294adf ("mm, THP, swap: support to reclaim
> swap space for THP swapped out") to check for that before deleting a
> page from the swap cache, but this broke KSM swapoff too.
>
> Fortunately, KSM is for normal pages only, so the original behavior
> for KSM pages can be restored easily by checking PageTransCompound().
> That is how this patch works.
>
> ...
>
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -2197,7 +2197,8 @@ int try_to_unuse(unsigned int type, bool frontswap,
> */
> if (PageSwapCache(page) &&
> likely(page_private(page) == entry.val) &&
> - !page_swapped(page))
> + (!PageTransCompound(page) ||
> + !swap_page_trans_huge_swapped(si, entry)))
> delete_from_swap_cache(compound_head(page));
>
The patch "mm, swap: rid swapoff of quadratic complexity" changes this
code significantly. There are a few issues with that patch, so I'll
drop it for now.
Vineeth, please ensure that future versions retain the above fix,
thanks.
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH] mm, swap: Fix swapoff with KSM pages
2018-12-28 2:55 ` Andrew Morton
@ 2018-12-28 16:56 ` Vineeth Pillai
2018-12-28 16:56 ` Vineeth Pillai
0 siblings, 1 reply; 6+ messages in thread
From: Vineeth Pillai @ 2018-12-28 16:56 UTC (permalink / raw)
To: Andrew Morton
Cc: Huang Ying, linux-mm, linux-kernel, Rik van Riel,
Johannes Weiner, Minchan Kim, Shaohua Li, Daniel Jordan,
Hugh Dickins, Kelley Nielsen
Thanks for letting me know, Andrew! I shall include all the fixes in
the next iteration.
Thanks,
Vineeth
On Thu, Dec 27, 2018 at 9:55 PM Andrew Morton <akpm@linux-foundation.org>
wrote:
> On Wed, 26 Dec 2018 13:15:22 +0800 Huang Ying <ying.huang@intel.com>
> wrote:
>
> > KSM pages may be mapped into multiple VMAs that cannot all be reached
> > from one anon_vma. So during swapin, a new copy of the page needs to
> > be generated if a different anon_vma is needed; please refer to the
> > comments of ksm_might_need_to_copy() for details.
> >
> > During swapoff, unuse_vma() uses the anon_vma (if available) to locate
> > the VMA and virtual address mapped to the page, so not all mappings of
> > a swapped-out KSM page can be found. Therefore in try_to_unuse(), even
> > if the swap count of a swap entry isn't zero, the page needs to be
> > deleted from the swap cache so that, in the next round, a new page can
> > be allocated and swapped in for the other mappings of the swapped-out
> > KSM page.
> >
> > But this conflicts with THP swap support, where a THP can be deleted
> > from the swap cache only after the swap count of every swap entry in
> > the huge swap cluster backing the THP has reached 0. So try_to_unuse()
> > was changed in commit e07098294adf ("mm, THP, swap: support to reclaim
> > swap space for THP swapped out") to check for that before deleting a
> > page from the swap cache, but this broke KSM swapoff too.
> >
> > Fortunately, KSM is for normal pages only, so the original behavior
> > for KSM pages can be restored easily by checking PageTransCompound().
> > That is how this patch works.
> >
> > ...
> >
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> > @@ -2197,7 +2197,8 @@ int try_to_unuse(unsigned int type, bool frontswap,
> > */
> > if (PageSwapCache(page) &&
> > likely(page_private(page) == entry.val) &&
> > - !page_swapped(page))
> > + (!PageTransCompound(page) ||
> > + !swap_page_trans_huge_swapped(si, entry)))
> > delete_from_swap_cache(compound_head(page));
> >
>
> The patch "mm, swap: rid swapoff of quadratic complexity" changes this
> code significantly. There are a few issues with that patch so I'll
> drop it for now.
>
> Vineeth, please ensure that future versions retain the above fix,
> thanks.
>
>
>
^ permalink raw reply [flat|nested] 6+ messages in thread
end of thread, other threads:[~2018-12-28 16:56 UTC | newest]
Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-12-26 5:15 [PATCH] mm, swap: Fix swapoff with KSM pages Huang Ying
2018-12-26 5:37 ` Huang, Ying
2018-12-28 2:55 ` Andrew Morton
2018-12-28 16:56 ` Vineeth Pillai