* [PATCH] mm: mempolicy: fix potential pte_unmap_unlock pte error
From: Shijie Luo @ 2020-10-19 7:48 UTC
To: akpm
Cc: linux-mm, linux-kernel, osalvador, mhocko, linmiaohe, linfeilong,
luoshijie1
When the flags passed to queue_pages_pte_range don't have the MPOL_MF_MOVE or
MPOL_MF_MOVE_ALL bits set, the code breaks out of the pte loop early, and
passing the original pte - 1 to pte_unmap_unlock is not a good idea.
Since commit a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when
MPOL_MF_STRICT is specified"), queue_pages_pte_range can run in MPOL_MF_STRICT
mode without the MOVE flags, which doesn't migrate misplaced pages but returns
with -EIO when encountering such a page. An early break on the first pte in
the range then results in pte_unmap_unlock on an underflow pte. This can lead
to lockups later on when somebody tries to lock the pte resp. the
page_table_lock again.
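To make the failure mode concrete, a simplified sketch of the pattern (not the
literal mm/mempolicy.c code; unrelated handling is elided):

    /* Simplified sketch, not the literal mm/mempolicy.c code. */
    mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
    for (; addr != end; pte++, addr += PAGE_SIZE) {
            /* ... skip entries that don't need handling ... */
            if (!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)))
                    break;  /* can happen on the very first entry */
            /* ... otherwise queue the page for migration ... */
    }
    /* old: pte_unmap_unlock(pte - 1, ptl); underflows after an early break */
    pte_unmap_unlock(mapped_pte, ptl);  /* always the pte that was mapped */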
Fixes: a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when
MPOL_MF_STRICT is specified")
Signed-off-by: Shijie Luo <luoshijie1@huawei.com>
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/mempolicy.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 3fde772ef5ef..3ca4898f3f24 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -525,7 +525,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
unsigned long flags = qp->flags;
int ret;
bool has_unmovable = false;
- pte_t *pte;
+ pte_t *pte, *mapped_pte;
spinlock_t *ptl;
ptl = pmd_trans_huge_lock(pmd, vma);
@@ -539,7 +539,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
if (pmd_trans_unstable(pmd))
return 0;
- pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+ mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
for (; addr != end; pte++, addr += PAGE_SIZE) {
if (!pte_present(*pte))
continue;
@@ -571,7 +571,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
} else
break;
}
- pte_unmap_unlock(pte - 1, ptl);
+ pte_unmap_unlock(mapped_pte, ptl);
cond_resched();
if (has_unmovable)
--
2.19.1
* Re: [PATCH] mm: mempolicy: fix potential pte_unmap_unlock pte error
From: Oscar Salvador @ 2020-10-19 10:50 UTC
To: Shijie Luo; +Cc: akpm, linux-mm, linux-kernel, mhocko, linmiaohe, linfeilong
On Mon, Oct 19, 2020 at 03:48:53AM -0400, Shijie Luo wrote:
> When the flags passed to queue_pages_pte_range don't have the MPOL_MF_MOVE or
> MPOL_MF_MOVE_ALL bits set, the code breaks out of the pte loop early, and
> passing the original pte - 1 to pte_unmap_unlock is not a good idea.
I think the above is already explained below?
> Since commit a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when
> MPOL_MF_STRICT is specified"), queue_pages_pte_range can run in MPOL_MF_STRICT
> mode without the MOVE flags, which doesn't migrate misplaced pages but returns
> with -EIO when encountering such a page. An early break on the first pte in
> the range then results in pte_unmap_unlock on an underflow pte. This can lead
> to lockups later on when somebody tries to lock the pte resp. the
> page_table_lock again.
>
> Fixes: a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when MPOL_MF_STRICT is specified")
>
> Signed-off-by: Shijie Luo <luoshijie1@huawei.com>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Anyway, LGTM:
Reviewed-by: Oscar Salvador <osalvador@suse.de>
--
Oscar Salvador
SUSE L3
* Re: [PATCH] mm: mempolicy: fix potential pte_unmap_unlock pte error
From: Michal Hocko @ 2020-10-19 11:32 UTC
To: Oscar Salvador
Cc: Shijie Luo, akpm, linux-mm, linux-kernel, linmiaohe, linfeilong
On Mon 19-10-20 12:50:34, Oscar Salvador wrote:
> On Mon, Oct 19, 2020 at 03:48:53AM -0400, Shijie Luo wrote:
> > When the flags passed to queue_pages_pte_range don't have the MPOL_MF_MOVE or
> > MPOL_MF_MOVE_ALL bits set, the code breaks out of the pte loop early, and
> > passing the original pte - 1 to pte_unmap_unlock is not a good idea.
>
> I think the above is already explained below?
Yes
> > Since commit a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when
> > MPOL_MF_STRICT is specified"), queue_pages_pte_range can run in MPOL_MF_STRICT
> > mode without the MOVE flags, which doesn't migrate misplaced pages but returns
> > with -EIO when encountering such a page. An early break on the first pte in
> > the range then results in pte_unmap_unlock on an underflow pte. This can lead
> > to lockups later on when somebody tries to lock the pte resp. the
> > page_table_lock again.
> >
> > Fixes: a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when MPOL_MF_STRICT is specified")
Cc: stable
is due as well. There are even security concerns and I wouldn't be
surprised if this gained a CVE.
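For reference, the MPOL_MF_STRICT-only path is reachable from plain userspace
via mbind(). A minimal illustrative sketch follows; the node number, buffer
size, and the assumption that the faulted page lands outside the allowed mask
are arbitrary, and this is not a reproducer posted in this thread:

    /* Illustrative only; build with -lnuma on a NUMA machine. */
    #include <numaif.h>     /* mbind(), MPOL_BIND, MPOL_MF_STRICT */
    #include <sys/mman.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned long nodemask = 1UL << 1;  /* allow node 1 only (assumed present) */
            void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            memset(buf, 0, 4096);               /* fault the page in, possibly on node 0 */

            /* MPOL_MF_STRICT without MOVE flags: no migration, -EIO on a misplaced page */
            if (mbind(buf, 4096, MPOL_BIND, &nodemask, sizeof(nodemask) * 8,
                      MPOL_MF_STRICT))
                    perror("mbind");
            return 0;
    }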
> > Signed-off-by: Shijie Luo <luoshijie1@huawei.com>
> > Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>
> Anyway, LGTM:
>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Michal Hocko <mhocko@suse.com>
--
Michal Hocko
SUSE Labs