Date: Fri, 13 Jun 2014 17:34:54 +0100
From: Chris Wilson
Subject: Re: [PATCH 1/2] mm: Report attempts to overwrite PTE from remap_pfn_range()
Message-ID: <20140613163454.GM6451@nuc-i3427.alporthouse.com>
In-Reply-To: <1402676778-27174-1-git-send-email-chris@chris-wilson.co.uk>
To: intel-gfx@lists.freedesktop.org
Cc: Andrew Morton, "Kirill A. Shutemov", Peter Zijlstra, Rik van Riel, Mel Gorman, Cyrill Gorcunov, Johannes Weiner, linux-mm@kvack.org

On Fri, Jun 13, 2014 at 05:26:17PM +0100, Chris Wilson wrote:
> When using remap_pfn_range() from a fault handler, we are exposed to
> races between concurrent faults. Rather than hitting a BUG, report the
> error back to the caller, like vm_insert_pfn().
>
> Signed-off-by: Chris Wilson
> Cc: Andrew Morton
> Cc: "Kirill A. Shutemov"
> Cc: Peter Zijlstra
> Cc: Rik van Riel
> Cc: Mel Gorman
> Cc: Cyrill Gorcunov
> Cc: Johannes Weiner
> Cc: linux-mm@kvack.org
> ---
>  mm/memory.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 037b812a9531..6603a9e6a731 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2306,19 +2306,23 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
>  {
>  	pte_t *pte;
>  	spinlock_t *ptl;
> +	int ret = 0;
>
>  	pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
>  	if (!pte)
>  		return -ENOMEM;
>  	arch_enter_lazy_mmu_mode();
>  	do {
> -		BUG_ON(!pte_none(*pte));
> +		if (!pte_none(*pte)) {
> +			ret = -EBUSY;
> +			break;
> +		}
>  		set_pte_at(mm, addr, pte, pte_mkspecial(pfn_pte(pfn, prot)));
>  		pfn++;
>  	} while (pte++, addr += PAGE_SIZE, addr != end);
>  	arch_leave_lazy_mmu_mode();
>  	pte_unmap_unlock(pte - 1, ptl);

Oh. That will want the -EBUSY path to increment pte, or we will try to
unmap the wrong page: pte++ only runs in the while condition, so on a
break pte still points at the busy entry and (pte - 1) points before it.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre