From mboxrd@z Thu Jan  1 00:00:00 1970
Message-id: <53D7A251.7010509@samsung.com>
Date: Tue, 29 Jul 2014 17:32:01 +0400
From: Andrey Ryabinin
MIME-version: 1.0
Subject: Re: [PATCH 1/2] mm: close race between do_fault_around() and fault_around_bytes_set()
References: <1406633609-17586-1-git-send-email-kirill.shutemov@linux.intel.com>
 <1406633609-17586-2-git-send-email-kirill.shutemov@linux.intel.com>
In-reply-to: <1406633609-17586-2-git-send-email-kirill.shutemov@linux.intel.com>
Content-type: text/plain; charset=UTF-8
Content-transfer-encoding: 7bit
Sender: owner-linux-mm@kvack.org
List-ID: 
To: "Kirill A. Shutemov" , Andrew Morton
Cc: Dave Hansen , Sasha Levin , David Rientjes , linux-mm@kvack.org

On 07/29/14 15:33, Kirill A. Shutemov wrote:
> Things can go wrong if fault_around_bytes is changed under
> do_fault_around(): between fault_around_mask() and fault_around_pages().
>
> Let's read fault_around_bytes only once during do_fault_around() and
> calculate the mask based on that reading.
>
> Note: fault_around_bytes can only be updated via the debug interface.
> Also, I've tried but was not able to trigger bad behaviour without the
> patch, so I would not consider this patch urgent.
>
> Signed-off-by: Kirill A. Shutemov
> ---
>  mm/memory.c | 17 +++++++++++------
>  1 file changed, 11 insertions(+), 6 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 9d66bc66f338..2ce07dc9b52b 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2772,12 +2772,12 @@ static unsigned long fault_around_bytes = rounddown_pow_of_two(65536);
>
>  static inline unsigned long fault_around_pages(void)
>  {
> -	return fault_around_bytes >> PAGE_SHIFT;
> +	return ACCESS_ONCE(fault_around_bytes) >> PAGE_SHIFT;
>  }
>
> -static inline unsigned long fault_around_mask(void)
> +static inline unsigned long fault_around_mask(unsigned long nr_pages)
>  {
> -	return ~(fault_around_bytes - 1) & PAGE_MASK;
> +	return ~(nr_pages * PAGE_SIZE - 1) & PAGE_MASK;
>  }
>
>
> @@ -2844,12 +2844,17 @@ late_initcall(fault_around_debugfs);
>  static void do_fault_around(struct vm_area_struct *vma, unsigned long address,
>  		pte_t *pte, pgoff_t pgoff, unsigned int flags)
>  {
> -	unsigned long start_addr;
> +	unsigned long start_addr, nr_pages;
>  	pgoff_t max_pgoff;
>  	struct vm_fault vmf;
>  	int off;
>
> -	start_addr = max(address & fault_around_mask(), vma->vm_start);
> +	nr_pages = fault_around_pages();
> +	/* race with fault_around_bytes_set() */
> +	if (nr_pages <= 1)

unlikely() ?
> +		return;
> +
> +	start_addr = max(address & fault_around_mask(nr_pages), vma->vm_start);
>  	off = ((address - start_addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
>  	pte -= off;
>  	pgoff -= off;
> @@ -2861,7 +2866,7 @@ static void do_fault_around(struct vm_area_struct *vma, unsigned long address,
>  	max_pgoff = pgoff - ((start_addr >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) +
>  		PTRS_PER_PTE - 1;
>  	max_pgoff = min3(max_pgoff, vma_pages(vma) + vma->vm_pgoff - 1,
> -			pgoff + fault_around_pages() - 1);
> +			pgoff + nr_pages - 1);
>
>  	/* Check if it makes any sense to call ->map_pages */
>  	while (!pte_none(*pte)) {