Date: Wed, 6 Jan 2016 11:04:35 -0800
Subject: Re: [PATCH v7 2/9] dax: fix conversion of holes to PMDs
From: Dan Williams
To: Ross Zwisler
Cc: "linux-kernel@vger.kernel.org", "H. Peter Anvin", "J. Bruce Fields",
 Theodore Ts'o, Alexander Viro, Andreas Dilger, Andrew Morton,
 Dave Chinner, Dave Hansen, Ingo Molnar, Jan Kara, Jeff Layton,
 Matthew Wilcox, Matthew Wilcox, Thomas Gleixner, linux-ext4,
 linux-fsdevel, Linux MM, "linux-nvdimm@lists.01.org", X86 ML,
 XFS Developers
In-Reply-To: <1452103263-1592-3-git-send-email-ross.zwisler@linux.intel.com>
References: <1452103263-1592-1-git-send-email-ross.zwisler@linux.intel.com>
 <1452103263-1592-3-git-send-email-ross.zwisler@linux.intel.com>

On Wed, Jan 6, 2016 at 10:00 AM, Ross Zwisler wrote:
> When we get a DAX PMD fault for a write it is possible that there could be
> some number of 4k zero pages already present for the same range that were
> inserted to service reads from a hole. These 4k zero pages need to be
> unmapped from the VMAs and removed from the struct address_space radix tree
> before the real DAX PMD entry can be inserted.
>
> For PTE faults this same use case also exists and is handled by a
> combination of unmap_mapping_range() to unmap the VMAs and
> delete_from_page_cache() to remove the page from the address_space radix
> tree.
>
> For PMD faults we do have a call to unmap_mapping_range() (protected by a
> buffer_new() check), but nothing clears out the radix tree entry. The
> buffer_new() check is also incorrect as the current ext4 and XFS filesystem
> code will never return a buffer_head with BH_New set, even when allocating
> new blocks over a hole. Instead the filesystem will zero the blocks
> manually and return a buffer_head with only BH_Mapped set.
>
> Fix this situation by removing the buffer_new() check and adding a call to
> truncate_inode_pages_range() to clear out the radix tree entries before we
> insert the DAX PMD.
>
> Signed-off-by: Ross Zwisler

Replaced the current contents of v6 in -mm from next-20160106 with this v7
set and it looks good.

Reported-by: Dan Williams
Tested-by: Dan Williams

One question below...
> ---
>  fs/dax.c | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/fs/dax.c b/fs/dax.c
> index 03cc4a3..9dc0c97 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -594,6 +594,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
>         bool write = flags & FAULT_FLAG_WRITE;
>         struct block_device *bdev;
>         pgoff_t size, pgoff;
> +       loff_t lstart, lend;
>         sector_t block;
>         int result = 0;
>
> @@ -647,15 +648,13 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
>                 goto fallback;
>         }
>
> -       /*
> -        * If we allocated new storage, make sure no process has any
> -        * zero pages covering this hole
> -        */
> -       if (buffer_new(&bh)) {
> -               i_mmap_unlock_read(mapping);
> -               unmap_mapping_range(mapping, pgoff << PAGE_SHIFT, PMD_SIZE, 0);
> -               i_mmap_lock_read(mapping);
> -       }
> +       /* make sure no process has any zero pages covering this hole */
> +       lstart = pgoff << PAGE_SHIFT;
> +       lend = lstart + PMD_SIZE - 1; /* inclusive */
> +       i_mmap_unlock_read(mapping);
> +       unmap_mapping_range(mapping, lstart, PMD_SIZE, 0);
> +       truncate_inode_pages_range(mapping, lstart, lend);

Do we need to do both unmap and truncate given that truncate_inode_page()
optionally does an unmap_mapping_range() internally?