From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mail-pf0-f197.google.com (mail-pf0-f197.google.com [209.85.192.197])
	by kanga.kvack.org (Postfix) with ESMTP id 7B5936B0266
	for ; Thu, 19 Oct 2017 22:46:01 -0400 (EDT)
Received: by mail-pf0-f197.google.com with SMTP id b6so6935732pff.18
	for ; Thu, 19 Oct 2017 19:46:01 -0700 (PDT)
Received: from mga09.intel.com (mga09.intel.com. [134.134.136.24])
	by mx.google.com with ESMTPS id d4si10313384pfl.429.2017.10.19.19.46.00
	for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Thu, 19 Oct 2017 19:46:00 -0700 (PDT)
Subject: [PATCH v3 07/13] dax: warn if dma collides with truncate
From: Dan Williams
Date: Thu, 19 Oct 2017 19:39:34 -0700
Message-ID: <150846717482.24336.5408601305480590234.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <150846713528.24336.4459262264611579791.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <150846713528.24336.4459262264611579791.stgit@dwillia2-desk3.amr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: owner-linux-mm@kvack.org
List-ID: 
To: akpm@linux-foundation.org
Cc: Jan Kara , Matthew Wilcox , linux-nvdimm@lists.01.org,
	linux-kernel@vger.kernel.org, linux-xfs@vger.kernel.org,
	linux-mm@kvack.org, Jeff Moyer , linux-fsdevel@vger.kernel.org,
	Ross Zwisler , hch@lst.de

Catch cases where truncate encounters pages that are still under active
dma. This warning is a canary for potential data corruption as truncated
blocks could be allocated to a new file while the device is still
performing i/o.
Cc: Jan Kara 
Cc: Jeff Moyer 
Cc: Christoph Hellwig 
Cc: Matthew Wilcox 
Cc: Ross Zwisler 
Signed-off-by: Dan Williams 
---
 fs/dax.c |   33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/fs/dax.c b/fs/dax.c
index ac6497dcfebd..b03f547b36e7 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -437,6 +437,38 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
 	return entry;
 }
 
+static unsigned long dax_entry_size(void *entry)
+{
+	if (dax_is_zero_entry(entry))
+		return 0;
+	else if (dax_is_pmd_entry(entry))
+		return HPAGE_SIZE;
+	else
+		return PAGE_SIZE;
+}
+
+static void dax_check_truncate(void *entry)
+{
+	unsigned long pfn = dax_radix_pfn(entry);
+	unsigned long size = dax_entry_size(entry);
+	unsigned long end_pfn;
+
+	if (!size)
+		return;
+	end_pfn = pfn + size / PAGE_SIZE;
+	for (; pfn < end_pfn; pfn++) {
+		struct page *page = pfn_to_page(pfn);
+
+		/*
+		 * devmap pages are idle when their count is 1 and the
+		 * only path that increases their count is
+		 * get_user_pages().
+		 */
+		WARN_ONCE(page_ref_count(page) > 1,
+				"dax-dma truncate collision\n");
+	}
+}
+
 static int __dax_invalidate_mapping_entry(struct address_space *mapping,
 					  pgoff_t index, bool trunc)
 {
@@ -452,6 +484,7 @@ static int __dax_invalidate_mapping_entry(struct address_space *mapping,
 	    (radix_tree_tag_get(page_tree, index, PAGECACHE_TAG_DIRTY) ||
 	     radix_tree_tag_get(page_tree, index, PAGECACHE_TAG_TOWRITE)))
 		goto out;
+	dax_check_truncate(entry);
 	radix_tree_delete(page_tree, index);
 	mapping->nrexceptional--;
 	ret = 1;