From: Matthew Wilcox
Subject: [PATCH v7 47/61] shmem: Convert shmem_tag_pins to XArray
Date: Mon, 19 Feb 2018 11:45:42 -0800
Message-Id: <20180219194556.6575-48-willy@infradead.org>
In-Reply-To: <20180219194556.6575-1-willy@infradead.org>
References: <20180219194556.6575-1-willy@infradead.org>
To: Andrew Morton
Cc: Matthew Wilcox, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org

From: Matthew Wilcox

Simplify the locking by taking the spinlock while we walk the tree on
the assumption that many acquires and releases of the lock will be
worse than holding the lock for a (potentially) long time.  We could
replicate the same locking behaviour with the xarray, but would have to
be careful that the xa_node wasn't RCU-freed under us before we took
the lock.

Signed-off-by: Matthew Wilcox
---
 mm/shmem.c | 39 ++++++++++++++++-----------------------
 1 file changed, 16 insertions(+), 23 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 0179f9aa7d0e..5b70fbdec605 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2601,35 +2601,28 @@ static loff_t shmem_file_llseek(struct file *file, loff_t offset, int whence)
 
 static void shmem_tag_pins(struct address_space *mapping)
 {
-	struct radix_tree_iter iter;
-	void **slot;
-	pgoff_t start;
+	XA_STATE(xas, &mapping->pages, 0);
 	struct page *page;
+	unsigned int tagged = 0;
 
 	lru_add_drain();
-	start = 0;
-	rcu_read_lock();
 
-	radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
-		page = radix_tree_deref_slot(slot);
-		if (!page || radix_tree_exception(page)) {
-			if (radix_tree_deref_retry(page)) {
-				slot = radix_tree_iter_retry(&iter);
-				continue;
-			}
-		} else if (page_count(page) - page_mapcount(page) > 1) {
-			xa_lock_irq(&mapping->pages);
-			radix_tree_tag_set(&mapping->pages, iter.index,
-					   SHMEM_TAG_PINNED);
-			xa_unlock_irq(&mapping->pages);
-		}
+	xas_lock_irq(&xas);
+	xas_for_each(&xas, page, ULONG_MAX) {
+		if (xa_is_value(page))
+			continue;
+		if (page_count(page) - page_mapcount(page) > 1)
+			xas_set_tag(&xas, SHMEM_TAG_PINNED);
 
-		if (need_resched()) {
-			slot = radix_tree_iter_resume(slot, &iter);
-			cond_resched_rcu();
-		}
+		if (++tagged % XA_CHECK_SCHED)
+			continue;
+
+		xas_pause(&xas);
+		xas_unlock_irq(&xas);
+		cond_resched();
+		xas_lock_irq(&xas);
 	}
-	rcu_read_unlock();
+	xas_unlock_irq(&xas);
 }
 
 /*
-- 
2.16.1
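
The heart of the conversion is the batching idiom in the new loop: the
walk holds the xa_lock for up to XA_CHECK_SCHED entries at a stretch,
and ++tagged % XA_CHECK_SCHED is non-zero for all but every
XA_CHECK_SCHED-th entry, so the common case just continues; once per
batch, xas_pause() records the current position so the iteration can be
resumed safely after the lock has been dropped and cond_resched() has
run.  The userspace sketch below shows the same shape, illustrative
only: BATCH, the pthread mutex, the entry arrays, and sched_yield() are
stand-ins invented for this example, not kernel or XArray API.

/*
 * Userspace analogue of the xas_for_each()/xas_pause() batching idiom
 * above; not kernel code.  BATCH stands in for XA_CHECK_SCHED, the
 * pthread mutex for the xa_lock, and sched_yield() for cond_resched().
 */
#include <pthread.h>
#include <sched.h>
#include <stddef.h>

#define BATCH 4096

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void tag_pinned(const int *refcount, const int *mapcount,
			_Bool *pinned, size_t n)
{
	unsigned int walked = 0;
	size_t i;

	pthread_mutex_lock(&lock);
	for (i = 0; i < n; i++) {
		/* Tag entries with extra references, as shmem_tag_pins()
		 * tags pages whose page_count() exceeds their
		 * page_mapcount() plus the page cache's own reference. */
		if (refcount[i] - mapcount[i] > 1)
			pinned[i] = 1;

		if (++walked % BATCH)
			continue;

		/* Once per BATCH entries, drop the lock so other lockers
		 * are not starved, yield the CPU, then retake the lock
		 * and resume.  In the kernel, xas_pause() is what makes
		 * the resume safe; here a plain array index suffices. */
		pthread_mutex_unlock(&lock);
		sched_yield();
		pthread_mutex_lock(&lock);
	}
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	int ref[3] = { 3, 1, 2 };
	int map[3] = { 1, 0, 1 };
	_Bool pin[3] = { 0 };

	tag_pinned(ref, map, pin, 3);

	/* Only the first entry has an extra reference, so only it
	 * should be tagged. */
	return pin[0] && !pin[1] && !pin[2] ? 0 : 1;
}

Holding the lock across a whole batch amortises the acquire/release
cost that the old code paid per pinned page with its inner
xa_lock_irq()/xa_unlock_irq() pair, which is the trade-off the commit
message describes.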