From mboxrd@z Thu Jan 1 00:00:00 1970
From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Andy Lutomirski, Peter Zijlstra
Cc: Ira Weiny, x86@kernel.org, Dave Hansen, Dan Williams, Fenghua Yu,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
	netdev@vger.kernel.org, bpf@vger.kernel.org,
	kexec@lists.infradead.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-aio@kvack.org,
	io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org, cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-cachefs@redhat.com,
	samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 19/58] fs/hfsplus: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:49:54 -0700
Message-Id: <20201009195033.3208459-20-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/hfsplus/bitmap.c |  20 ++++----
 fs/hfsplus/bnode.c  | 102 ++++++++++++++++++++++----------------------
 fs/hfsplus/btree.c  |  18 ++++----
 3 files changed, 70 insertions(+), 70 deletions(-)

diff --git a/fs/hfsplus/bitmap.c b/fs/hfsplus/bitmap.c
index cebce0cfe340..9ec7c1559a0c 100644
--- a/fs/hfsplus/bitmap.c
+++ b/fs/hfsplus/bitmap.c
@@ -39,7 +39,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 		start = size;
 		goto out;
 	}
-	pptr = kmap(page);
+	pptr = kmap_thread(page);
 	curr = pptr + (offset & (PAGE_CACHE_BITS - 1)) / 32;
 	i = offset % 32;
 	offset &= ~(PAGE_CACHE_BITS - 1);
@@ -74,7 +74,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 			}
 			curr++;
 		}
-		kunmap(page);
+		kunmap_thread(page);
 		offset += PAGE_CACHE_BITS;
 		if (offset >= size)
 			break;
@@ -84,7 +84,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 			start = size;
 			goto out;
 		}
-		curr = pptr = kmap(page);
+		curr = pptr = kmap_thread(page);
 		if ((size ^ offset) / PAGE_CACHE_BITS)
 			end = pptr + PAGE_CACHE_BITS / 32;
 		else
@@ -127,7 +127,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 			len -= 32;
 		}
 		set_page_dirty(page);
-		kunmap(page);
+		kunmap_thread(page);
 		offset += PAGE_CACHE_BITS;
 		page = read_mapping_page(mapping, offset / PAGE_CACHE_BITS,
 					 NULL);
@@ -135,7 +135,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 			start = size;
 			goto out;
 		}
-		pptr = kmap(page);
+		pptr = kmap_thread(page);
 		curr = pptr;
 		end = pptr + PAGE_CACHE_BITS / 32;
 	}
@@ -151,7 +151,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 done:
 	*curr = cpu_to_be32(n);
 	set_page_dirty(page);
-	kunmap(page);
+	kunmap_thread(page);
 	*max = offset + (curr - pptr) * 32 + i - start;
 	sbi->free_blocks -= *max;
 	hfsplus_mark_mdb_dirty(sb);
@@ -185,7 +185,7 @@ int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count)
 	page = read_mapping_page(mapping, pnr, NULL);
 	if (IS_ERR(page))
 		goto kaboom;
-	pptr = kmap(page);
+	pptr = kmap_thread(page);
 	curr = pptr + (offset & (PAGE_CACHE_BITS - 1)) / 32;
 	end = pptr + PAGE_CACHE_BITS / 32;
 	len = count;
@@ -215,11 +215,11 @@ int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count)
 		if (!count)
 			break;
 		set_page_dirty(page);
-		kunmap(page);
+		kunmap_thread(page);
 		page = read_mapping_page(mapping, ++pnr, NULL);
 		if (IS_ERR(page))
 			goto kaboom;
-		pptr = kmap(page);
+		pptr = kmap_thread(page);
 		curr = pptr;
 		end = pptr + PAGE_CACHE_BITS / 32;
 	}
@@ -231,7 +231,7 @@ int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count)
 	}
 out:
 	set_page_dirty(page);
-	kunmap(page);
+	kunmap_thread(page);
 	sbi->free_blocks += len;
 	hfsplus_mark_mdb_dirty(sb);
 	mutex_unlock(&sbi->alloc_mutex);
diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
index 177fae4e6581..62757d92fbbd 100644
--- a/fs/hfsplus/bnode.c
+++ b/fs/hfsplus/bnode.c
@@ -29,14 +29,14 @@ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
 	off &= ~PAGE_MASK;
 
 	l = min_t(int, len, PAGE_SIZE - off);
-	memcpy(buf, kmap(*pagep) + off, l);
-	kunmap(*pagep);
+	memcpy(buf, kmap_thread(*pagep) + off, l);
+	kunmap_thread(*pagep);
 
 	while ((len -= l) != 0) {
 		buf += l;
 		l = min_t(int, len, PAGE_SIZE);
-		memcpy(buf, kmap(*++pagep), l);
-		kunmap(*pagep);
+		memcpy(buf, kmap_thread(*++pagep), l);
+		kunmap_thread(*pagep);
 	}
 }
 
@@ -82,16 +82,16 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len)
 	off &= ~PAGE_MASK;
 
 	l = min_t(int, len, PAGE_SIZE - off);
-	memcpy(kmap(*pagep) + off, buf, l);
+	memcpy(kmap_thread(*pagep) + off, buf, l);
 	set_page_dirty(*pagep);
-	kunmap(*pagep);
+	kunmap_thread(*pagep);
 
 	while ((len -= l) != 0) {
 		buf += l;
 		l = min_t(int, len, PAGE_SIZE);
-		memcpy(kmap(*++pagep), buf, l);
+		memcpy(kmap_thread(*++pagep), buf, l);
 		set_page_dirty(*pagep);
-		kunmap(*pagep);
+		kunmap_thread(*pagep);
 	}
 }
 
@@ -112,15 +112,15 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len)
 	off &= ~PAGE_MASK;
 
 	l = min_t(int, len, PAGE_SIZE - off);
-	memset(kmap(*pagep) + off, 0, l);
+	memset(kmap_thread(*pagep) + off, 0, l);
 	set_page_dirty(*pagep);
-	kunmap(*pagep);
+	kunmap_thread(*pagep);
 
 	while ((len -= l) != 0) {
 		l = min_t(int, len, PAGE_SIZE);
-		memset(kmap(*++pagep), 0, l);
+		memset(kmap_thread(*++pagep), 0, l);
 		set_page_dirty(*pagep);
-		kunmap(*pagep);
+		kunmap_thread(*pagep);
 	}
 }
 
@@ -142,24 +142,24 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
 
 	if (src == dst) {
 		l = min_t(int, len, PAGE_SIZE - src);
-		memcpy(kmap(*dst_page) + src, kmap(*src_page) + src, l);
-		kunmap(*src_page);
+		memcpy(kmap_thread(*dst_page) + src, kmap_thread(*src_page) + src, l);
+		kunmap_thread(*src_page);
 		set_page_dirty(*dst_page);
-		kunmap(*dst_page);
+		kunmap_thread(*dst_page);
 
 		while ((len -= l) != 0) {
 			l = min_t(int, len, PAGE_SIZE);
-			memcpy(kmap(*++dst_page), kmap(*++src_page), l);
-			kunmap(*src_page);
+			memcpy(kmap_thread(*++dst_page), kmap_thread(*++src_page), l);
+			kunmap_thread(*src_page);
 			set_page_dirty(*dst_page);
-			kunmap(*dst_page);
+			kunmap_thread(*dst_page);
 		}
 	} else {
 		void *src_ptr, *dst_ptr;
 
 		do {
-			src_ptr = kmap(*src_page) + src;
-			dst_ptr = kmap(*dst_page) + dst;
+			src_ptr = kmap_thread(*src_page) + src;
+			dst_ptr = kmap_thread(*dst_page) + dst;
 			if (PAGE_SIZE - src < PAGE_SIZE - dst) {
 				l = PAGE_SIZE - src;
 				src = 0;
@@ -171,9 +171,9 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
 			}
 			l = min(len, l);
 			memcpy(dst_ptr, src_ptr, l);
-			kunmap(*src_page);
+			kunmap_thread(*src_page);
 			set_page_dirty(*dst_page);
-			kunmap(*dst_page);
+			kunmap_thread(*dst_page);
 			if (!dst)
 				dst_page++;
 			else
@@ -202,27 +202,27 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
 
 		if (src == dst) {
 			while (src < len) {
-				memmove(kmap(*dst_page), kmap(*src_page), src);
-				kunmap(*src_page);
+				memmove(kmap_thread(*dst_page), kmap_thread(*src_page), src);
+				kunmap_thread(*src_page);
 				set_page_dirty(*dst_page);
-				kunmap(*dst_page);
+				kunmap_thread(*dst_page);
 				len -= src;
 				src = PAGE_SIZE;
 				src_page--;
 				dst_page--;
 			}
 			src -= len;
-			memmove(kmap(*dst_page) + src,
-				kmap(*src_page) + src, len);
-			kunmap(*src_page);
+			memmove(kmap_thread(*dst_page) + src,
+				kmap_thread(*src_page) + src, len);
+			kunmap_thread(*src_page);
 			set_page_dirty(*dst_page);
-			kunmap(*dst_page);
+			kunmap_thread(*dst_page);
 		} else {
 			void *src_ptr, *dst_ptr;
 
 			do {
-				src_ptr = kmap(*src_page) + src;
-				dst_ptr = kmap(*dst_page) + dst;
+				src_ptr = kmap_thread(*src_page) + src;
+				dst_ptr = kmap_thread(*dst_page) + dst;
 				if (src < dst) {
 					l = src;
 					src = PAGE_SIZE;
@@ -234,9 +234,9 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
 				}
 				l = min(len, l);
 				memmove(dst_ptr - l, src_ptr - l, l);
-				kunmap(*src_page);
+				kunmap_thread(*src_page);
 				set_page_dirty(*dst_page);
-				kunmap(*dst_page);
+				kunmap_thread(*dst_page);
 				if (dst == PAGE_SIZE)
 					dst_page--;
 				else
@@ -251,26 +251,26 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
 
 		if (src == dst) {
 			l = min_t(int, len, PAGE_SIZE - src);
-			memmove(kmap(*dst_page) + src,
-				kmap(*src_page) + src, l);
-			kunmap(*src_page);
+			memmove(kmap_thread(*dst_page) + src,
+				kmap_thread(*src_page) + src, l);
+			kunmap_thread(*src_page);
 			set_page_dirty(*dst_page);
-			kunmap(*dst_page);
+			kunmap_thread(*dst_page);
 
 			while ((len -= l) != 0) {
 				l = min_t(int, len, PAGE_SIZE);
-				memmove(kmap(*++dst_page),
-					kmap(*++src_page), l);
-				kunmap(*src_page);
+				memmove(kmap_thread(*++dst_page),
+					kmap_thread(*++src_page), l);
+				kunmap_thread(*src_page);
 				set_page_dirty(*dst_page);
-				kunmap(*dst_page);
+				kunmap_thread(*dst_page);
 			}
 		} else {
 			void *src_ptr, *dst_ptr;
 
 			do {
-				src_ptr = kmap(*src_page) + src;
-				dst_ptr = kmap(*dst_page) + dst;
+				src_ptr = kmap_thread(*src_page) + src;
+				dst_ptr = kmap_thread(*dst_page) + dst;
 				if (PAGE_SIZE - src < PAGE_SIZE - dst) {
 					l = PAGE_SIZE - src;
 					src = 0;
@@ -283,9 +283,9 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
 				}
 				l = min(len, l);
 				memmove(dst_ptr, src_ptr, l);
-				kunmap(*src_page);
+				kunmap_thread(*src_page);
 				set_page_dirty(*dst_page);
-				kunmap(*dst_page);
+				kunmap_thread(*dst_page);
 				if (!dst)
 					dst_page++;
 				else
@@ -502,14 +502,14 @@ struct hfs_bnode *hfs_bnode_find(struct hfs_btree *tree, u32 num)
 	if (!test_bit(HFS_BNODE_NEW, &node->flags))
 		return node;
 
-	desc = (struct hfs_bnode_desc *)(kmap(node->page[0]) +
+	desc = (struct hfs_bnode_desc *)(kmap_thread(node->page[0]) +
 			node->page_offset);
 	node->prev = be32_to_cpu(desc->prev);
 	node->next = be32_to_cpu(desc->next);
 	node->num_recs = be16_to_cpu(desc->num_recs);
 	node->type = desc->type;
 	node->height = desc->height;
-	kunmap(node->page[0]);
+	kunmap_thread(node->page[0]);
 
 	switch (node->type) {
 	case HFS_NODE_HEADER:
@@ -593,14 +593,14 @@ struct hfs_bnode *hfs_bnode_create(struct hfs_btree *tree, u32 num)
 	}
 
 	pagep = node->page;
-	memset(kmap(*pagep) + node->page_offset, 0,
+	memset(kmap_thread(*pagep) + node->page_offset, 0,
 	       min_t(int, PAGE_SIZE, tree->node_size));
 	set_page_dirty(*pagep);
-	kunmap(*pagep);
+	kunmap_thread(*pagep);
 	for (i = 1; i < tree->pages_per_bnode; i++) {
-		memset(kmap(*++pagep), 0, PAGE_SIZE);
+		memset(kmap_thread(*++pagep), 0, PAGE_SIZE);
 		set_page_dirty(*pagep);
-		kunmap(*pagep);
+		kunmap_thread(*pagep);
 	}
 	clear_bit(HFS_BNODE_NEW, &node->flags);
 	wake_up(&node->lock_wq);
diff --git a/fs/hfsplus/btree.c b/fs/hfsplus/btree.c
index 66774f4cb4fd..74fcef3a1628 100644
--- a/fs/hfsplus/btree.c
+++ b/fs/hfsplus/btree.c
@@ -394,7 +394,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
 
 	off += node->page_offset;
 	pagep = node->page + (off >> PAGE_SHIFT);
-	data = kmap(*pagep);
+	data = kmap_thread(*pagep);
 	off &= ~PAGE_MASK;
 	idx = 0;
 
@@ -407,7 +407,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
 					idx += i;
 					data[off] |= m;
 					set_page_dirty(*pagep);
-					kunmap(*pagep);
+					kunmap_thread(*pagep);
 					tree->free_nodes--;
 					mark_inode_dirty(tree->inode);
 					hfs_bnode_put(node);
@@ -417,14 +417,14 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
 			}
 		}
 		if (++off >= PAGE_SIZE) {
-			kunmap(*pagep);
-			data = kmap(*++pagep);
+			kunmap_thread(*pagep);
+			data = kmap_thread(*++pagep);
 			off = 0;
 		}
 		idx += 8;
 		len--;
 	}
-	kunmap(*pagep);
+	kunmap_thread(*pagep);
 	nidx = node->next;
 	if (!nidx) {
 		hfs_dbg(BNODE_MOD, "create new bmap node\n");
@@ -440,7 +440,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
 		off = off16;
 		off += node->page_offset;
 		pagep = node->page + (off >> PAGE_SHIFT);
-		data = kmap(*pagep);
+		data = kmap_thread(*pagep);
 		off &= ~PAGE_MASK;
 	}
 }
@@ -490,7 +490,7 @@ void hfs_bmap_free(struct hfs_bnode *node)
 	}
 	off += node->page_offset + nidx / 8;
 	page = node->page[off >> PAGE_SHIFT];
-	data = kmap(page);
+	data = kmap_thread(page);
 	off &= ~PAGE_MASK;
 	m = 1 << (~nidx & 7);
 	byte = data[off];
@@ -498,13 +498,13 @@ void hfs_bmap_free(struct hfs_bnode *node)
 		pr_crit("trying to free free bnode "
 			"%u(%d)\n",
 			node->this, node->type);
-		kunmap(page);
+		kunmap_thread(page);
 		hfs_bnode_put(node);
 		return;
 	}
 	data[off] = byte & ~m;
 	set_page_dirty(page);
-	kunmap(page);
+	kunmap_thread(page);
 	hfs_bnode_put(node);
 	tree->free_nodes++;
 	mark_inode_dirty(tree->inode);
-- 
2.28.0.rc0.12.gb6a658bd00c9