From: ira.weiny@intel.com
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Andy Lutomirski, Peter Zijlstra
Cc: Ira Weiny, Chris Mason, Josef Bacik, David Sterba, x86@kernel.org,
	Dave Hansen, Dan Williams, Fenghua Yu, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org,
	kexec@lists.infradead.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-aio@kvack.org,
	io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org, cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com, linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-cachefs@redhat.com,
	samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 13/58] fs/btrfs: Utilize new kmap_thread()
Date: Fri, 9 Oct 2020 12:49:48 -0700
Message-Id: <20201009195033.3208459-14-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>

From: Ira Weiny

The kmap() calls in this FS are localized to a single thread. To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.
Cc: Chris Mason
Cc: Josef Bacik
Cc: David Sterba
Signed-off-by: Ira Weiny
---
 fs/btrfs/check-integrity.c |  4 ++--
 fs/btrfs/compression.c     |  4 ++--
 fs/btrfs/inode.c           | 16 ++++++++--------
 fs/btrfs/lzo.c             | 24 ++++++++++++------------
 fs/btrfs/raid56.c          | 34 +++++++++++++++++-----------------
 fs/btrfs/reflink.c         |  8 ++++----
 fs/btrfs/send.c            |  4 ++--
 fs/btrfs/zlib.c            | 32 ++++++++++++++++----------------
 fs/btrfs/zstd.c            | 20 ++++++++++----------
 9 files changed, 73 insertions(+), 73 deletions(-)

diff --git a/fs/btrfs/check-integrity.c b/fs/btrfs/check-integrity.c
index 81a8c87a5afb..9e5a02512ab5 100644
--- a/fs/btrfs/check-integrity.c
+++ b/fs/btrfs/check-integrity.c
@@ -2706,7 +2706,7 @@ static void __btrfsic_submit_bio(struct bio *bio)
 
 		bio_for_each_segment(bvec, bio, iter) {
 			BUG_ON(bvec.bv_len != PAGE_SIZE);
-			mapped_datav[i] = kmap(bvec.bv_page);
+			mapped_datav[i] = kmap_thread(bvec.bv_page);
 			i++;
 
 			if (dev_state->state->print_mask &
@@ -2720,7 +2720,7 @@ static void __btrfsic_submit_bio(struct bio *bio)
 					      bio, &bio_is_patched, bio->bi_opf);
 		bio_for_each_segment(bvec, bio, iter)
-			kunmap(bvec.bv_page);
+			kunmap_thread(bvec.bv_page);
 		kfree(mapped_datav);
 	} else if (NULL != dev_state && (bio->bi_opf & REQ_PREFLUSH)) {
 		if (dev_state->state->print_mask &
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 1ab56a734e70..5944fb36d68a 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -1626,7 +1626,7 @@ static void heuristic_collect_sample(struct inode *inode, u64 start, u64 end,
 	curr_sample_pos = 0;
 	while (index < index_end) {
 		page = find_get_page(inode->i_mapping, index);
-		in_data = kmap(page);
+		in_data = kmap_thread(page);
 		/* Handle case where the start is not aligned to PAGE_SIZE */
 		i = start % PAGE_SIZE;
 		while (i < PAGE_SIZE - SAMPLING_READ_SIZE) {
@@ -1639,7 +1639,7 @@ static void heuristic_collect_sample(struct inode *inode, u64 start, u64 end,
 			start += SAMPLING_INTERVAL;
 			curr_sample_pos += SAMPLING_READ_SIZE;
 		}
-		kunmap(page);
+		kunmap_thread(page);
 		put_page(page);
 
 		index++;
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 9570458aa847..9710a52c6c42 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -4603,7 +4603,7 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len,
 	if (offset != blocksize) {
 		if (!len)
 			len = blocksize - offset;
-		kaddr = kmap(page);
+		kaddr = kmap_thread(page);
 		if (front)
 			memset(kaddr + (block_start - page_offset(page)),
 			       0, offset);
@@ -4611,7 +4611,7 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len,
 			memset(kaddr + (block_start - page_offset(page)) + offset,
 			       0, len);
 		flush_dcache_page(page);
-		kunmap(page);
+		kunmap_thread(page);
 	}
 	ClearPageChecked(page);
 	set_page_dirty(page);
@@ -6509,9 +6509,9 @@ static noinline int uncompress_inline(struct btrfs_path *path,
 	 */
 
 	if (max_size + pg_offset < PAGE_SIZE) {
-		char *map = kmap(page);
+		char *map = kmap_thread(page);
 		memset(map + pg_offset + max_size, 0, PAGE_SIZE - max_size - pg_offset);
-		kunmap(page);
+		kunmap_thread(page);
 	}
 	kfree(tmp);
 	return ret;
@@ -6704,7 +6704,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode,
 			goto out;
 		}
 	} else {
-		map = kmap(page);
+		map = kmap_thread(page);
 		read_extent_buffer(leaf, map + pg_offset, ptr,
 				   copy_size);
 		if (pg_offset + copy_size < PAGE_SIZE) {
@@ -6712,7 +6712,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode,
 			       PAGE_SIZE - pg_offset - copy_size);
 		}
-		kunmap(page);
+		kunmap_thread(page);
 	}
 	flush_dcache_page(page);
 }
@@ -8326,10 +8326,10 @@ vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
 		zero_start = PAGE_SIZE;
 
 	if (zero_start != PAGE_SIZE) {
-		kaddr = kmap(page);
+		kaddr = kmap_thread(page);
 		memset(kaddr + zero_start, 0, PAGE_SIZE - zero_start);
 		flush_dcache_page(page);
-		kunmap(page);
+		kunmap_thread(page);
 	}
 	ClearPageChecked(page);
 	set_page_dirty(page);
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index aa9cd11f4b78..f29dcc9ec573 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -140,7 +140,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
 	*total_in = 0;
 
 	in_page = find_get_page(mapping, start >> PAGE_SHIFT);
-	data_in = kmap(in_page);
+	data_in = kmap_thread(in_page);
 
 	/*
 	 * store the size of all chunks of compressed data in
@@ -151,7 +151,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
 		ret = -ENOMEM;
 		goto out;
 	}
-	cpage_out = kmap(out_page);
+	cpage_out = kmap_thread(out_page);
 	out_offset = LZO_LEN;
 	tot_out = LZO_LEN;
 	pages[0] = out_page;
@@ -209,7 +209,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
 		if (out_len == 0 && tot_in >= len)
 			break;
 
-		kunmap(out_page);
+		kunmap_thread(out_page);
 		if (nr_pages == nr_dest_pages) {
 			out_page = NULL;
 			ret = -E2BIG;
@@ -221,7 +221,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
 			ret = -ENOMEM;
 			goto out;
 		}
-		cpage_out = kmap(out_page);
+		cpage_out = kmap_thread(out_page);
 		pages[nr_pages++] = out_page;
 
 		pg_bytes_left = PAGE_SIZE;
@@ -243,12 +243,12 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
 			break;
 
 		bytes_left = len - tot_in;
-		kunmap(in_page);
+		kunmap_thread(in_page);
 		put_page(in_page);
 
 		start += PAGE_SIZE;
 		in_page = find_get_page(mapping, start >> PAGE_SHIFT);
-		data_in = kmap(in_page);
+		data_in = kmap_thread(in_page);
 		in_len = min(bytes_left, PAGE_SIZE);
 	}
 
@@ -258,10 +258,10 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
 	}
 
 	/* store the size of all chunks of compressed data */
-	cpage_out = kmap(pages[0]);
+	cpage_out = kmap_thread(pages[0]);
 	write_compress_length(cpage_out, tot_out);
 
-	kunmap(pages[0]);
+	kunmap_thread(pages[0]);
 
 	ret = 0;
 	*total_out = tot_out;
@@ -269,10 +269,10 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
 out:
 	*out_pages = nr_pages;
 	if (out_page)
-		kunmap(out_page);
+		kunmap_thread(out_page);
 
 	if (in_page) {
-		kunmap(in_page);
+		kunmap_thread(in_page);
 		put_page(in_page);
 	}
 
@@ -305,7 +305,7 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 	u64 disk_start = cb->start;
 	struct bio *orig_bio = cb->orig_bio;
 
-	data_in = kmap(pages_in[0]);
+	data_in = kmap_thread(pages_in[0]);
 	tot_len = read_compress_length(data_in);
 	/*
 	 * Compressed data header check.
@@ -387,7 +387,7 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 			else
 				kunmap(pages_in[page_in_index]);
 
-			data_in = kmap(pages_in[++page_in_index]);
+			data_in = kmap_thread(pages_in[++page_in_index]);
 
 			in_page_bytes_left = PAGE_SIZE;
 			in_offset = 0;
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index 255490f42b5d..34e646e4548c 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -262,13 +262,13 @@ static void cache_rbio_pages(struct btrfs_raid_bio *rbio)
 		if (!rbio->bio_pages[i])
 			continue;
 
-		s = kmap(rbio->bio_pages[i]);
-		d = kmap(rbio->stripe_pages[i]);
+		s = kmap_thread(rbio->bio_pages[i]);
+		d = kmap_thread(rbio->stripe_pages[i]);
 
 		copy_page(d, s);
 
-		kunmap(rbio->bio_pages[i]);
-		kunmap(rbio->stripe_pages[i]);
+		kunmap_thread(rbio->bio_pages[i]);
+		kunmap_thread(rbio->stripe_pages[i]);
 		SetPageUptodate(rbio->stripe_pages[i]);
 	}
 	set_bit(RBIO_CACHE_READY_BIT, &rbio->flags);
@@ -1241,13 +1241,13 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
 		/* first collect one page from each data stripe */
 		for (stripe = 0; stripe < nr_data; stripe++) {
 			p = page_in_rbio(rbio, stripe, pagenr, 0);
-			pointers[stripe] = kmap(p);
+			pointers[stripe] = kmap_thread(p);
 		}
 
 		/* then add the parity stripe */
 		p = rbio_pstripe_page(rbio, pagenr);
 		SetPageUptodate(p);
-		pointers[stripe++] = kmap(p);
+		pointers[stripe++] = kmap_thread(p);
 
 		if (has_qstripe) {
 
@@ -1257,7 +1257,7 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
 			 */
 			p = rbio_qstripe_page(rbio, pagenr);
 			SetPageUptodate(p);
-			pointers[stripe++] = kmap(p);
+			pointers[stripe++] = kmap_thread(p);
 
 			raid6_call.gen_syndrome(rbio->real_stripes, PAGE_SIZE,
 						pointers);
@@ -1269,7 +1269,7 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
 
 
 		for (stripe = 0; stripe < rbio->real_stripes; stripe++)
-			kunmap(page_in_rbio(rbio, stripe, pagenr, 0));
+			kunmap_thread(page_in_rbio(rbio, stripe, pagenr, 0));
 	}
 
 	/*
@@ -1835,7 +1835,7 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio)
 			} else {
 				page = rbio_stripe_page(rbio, stripe, pagenr);
 			}
-			pointers[stripe] = kmap(page);
+			pointers[stripe] = kmap_thread(page);
 		}
 
 		/* all raid6 handling here */
@@ -1940,7 +1940,7 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio)
 			} else {
 				page = rbio_stripe_page(rbio, stripe, pagenr);
 			}
-			kunmap(page);
+			kunmap_thread(page);
 		}
 	}
 
@@ -2379,18 +2379,18 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
 		/* first collect one page from each data stripe */
 		for (stripe = 0; stripe < nr_data; stripe++) {
 			p = page_in_rbio(rbio, stripe, pagenr, 0);
-			pointers[stripe] = kmap(p);
+			pointers[stripe] = kmap_thread(p);
 		}
 
 		/* then add the parity stripe */
-		pointers[stripe++] = kmap(p_page);
+		pointers[stripe++] = kmap_thread(p_page);
 
 		if (has_qstripe) {
 			/*
 			 * raid6, add the qstripe and call the
 			 * library function to fill in our p/q
 			 */
-			pointers[stripe++] = kmap(q_page);
+			pointers[stripe++] = kmap_thread(q_page);
 
 			raid6_call.gen_syndrome(rbio->real_stripes, PAGE_SIZE,
 						pointers);
@@ -2402,17 +2402,17 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
 
 		/* Check scrubbing parity and repair it */
 		p = rbio_stripe_page(rbio, rbio->scrubp, pagenr);
-		parity = kmap(p);
+		parity = kmap_thread(p);
 		if (memcmp(parity, pointers[rbio->scrubp], PAGE_SIZE))
 			copy_page(parity, pointers[rbio->scrubp]);
 		else
 			/* Parity is right, needn't writeback */
 			bitmap_clear(rbio->dbitmap, pagenr, 1);
-		kunmap(p);
+		kunmap_thread(p);
 
 		for (stripe = 0; stripe < nr_data; stripe++)
-			kunmap(page_in_rbio(rbio, stripe, pagenr, 0));
-		kunmap(p_page);
+			kunmap_thread(page_in_rbio(rbio, stripe, pagenr, 0));
+		kunmap_thread(p_page);
 	}
 
 	__free_page(p_page);
diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
index 5cd02514cf4d..10e53d7eba8c 100644
--- a/fs/btrfs/reflink.c
+++ b/fs/btrfs/reflink.c
@@ -92,10 +92,10 @@ static int copy_inline_to_page(struct inode *inode,
 	if (comp_type == BTRFS_COMPRESS_NONE) {
 		char *map;
 
-		map = kmap(page);
+		map = kmap_thread(page);
 		memcpy(map, data_start, datal);
 		flush_dcache_page(page);
-		kunmap(page);
+		kunmap_thread(page);
 	} else {
 		ret = btrfs_decompress(comp_type, data_start, page, 0,
 				       inline_size, datal);
@@ -119,10 +119,10 @@ static int copy_inline_to_page(struct inode *inode,
 	if (datal < block_size) {
 		char *map;
 
-		map = kmap(page);
+		map = kmap_thread(page);
 		memset(map + datal, 0, block_size - datal);
 		flush_dcache_page(page);
-		kunmap(page);
+		kunmap_thread(page);
 	}
 
 	SetPageUptodate(page);
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index d9813a5b075a..06c383d3dc43 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -4863,9 +4863,9 @@ static ssize_t fill_read_buf(struct send_ctx *sctx, u64 offset, u32 len)
 		}
 	}
 
-	addr = kmap(page);
+	addr = kmap_thread(page);
 	memcpy(sctx->read_buf + ret, addr + pg_offset, cur_len);
-	kunmap(page);
+	kunmap_thread(page);
 	unlock_page(page);
 	put_page(page);
 	index++;
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index 05615a1099db..45b7a907bab3 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -126,7 +126,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 		ret = -ENOMEM;
 		goto out;
 	}
-	cpage_out = kmap(out_page);
+	cpage_out = kmap_thread(out_page);
 	pages[0] = out_page;
 	nr_pages = 1;
 
@@ -149,12 +149,12 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 
 			for (i = 0; i < in_buf_pages; i++) {
 				if (in_page) {
-					kunmap(in_page);
+					kunmap_thread(in_page);
 					put_page(in_page);
 				}
 				in_page = find_get_page(mapping, start >> PAGE_SHIFT);
-				data_in = kmap(in_page);
+				data_in = kmap_thread(in_page);
 				memcpy(workspace->buf + i * PAGE_SIZE,
 				       data_in, PAGE_SIZE);
 				start += PAGE_SIZE;
@@ -162,12 +162,12 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 			workspace->strm.next_in = workspace->buf;
 		} else {
 			if (in_page) {
-				kunmap(in_page);
+				kunmap_thread(in_page);
 				put_page(in_page);
 			}
 			in_page = find_get_page(mapping, start >> PAGE_SHIFT);
-			data_in = kmap(in_page);
+			data_in = kmap_thread(in_page);
 			start += PAGE_SIZE;
 			workspace->strm.next_in = data_in;
 		}
@@ -196,7 +196,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 		 * the stream end if required
 		 */
 		if (workspace->strm.avail_out == 0) {
-			kunmap(out_page);
+			kunmap_thread(out_page);
 			if (nr_pages == nr_dest_pages) {
 				out_page = NULL;
 				ret = -E2BIG;
@@ -207,7 +207,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 				ret = -ENOMEM;
 				goto out;
 			}
-			cpage_out = kmap(out_page);
+			cpage_out = kmap_thread(out_page);
 			pages[nr_pages] = out_page;
 			nr_pages++;
 			workspace->strm.avail_out = PAGE_SIZE;
@@ -234,7 +234,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 		goto out;
 	} else if (workspace->strm.avail_out == 0) {
 		/* get another page for the stream end */
-		kunmap(out_page);
+		kunmap_thread(out_page);
 		if (nr_pages == nr_dest_pages) {
 			out_page = NULL;
 			ret = -E2BIG;
@@ -245,7 +245,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 			ret = -ENOMEM;
 			goto out;
 		}
-		cpage_out = kmap(out_page);
+		cpage_out = kmap_thread(out_page);
 		pages[nr_pages] = out_page;
 		nr_pages++;
 		workspace->strm.avail_out = PAGE_SIZE;
@@ -265,10 +265,10 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 out:
 	*out_pages = nr_pages;
 	if (out_page)
-		kunmap(out_page);
+		kunmap_thread(out_page);
 
 	if (in_page) {
-		kunmap(in_page);
+		kunmap_thread(in_page);
 		put_page(in_page);
 	}
 	return ret;
@@ -289,7 +289,7 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 	u64 disk_start = cb->start;
 	struct bio *orig_bio = cb->orig_bio;
 
-	data_in = kmap(pages_in[page_in_index]);
+	data_in = kmap_thread(pages_in[page_in_index]);
 	workspace->strm.next_in = data_in;
 	workspace->strm.avail_in = min_t(size_t, srclen, PAGE_SIZE);
 	workspace->strm.total_in = 0;
@@ -311,7 +311,7 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 
 	if (Z_OK != zlib_inflateInit2(&workspace->strm, wbits)) {
 		pr_warn("BTRFS: inflateInit failed\n");
-		kunmap(pages_in[page_in_index]);
+		kunmap_thread(pages_in[page_in_index]);
 		return -EIO;
 	}
 	while (workspace->strm.total_in < srclen) {
@@ -339,13 +339,13 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 
 		if (workspace->strm.avail_in == 0) {
 			unsigned long tmp;
-			kunmap(pages_in[page_in_index]);
+			kunmap_thread(pages_in[page_in_index]);
 			page_in_index++;
 			if (page_in_index >= total_pages_in) {
 				data_in = NULL;
 				break;
 			}
-			data_in = kmap(pages_in[page_in_index]);
+			data_in = kmap_thread(pages_in[page_in_index]);
 			workspace->strm.next_in = data_in;
 			tmp = srclen - workspace->strm.total_in;
 			workspace->strm.avail_in = min(tmp,
@@ -359,7 +359,7 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 done:
 	zlib_inflateEnd(&workspace->strm);
 	if (data_in)
-		kunmap(pages_in[page_in_index]);
+		kunmap_thread(pages_in[page_in_index]);
 	if (!ret)
 		zero_fill_bio(orig_bio);
 	return ret;
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index 9a4871636c6c..48e03f6dcef7 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -399,7 +399,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping,
 
 	/* map in the first page of input data */
 	in_page = find_get_page(mapping, start >> PAGE_SHIFT);
-	workspace->in_buf.src = kmap(in_page);
+	workspace->in_buf.src = kmap_thread(in_page);
 	workspace->in_buf.pos = 0;
 	workspace->in_buf.size = min_t(size_t, len, PAGE_SIZE);
 
@@ -411,7 +411,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping,
 		goto out;
 	}
 	pages[nr_pages++] = out_page;
-	workspace->out_buf.dst = kmap(out_page);
+	workspace->out_buf.dst = kmap_thread(out_page);
 	workspace->out_buf.pos = 0;
 	workspace->out_buf.size = min_t(size_t, max_out, PAGE_SIZE);
 
@@ -446,7 +446,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping,
 		if (workspace->out_buf.pos == workspace->out_buf.size) {
 			tot_out += PAGE_SIZE;
 			max_out -= PAGE_SIZE;
-			kunmap(out_page);
+			kunmap_thread(out_page);
 			if (nr_pages == nr_dest_pages) {
 				out_page = NULL;
 				ret = -E2BIG;
@@ -458,7 +458,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping,
 				goto out;
 			}
 			pages[nr_pages++] = out_page;
-			workspace->out_buf.dst = kmap(out_page);
+			workspace->out_buf.dst = kmap_thread(out_page);
 			workspace->out_buf.pos = 0;
 			workspace->out_buf.size = min_t(size_t, max_out, PAGE_SIZE);
@@ -479,7 +479,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping,
 		start += PAGE_SIZE;
 		len -= PAGE_SIZE;
 		in_page = find_get_page(mapping, start >> PAGE_SHIFT);
-		workspace->in_buf.src = kmap(in_page);
+		workspace->in_buf.src = kmap_thread(in_page);
 		workspace->in_buf.pos = 0;
 		workspace->in_buf.size = min_t(size_t, len, PAGE_SIZE);
 	}
@@ -518,7 +518,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping,
 			goto out;
 		}
 		pages[nr_pages++] = out_page;
-		workspace->out_buf.dst = kmap(out_page);
+		workspace->out_buf.dst = kmap_thread(out_page);
 		workspace->out_buf.pos = 0;
 		workspace->out_buf.size = min_t(size_t, max_out, PAGE_SIZE);
 	}
@@ -565,7 +565,7 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 		goto done;
 	}
 
-	workspace->in_buf.src = kmap(pages_in[page_in_index]);
+	workspace->in_buf.src = kmap_thread(pages_in[page_in_index]);
 	workspace->in_buf.pos = 0;
 	workspace->in_buf.size = min_t(size_t, srclen, PAGE_SIZE);
 
@@ -601,14 +601,14 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 			break;
 
 		if (workspace->in_buf.pos == workspace->in_buf.size) {
-			kunmap(pages_in[page_in_index++]);
+			kunmap_thread(pages_in[page_in_index++]);
 			if (page_in_index >= total_pages_in) {
 				workspace->in_buf.src = NULL;
 				ret = -EIO;
 				goto done;
 			}
 			srclen -= PAGE_SIZE;
-			workspace->in_buf.src = kmap(pages_in[page_in_index]);
+			workspace->in_buf.src = kmap_thread(pages_in[page_in_index]);
 			workspace->in_buf.pos = 0;
 			workspace->in_buf.size = min_t(size_t, srclen, PAGE_SIZE);
 		}
@@ -617,7 +617,7 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 		zero_fill_bio(orig_bio);
 done:
 	if (workspace->in_buf.src)
-		kunmap(pages_in[page_in_index]);
+		kunmap_thread(pages_in[page_in_index]);
 	return ret;
 }
 
-- 
2.28.0.rc0.12.gb6a658bd00c9