Subject: Re: [PATCH V14 00/18] block: support multi-page bvec
From: Sagi Grimberg
Date: Mon, 21 Jan 2019 01:43:21 -0800
Message-ID: <61dfaa1e-e7bf-75f1-410b-ed32f97d0782@grimberg.me>
In-Reply-To: <20190121081805.32727-1-ming.lei@redhat.com>
To: Ming Lei, Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Theodore Ts'o, Omar Sandoval, Dave Chinner, Kent Overstreet, Mike Snitzer, dm-devel@redhat.com, Alexander Viro, linux-fsdevel@vger.kernel.org, linux-raid@vger.kernel.org, David Sterba, linux-btrfs@vger.kernel.org, Darrick J. Wong, linux-xfs@vger.kernel.org, Gao Xiang, Christoph Hellwig, linux-ext4@vger.kernel.org, Coly Li, linux-bcache@vger.kernel.org, Boaz Harrosh, Bob Peterson, cluster-devel@redhat.com

> V14:
> - drop the patch (patch 4 in V13) for renaming bvec helpers, as
>   suggested by Jens
> - use mp_bvec_* as the multi-page bvec helper name
> - fix one build issue, caused by a missing conversion of
>   bio_for_each_segment_all in fs/gfs2
> - fix one 32-bit arch specific issue caused by segment boundary mask
>   overflow

Hey Ming,

So is nvme-tcp also affected here? The only point where I can see
nvme-tcp being affected is when it initializes a bvec iter using
bio_segments(); everywhere else we use iterators, which should work
transparently.

I see that loop was converted; does that mean nvme-tcp needs to call
something like the following?
--
bio_for_each_mp_bvec(bv, bio, iter)
	nr_bvecs++;
--
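
For reference, the shape I have in mind is a rough, untested sketch
against the V14 mp_bvec helper names; the wrapper function name
nvme_tcp_count_bio_bvecs() is made up here, not something from the
series:
--
#include <linux/bio.h>

/*
 * Hypothetical sketch: count the (possibly multi-page) bvecs of a bio
 * the way the loop conversion does, rather than using bio_segments(),
 * which counts single-page segments. Untested; helper name assumes
 * the V14 series.
 */
static unsigned int nvme_tcp_count_bio_bvecs(struct bio *bio)
{
	struct bio_vec bv;
	struct bvec_iter iter;
	unsigned int nr_bvecs = 0;

	/* Each iteration yields one multi-page bvec. */
	bio_for_each_mp_bvec(bv, bio, iter)
		nr_bvecs++;

	return nr_bvecs;
}
--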