From: Christoph Hellwig
To: Jens Axboe
Cc: Song Liu, Hans de Goede, Richard Weinberger, Minchan Kim,
	linux-mtd@lists.infradead.org, dm-devel@redhat.com,
	linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	drbd-dev@lists.linbit.com, linux-raid@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, cgroups@vger.kernel.org
Subject: [PATCH 05/14] md: update the optimal I/O size on reshape
Date: Fri, 24 Jul 2020 09:33:04 +0200
Message-Id: <20200724073313.138789-6-hch@lst.de>
In-Reply-To: <20200724073313.138789-1-hch@lst.de>
References: <20200724073313.138789-1-hch@lst.de>

The raid5 and raid10 drivers currently update the read-ahead size, but
not the optimal I/O size on reshape.  To prepare for deriving the
read-ahead size from the optimal I/O size, make sure it is updated on
reshape as well.

Signed-off-by: Christoph Hellwig
---
 drivers/md/raid10.c | 22 ++++++++++++++--------
 drivers/md/raid5.c  | 10 ++++++++--
 2 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index b1d0c9d4ef7757..9f88ff9bdee437 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3695,10 +3695,20 @@ static struct r10conf *setup_conf(struct mddev *mddev)
 	return ERR_PTR(err);
 }
 
+static void raid10_set_io_opt(struct r10conf *conf)
+{
+	int raid_disks = conf->geo.raid_disks;
+
+	if (!(conf->geo.raid_disks % conf->geo.near_copies))
+		raid_disks /= conf->geo.near_copies;
+	blk_queue_io_opt(conf->mddev->queue, (conf->mddev->chunk_sectors << 9) *
+			 raid_disks);
+}
+
 static int raid10_run(struct mddev *mddev)
 {
 	struct r10conf *conf;
-	int i, disk_idx, chunk_size;
+	int i, disk_idx;
 	struct raid10_info *disk;
 	struct md_rdev *rdev;
 	sector_t size;
@@ -3734,18 +3744,13 @@ static int raid10_run(struct mddev *mddev)
 	mddev->thread = conf->thread;
 	conf->thread = NULL;
 
-	chunk_size = mddev->chunk_sectors << 9;
 	if (mddev->queue) {
 		blk_queue_max_discard_sectors(mddev->queue,
 					      mddev->chunk_sectors);
 		blk_queue_max_write_same_sectors(mddev->queue, 0);
 		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
-		blk_queue_io_min(mddev->queue, chunk_size);
-		if (conf->geo.raid_disks % conf->geo.near_copies)
-			blk_queue_io_opt(mddev->queue, chunk_size * conf->geo.raid_disks);
-		else
-			blk_queue_io_opt(mddev->queue, chunk_size *
-					 (conf->geo.raid_disks / conf->geo.near_copies));
+		blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
+		raid10_set_io_opt(conf);
 	}
 
 	rdev_for_each(rdev, mddev) {
@@ -4719,6 +4724,7 @@ static void end_reshape(struct r10conf *conf)
 		stripe /= conf->geo.near_copies;
 		if (conf->mddev->queue->backing_dev_info->ra_pages < 2 * stripe)
 			conf->mddev->queue->backing_dev_info->ra_pages = 2 * stripe;
+		raid10_set_io_opt(conf);
 	}
 	conf->fullsync = 0;
 }
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index d7780b1dd0c528..68e41ce3ca75cc 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7123,6 +7123,12 @@ static int only_parity(int raid_disk, int algo, int raid_disks, int max_degraded
 	return 0;
 }
 
+static void raid5_set_io_opt(struct r5conf *conf)
+{
+	blk_queue_io_opt(conf->mddev->queue, (conf->chunk_sectors << 9) *
+			 (conf->raid_disks - conf->max_degraded));
+}
+
 static int raid5_run(struct mddev *mddev)
 {
 	struct r5conf *conf;
@@ -7412,8 +7418,7 @@ static int raid5_run(struct mddev *mddev)
 
 		chunk_size = mddev->chunk_sectors << 9;
 		blk_queue_io_min(mddev->queue, chunk_size);
-		blk_queue_io_opt(mddev->queue, chunk_size *
-				 (conf->raid_disks - conf->max_degraded));
+		raid5_set_io_opt(conf);
 		mddev->queue->limits.raid_partial_stripes_expensive = 1;
 		/*
 		 * We can only discard a whole stripe. It doesn't make sense to
@@ -8006,6 +8011,7 @@ static void end_reshape(struct r5conf *conf)
 						   / PAGE_SIZE);
 			if (conf->mddev->queue->backing_dev_info->ra_pages < 2 * stripe)
 				conf->mddev->queue->backing_dev_info->ra_pages = 2 * stripe;
+			raid5_set_io_opt(conf);
 		}
 	}
 }
-- 
2.27.0
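
For context, both new helpers compute the optimal I/O size the same way:
the chunk size in bytes times the number of data-bearing disks (raid10
divides by near_copies when it evenly divides the disk count, raid5
subtracts the parity disks).  Below is a minimal user-space sketch of that
arithmetic; the example geometries are hypothetical and not taken from the
patch.

#include <stdio.h>

/*
 * Sketch of the io_opt arithmetic used by raid10_set_io_opt() and
 * raid5_set_io_opt() above.  chunk_sectors is in 512-byte sectors,
 * hence the "<< 9" to convert to bytes.
 */
static unsigned int raid10_io_opt(unsigned int chunk_sectors,
				  unsigned int raid_disks,
				  unsigned int near_copies)
{
	/* Only fold in near_copies when it evenly divides the disk count. */
	if (!(raid_disks % near_copies))
		raid_disks /= near_copies;
	return (chunk_sectors << 9) * raid_disks;
}

static unsigned int raid5_io_opt(unsigned int chunk_sectors,
				 unsigned int raid_disks,
				 unsigned int max_degraded)
{
	/* Parity disks do not carry payload data. */
	return (chunk_sectors << 9) * (raid_disks - max_degraded);
}

int main(void)
{
	/* Hypothetical: 512 KiB chunks (1024 sectors), 4 disks, 2 near copies. */
	printf("raid10 io_opt: %u bytes\n", raid10_io_opt(1024, 4, 2));
	/* Hypothetical: 512 KiB chunks, 4-disk RAID5 (one parity disk). */
	printf("raid5  io_opt: %u bytes\n", raid5_io_opt(1024, 4, 1));
	return 0;
}

Once a reshape completes, the value programmed by these helpers is what the
block layer exports in /sys/block/<md device>/queue/optimal_io_size.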