From: Song Liu <songliubraving@fb.com>
To: kernel test robot <lkp@intel.com>
Cc: Xiao Ni <xni@redhat.com>,
"kbuild-all@lists.01.org" <kbuild-all@lists.01.org>,
Linux Memory Management List <linux-mm@kvack.org>
Subject: Re: [linux-next:master 6117/8451] drivers/md/raid10.c:1707:39: warning: Uninitialized variable: first_r10bio [uninitvar]
Date: Wed, 31 Mar 2021 19:48:44 +0000 [thread overview]
Message-ID: <54844352-740A-42CD-8B51-EFB2260F9FB2@fb.com> (raw)
In-Reply-To: <202103310918.z9GIqcIY-lkp@intel.com>
> On Mar 30, 2021, at 6:59 PM, kernel test robot <lkp@intel.com> wrote:
>
> tree: https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
> head: 4143e05b7b171902f4938614c2a68821e1af46bc
> commit: 254c271da0712ea8914f187588e0f81f7678ee2f [6117/8451] md/raid10: improve discard request for far layout
> compiler: aarch64-linux-gcc (GCC) 9.3.0
>
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot <lkp@intel.com>
>
>
> cppcheck warnings: (new ones prefixed by >>)
> In file included from drivers/md/raid10.c:
>>> drivers/md/raid10.c:1707:39: warning: Uninitialized variable: first_r10bio [uninitvar]
> r10_bio->master_bio = (struct bio *)first_r10bio;
> ^
>
> vim +1707 drivers/md/raid10.c
>
> 1573
> 1574 /*
> 1575 * There are some limitations to handle discard bio
> 1576 * 1st, the discard size is bigger than stripe_size*2.
> 1577 * 2nd, if the discard bio spans reshape progress, we use the old way to
> 1578 * handle discard bio
> 1579 */
> 1580 static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
> 1581 {
> 1582 struct r10conf *conf = mddev->private;
> 1583 struct geom *geo = &conf->geo;
> 1584 int far_copies = geo->far_copies;
> 1585 bool first_copy = true;
> 1586 struct r10bio *r10_bio, *first_r10bio;
> 1587 struct bio *split;
> 1588 int disk;
> 1589 sector_t chunk;
> 1590 unsigned int stripe_size;
> 1591 unsigned int stripe_data_disks;
> 1592 sector_t split_size;
> 1593 sector_t bio_start, bio_end;
> 1594 sector_t first_stripe_index, last_stripe_index;
> 1595 sector_t start_disk_offset;
> 1596 unsigned int start_disk_index;
> 1597 sector_t end_disk_offset;
> 1598 unsigned int end_disk_index;
> 1599 unsigned int remainder;
> 1600
> 1601 if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
> 1602 return -EAGAIN;
> 1603
> 1604 wait_barrier(conf);
> 1605
> 1606 /*
> 1607 * Check reshape again to avoid reshape happens after checking
> 1608 * MD_RECOVERY_RESHAPE and before wait_barrier
> 1609 */
> 1610 if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
> 1611 goto out;
> 1612
> 1613 if (geo->near_copies)
> 1614 stripe_data_disks = geo->raid_disks / geo->near_copies +
> 1615 geo->raid_disks % geo->near_copies;
> 1616 else
> 1617 stripe_data_disks = geo->raid_disks;
> 1618
> 1619 stripe_size = stripe_data_disks << geo->chunk_shift;
> 1620
> 1621 bio_start = bio->bi_iter.bi_sector;
> 1622 bio_end = bio_end_sector(bio);
> 1623
> 1624 /*
> 1625 * Maybe one discard bio is smaller than strip size or across one
> 1626 * stripe and discard region is larger than one stripe size. For far
> 1627 * offset layout, if the discard region is not aligned with stripe
> 1628 * size, there is hole when we submit discard bio to member disk.
> 1629 * For simplicity, we only handle discard bio which discard region
> 1630 * is bigger than stripe_size * 2
> 1631 */
> 1632 if (bio_sectors(bio) < stripe_size*2)
> 1633 goto out;
> 1634
> 1635 /*
> 1636 * Keep bio aligned with strip size.
> 1637 */
> 1638 div_u64_rem(bio_start, stripe_size, &remainder);
> 1639 if (remainder) {
> 1640 split_size = stripe_size - remainder;
> 1641 split = bio_split(bio, split_size, GFP_NOIO, &conf->bio_split);
> 1642 bio_chain(split, bio);
> 1643 allow_barrier(conf);
> 1644 /* Resend the first split part */
> 1645 submit_bio_noacct(split);
> 1646 wait_barrier(conf);
> 1647 }
> 1648 div_u64_rem(bio_end, stripe_size, &remainder);
> 1649 if (remainder) {
> 1650 split_size = bio_sectors(bio) - remainder;
> 1651 split = bio_split(bio, split_size, GFP_NOIO, &conf->bio_split);
> 1652 bio_chain(split, bio);
> 1653 allow_barrier(conf);
> 1654 /* Resend the second split part */
> 1655 submit_bio_noacct(bio);
> 1656 bio = split;
> 1657 wait_barrier(conf);
> 1658 }
> 1659
> 1660 bio_start = bio->bi_iter.bi_sector;
> 1661 bio_end = bio_end_sector(bio);
> 1662
> 1663 /*
> 1664 * Raid10 uses chunk as the unit to store data. It's similar like raid0.
> 1665 * One stripe contains the chunks from all member disk (one chunk from
> 1666 * one disk at the same HBA address). For layout detail, see 'man md 4'
> 1667 */
> 1668 chunk = bio_start >> geo->chunk_shift;
> 1669 chunk *= geo->near_copies;
> 1670 first_stripe_index = chunk;
> 1671 start_disk_index = sector_div(first_stripe_index, geo->raid_disks);
> 1672 if (geo->far_offset)
> 1673 first_stripe_index *= geo->far_copies;
> 1674 start_disk_offset = (bio_start & geo->chunk_mask) +
> 1675 (first_stripe_index << geo->chunk_shift);
> 1676
> 1677 chunk = bio_end >> geo->chunk_shift;
> 1678 chunk *= geo->near_copies;
> 1679 last_stripe_index = chunk;
> 1680 end_disk_index = sector_div(last_stripe_index, geo->raid_disks);
> 1681 if (geo->far_offset)
> 1682 last_stripe_index *= geo->far_copies;
> 1683 end_disk_offset = (bio_end & geo->chunk_mask) +
> 1684 (last_stripe_index << geo->chunk_shift);
> 1685
> 1686 retry_discard:
> 1687 r10_bio = mempool_alloc(&conf->r10bio_pool, GFP_NOIO);
> 1688 r10_bio->mddev = mddev;
> 1689 r10_bio->state = 0;
> 1690 r10_bio->sectors = 0;
> 1691 memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * geo->raid_disks);
> 1692 wait_blocked_dev(mddev, r10_bio);
> 1693
> 1694 /*
> 1695 * For far layout it needs more than one r10bio to cover all regions.
> 1696 * Inspired by raid10_sync_request, we can use the first r10bio->master_bio
> 1697 * to record the discard bio. Other r10bio->master_bio record the first
> 1698 * r10bio. The first r10bio is only released after all other r10bios finish.
> 1699 * The discard bio returns only after the first r10bio finishes
> 1700 */
> 1701 if (first_copy) {
> 1702 r10_bio->master_bio = bio;
> 1703 set_bit(R10BIO_Discard, &r10_bio->state);
> 1704 first_copy = false;
> 1705 first_r10bio = r10_bio;
> 1706 } else
>> 1707 r10_bio->master_bio = (struct bio *)first_r10bio;
This is a false alert. The function starts with first_copy = true, so the
first pass through the retry_discard loop always takes the if branch, which
sets first_r10bio before clearing first_copy. By the time we hit the else
clause on a later pass, first_r10bio is already initialized.
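For reference, here is a minimal, self-contained sketch of the same pattern
(hypothetical names, and a plain for loop standing in for the goto
retry_discard path): the boolean guard guarantees the pointer is assigned on
the first pass, so the else branch can never read it uninitialized, even
though a checker that does not connect the guard with the assignment may
still warn.

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
	int items[3] = { 10, 20, 30 };
	bool first_copy = true;	/* mirrors first_copy in raid10_handle_discard */
	int *first;		/* deliberately uninitialized, like first_r10bio */

	for (int i = 0; i < 3; i++) {
		if (first_copy) {
			/* always taken on the first pass */
			first = &items[i];
			first_copy = false;
		} else {
			/* only reachable after 'first' was set above */
			printf("first=%d current=%d\n", *first, items[i]);
		}
	}
	return 0;
}

If we ever wanted to silence the checker without changing the logic,
initializing first_r10bio to NULL at its declaration would do it, but the
code is correct as is.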
Thanks,
Song
[...]