Date: Thu, 13 May 2021 22:48:09 +0800
From: Oliver Sang <oliver.sang@intel.com>
To: Theodore Ts'o
Cc: Harshad Shirwadkar, LKML, Linux Memory Management List, lkp@lists.01.org, lkp@intel.com, dm-devel@redhat.com
Subject: Re: [ext4] 21175ca434: mdadm-selftests.enchmarks/mdadm-selftests/tests/01r1fail.fail
Message-ID: <20210513144809.GF20142@xsang-OptiPlex-9020>
References: <20210427081539.GF32408@xsang-OptiPlex-9020>
In-Reply-To: (Theodore Ts'o's message)
Hi Theodore,

On Wed, Apr 28, 2021 at 10:03:16AM -0400, Theodore Ts'o wrote:
> (Hmm, why did you cc linux-mm on this report?  I would have thought
> dm-devel would have made more sense?)
>
> On Tue, Apr 27, 2021 at 04:15:39PM +0800, kernel test robot wrote:
> >
> > FYI, we noticed the following commit (built with gcc-9):
> >
> > commit: 21175ca434c5d49509b73cf473618b01b0b85437 ("ext4: make prefetch_block_bitmaps default")
> > https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
> >
> > in testcase: mdadm-selftests
> > version: mdadm-selftests-x86_64-5d518de-1_20201008
> > with following parameters:
> >
> > 	disk: 1HDD
> > 	test_prefix: 01r1
> > 	ucode: 0x21
>
> So this failure makes no sense to me.  Looking at the kmesg failure
> logs, it's failing in the md layer:

Just FYI, we reran the tests for both the parent and this commit, up to
56 times each. The failure is persistent on this commit, though it does
not reproduce on every run; the test never failed on the parent.

f68f4063855903fd 21175ca434c5d49509b73cf4736
---------------- ---------------------------
       fail:runs  %reproduction    fail:runs
           |             |              |
             :56           61%          34:56   mdadm-selftests.enchmarks/mdadm-selftests/tests/01r1fail.fail

> kern :info : [   99.775514] md/raid1:md0: not clean -- starting background reconstruction
> kern :info : [   99.783372] md/raid1:md0: active with 3 out of 4 mirrors
> kern :info : [   99.789735] md0: detected capacity change from 0 to 37888
> kern :info : [   99.796216] md: resync of RAID array md0
> kern :crit : [   99.900450] md/raid1:md0: Disk failure on loop2, disabling device.
> 	md/raid1:md0: Operation continuing on 2 devices.
> kern :crit : [   99.918281] md/raid1:md0: Disk failure on loop1, disabling device.
> 	md/raid1:md0: Operation continuing on 1 devices.
> kern :info : [  100.835833] md: md0: resync interrupted.
> kern :info : [  101.852898] md: resync of RAID array md0
> kern :info : [  101.858347] md: md0: resync done.
> user :notice: [  102.109684] /lkp/benchmarks/mdadm-selftests/tests/01r1fail... FAILED - see /var/tmp/01r1fail.log and /var/tmp/fail01r1fail.log for details
>
> The referenced commit just turns on block bitmap prefetching in ext4.
> This should not cause md to fail; if it does, that's an md bug, not an
> ext4 bug.  There should not be anything the file system is doing
> that would cause the kernel to think there is a disk failure.
>
> By the way, the reproduction instructions aren't working currently:
>
> > To reproduce:
> >
> > 	git clone https://github.com/intel/lkp-tests.git
> > 	cd lkp-tests
> > 	bin/lkp install job.yaml  # job file is attached in this email
>
> This fails because lkp is trying to apply a patch which does not apply
> with the current version of the md tools.
>
> > 	bin/lkp split-job --compatible job.yaml
> > 	bin/lkp run compatible-job.yaml
>
> And current versions of lkp don't generate a compatible-job.yaml file
> when you run "lkp split-job --compatible"; instead it generates a new
> yaml file with a set of random characters to form a unique name.
> (What in Multics parlance would be called a "shriek name"[1] :-)
>
> Since I was having trouble running the reproduction, could you send
> the /var/tmp/*fail.log files so we could have a bit more insight into
> what is going on?
>
> Thanks!
>
> 					- Ted
>
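P.S. For reference, the %reproduction column in the comparison table
above is just the failure count over the total reruns, rounded to the
nearest percent; a minimal sketch of the calculation (the 34 and 56 are
taken from the table, the awk invocation is illustrative, not how the
robot actually computes it):

```shell
# %reproduction = failed runs / total runs, as a rounded percentage.
# On commit 21175ca434, 34 of the 56 reruns failed (from the table above).
fail=34
runs=56
awk -v f="$fail" -v r="$runs" 'BEGIN { printf "%.0f%%\n", 100 * f / r }'
```

34/56 is about 60.7%, which rounds to the 61% shown in the table.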