From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [kbuild-all] Re: [linux-next:master 12593/13311] mm/kfence/core.c:250:13: sparse: sparse: context imbalance in 'kfence_guarded_alloc' - wrong count at exit
To: Marco Elver
Cc: Alexander Potapenko, kbuild-all@lists.01.org, Linux Memory Management List, Dmitry Vyukov, Jann Horn, Andrew Morton
References: <20201217013925.GQ67148@shao2-debian>
From: "Chen, Rong A"
Date: Fri, 18 Dec 2020 17:59:11 +0800

On 12/17/2020 1:58 PM, Marco Elver wrote:
> On Thu, 17 Dec 2020 at 02:40, kernel test robot wrote:
>>
>> tree:   https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
>> head:   9317f948b0b188b8d2fded75957e6d42c460df1b
>> commit: e21d96503adda2ccb571d577ad32929383c710ea [12593/13311] x86, kfence: enable KFENCE for x86
>> config: x86_64-randconfig-s022-20201216 (attached as .config)
>> compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
>> reproduce:
>>         # apt-get install sparse
>>         # sparse version: v0.6.3-184-g1b896707-dirty
>>         # https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=e21d96503adda2ccb571d577ad32929383c710ea
>>         git remote add linux-next https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
>>         git fetch --no-tags linux-next master
>>         git checkout e21d96503adda2ccb571d577ad32929383c710ea
>>         # save the attached .config to linux build tree
>>         make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=x86_64
>>
>> If you fix the issue, kindly add following tag as appropriate
>> Reported-by: kernel test robot
>>
>> "sparse warnings: (new ones prefixed by >>)"
>>>> mm/kfence/core.c:250:13: sparse: sparse: context imbalance in 'kfence_guarded_alloc' - wrong count at exit
>>>> mm/kfence/core.c:825:9: sparse: sparse: context imbalance in 'kfence_handle_page_fault' - different lock contexts for basic block
>
> This is a false positive, and sparse can't seem to follow locking done
> here. This code has been tested extensively with lockdep.

Hi Marco,

Sorry for the inconvenience. I was just wondering whether there is any
chance to use the macros for lock checking:
https://www.kernel.org/doc/html/latest/dev-tools/sparse.html#using-sparse-for-lock-checking
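
For illustration, this is roughly what those annotations look like on
hypothetical helpers (the names below are invented for this example and
this is not meant as a kfence patch, only to show the mechanism from
that document):

        /* Illustration only: hypothetical helpers around a demo lock. */
        #include <linux/spinlock.h>

        static DEFINE_RAW_SPINLOCK(demo_lock);

        /* Caller must already hold demo_lock when calling this. */
        static void demo_update(void) __must_hold(&demo_lock)
        {
                /* ... touch data protected by demo_lock ... */
        }

        /* Returns with demo_lock held; the caller releases it later. */
        static void demo_acquire(void) __acquires(&demo_lock)
        {
                raw_spin_lock(&demo_lock);
        }

        /* Expects demo_lock held on entry and releases it. */
        static void demo_release(void) __releases(&demo_lock)
        {
                raw_spin_unlock(&demo_lock);
        }

As far as I understand, the trylock in kfence_guarded_alloc() is the
kind of conditional acquisition sparse has trouble following; the
kernel's __cond_lock() wrapper is the usual way to describe it, so
sparse knows the lock is held only when the call returns non-zero,
e.g.:

        /* Hypothetical out-of-line helper: acquires demo_lock on success. */
        int __demo_trylock(void);
        #define demo_trylock()  __cond_lock(&demo_lock, __demo_trylock())

        /* A balanced caller then looks like: */
        /*
         *      if (demo_trylock()) {
         *              demo_update();
         *              demo_release();
         *      }
         */

Of course, if the warning is simply a false positive here, it may not
be worth annotating.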

Best Regards,
Rong Chen

>
>> vim +/kfence_guarded_alloc +250 mm/kfence/core.c
>>
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  249
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10 @250  static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t gfp)
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  251  {
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  252          struct kfence_metadata *meta = NULL;
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  253          unsigned long flags;
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  254          struct page *page;
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  255          void *addr;
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  256
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  257          /* Try to obtain a free object. */
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  258          raw_spin_lock_irqsave(&kfence_freelist_lock, flags);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  259          if (!list_empty(&kfence_freelist)) {
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  260                  meta = list_entry(kfence_freelist.next, struct kfence_metadata, list);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  261                  list_del_init(&meta->list);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  262          }
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  263          raw_spin_unlock_irqrestore(&kfence_freelist_lock, flags);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  264          if (!meta)
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  265                  return NULL;
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  266
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  267          if (unlikely(!raw_spin_trylock_irqsave(&meta->lock, flags))) {
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  268                  /*
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  269                   * This is extremely unlikely -- we are reporting on a
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  270                   * use-after-free, which locked meta->lock, and the reporting
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  271                   * code via printk calls kmalloc() which ends up in
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  272                   * kfence_alloc() and tries to grab the same object that we're
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  273                   * reporting on. While it has never been observed, lockdep does
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  274                   * report that there is a possibility of deadlock. Fix it by
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  275                   * using trylock and bailing out gracefully.
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  276                   */
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  277                  raw_spin_lock_irqsave(&kfence_freelist_lock, flags);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  278                  /* Put the object back on the freelist. */
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  279                  list_add_tail(&meta->list, &kfence_freelist);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  280                  raw_spin_unlock_irqrestore(&kfence_freelist_lock, flags);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  281
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  282                  return NULL;
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  283          }
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  284
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  285          meta->addr = metadata_to_pageaddr(meta);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  286          /* Unprotect if we're reusing this page. */
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  287          if (meta->state == KFENCE_OBJECT_FREED)
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  288                  kfence_unprotect(meta->addr);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  289
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  290          /*
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  291           * Note: for allocations made before RNG initialization, will always
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  292           * return zero. We still benefit from enabling KFENCE as early as
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  293           * possible, even when the RNG is not yet available, as this will allow
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  294           * KFENCE to detect bugs due to earlier allocations. The only downside
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  295           * is that the out-of-bounds accesses detected are deterministic for
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  296           * such allocations.
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  297           */
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  298          if (prandom_u32_max(2)) {
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  299                  /* Allocate on the "right" side, re-calculate address. */
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  300                  meta->addr += PAGE_SIZE - size;
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  301                  meta->addr = ALIGN_DOWN(meta->addr, cache->align);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  302          }
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  303
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  304          addr = (void *)meta->addr;
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  305
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  306          /* Update remaining metadata. */
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  307          metadata_update_state(meta, KFENCE_OBJECT_ALLOCATED);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  308          /* Pairs with READ_ONCE() in kfence_shutdown_cache(). */
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  309          WRITE_ONCE(meta->cache, cache);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  310          meta->size = size;
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  311          for_each_canary(meta, set_canary_byte);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  312
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  313          /* Set required struct page fields. */
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  314          page = virt_to_page(meta->addr);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  315          page->slab_cache = cache;
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  316
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  317          raw_spin_unlock_irqrestore(&meta->lock, flags);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  318
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  319          /* Memory initialization. */
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  320
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  321          /*
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  322           * We check slab_want_init_on_alloc() ourselves, rather than letting
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  323           * SL*B do the initialization, as otherwise we might overwrite KFENCE's
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  324           * redzone.
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  325           */
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  326          if (unlikely(slab_want_init_on_alloc(gfp, cache)))
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  327                  memzero_explicit(addr, size);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  328          if (cache->ctor)
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  329                  cache->ctor(addr);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  330
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  331          if (CONFIG_KFENCE_STRESS_TEST_FAULTS && !prandom_u32_max(CONFIG_KFENCE_STRESS_TEST_FAULTS))
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  332                  kfence_protect(meta->addr); /* Random "faults" by protecting the object. */
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  333
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  334          atomic_long_inc(&counters[KFENCE_COUNTER_ALLOCATED]);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  335          atomic_long_inc(&counters[KFENCE_COUNTER_ALLOCS]);
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  336
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  337          return addr;
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  338  }
>> 3b295ea3a66b734 Alexander Potapenko 2020-12-10  339
>>
>> :::::: The code at line 250 was first introduced by commit
>> :::::: 3b295ea3a66b734a0cd23ae66bae0747a078725a mm: add Kernel Electric-Fence infrastructure
>>
>> :::::: TO: Alexander Potapenko
>> :::::: CC: Stephen Rothwell
>>
>> ---
>> 0-DAY CI Kernel Test Service, Intel Corporation
>> https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
>> _______________________________________________
>> kbuild mailing list -- kbuild@lists.01.org
>> To unsubscribe send an email to kbuild-leave@lists.01.org
> _______________________________________________
> kbuild-all mailing list -- kbuild-all@lists.01.org
> To unsubscribe send an email to kbuild-all-leave@lists.01.org
>