From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 10 Sep 2020 13:25:26 -0700
From: "Paul E. McKenney"
To: Dmitry Vyukov
Cc: Alexander Potapenko, Marco Elver, Andrew Morton, Catalin Marinas,
	Christoph Lameter, David Rientjes, Joonsoo Kim, Mark Rutland,
	Pekka Enberg, "H. Peter Anvin", Andrey Konovalov, Andrey Ryabinin,
	Andy Lutomirski, Borislav Petkov, Dave Hansen, Eric Dumazet,
	Greg Kroah-Hartman, Ingo Molnar, Jann Horn, Jonathan Corbet,
	Kees Cook, Peter Zijlstra, Qian Cai, Thomas Gleixner, Will Deacon,
	the arch/x86 maintainers, "open list:DOCUMENTATION", LKML,
	kasan-dev, Linux ARM, Linux-MM
Subject: Re: [PATCH RFC 01/10] mm: add Kernel Electric-Fence infrastructure
Message-ID: <20200910202526.GU29330@paulmck-ThinkPad-P72>
Reply-To: paulmck@kernel.org
References: <20200907134055.2878499-1-elver@google.com>
 <20200907134055.2878499-2-elver@google.com>

On Thu, Sep 10, 2020 at 07:11:41PM +0200, Dmitry Vyukov wrote:
> On Thu, Sep 10, 2020 at 6:19 PM Alexander Potapenko wrote:
> >
> > On Thu, Sep 10, 2020 at 5:43 PM Dmitry Vyukov wrote:
> > >
> > > > +	/* Calculate address for this allocation. */
> > > > +	if (right)
> > > > +		meta->addr += PAGE_SIZE - size;
> > > > +	meta->addr = ALIGN_DOWN(meta->addr, cache->align);
> > >
> > > I would move this ALIGN_DOWN under the (right) if.
> > > Do I understand it correctly that it will work, but we expect it to do
> > > nothing for !right? If cache align is >PAGE_SIZE, nothing good will
> > > happen anyway, right?
> > > The previous 2 lines look like part of the same calculation -- "figure
> > > out the addr for the right case".
> >
> > Yes, makes sense.
> >
> > > > +
> > > > +	schedule_delayed_work(&kfence_timer, 0);
> > > > +	WRITE_ONCE(kfence_enabled, true);
> > >
> > > Can toggle_allocation_gate run before we set kfence_enabled? If yes,
> > > it can break.
> > > If not, it's still somewhat confusing.
> >
> > Correct, it should go after we enable KFENCE. We'll fix that in v2.
> >
> > > > +void __kfence_free(void *addr)
> > > > +{
> > > > +	struct kfence_metadata *meta = addr_to_metadata((unsigned long)addr);
> > > > +
> > > > +	if (unlikely(meta->cache->flags & SLAB_TYPESAFE_BY_RCU))
> > >
> > > This may deserve a comment as to why we apply rcu on object level
> > > whereas SLAB_TYPESAFE_BY_RCU means slab level only.
> >
> > Sorry, what do you mean by "slab level"?
> > SLAB_TYPESAFE_BY_RCU means we have to wait for possible RCU accesses
> > in flight before freeing objects from that slab - that's basically
> > what we are doing here below:
>
> Exactly! You see it is confusing :)
> SLAB_TYPESAFE_BY_RCU does not mean that. rcu-freeing only applies to
> whole pages, that's what I mean by "slab level" (whole slabs are freed
> by rcu).

Just confirming Dmitry's description of SLAB_TYPESAFE_BY_RCU semantics.

							Thanx, Paul