Date: Sun, 10 Oct 2021 15:49:07 -0700 (PDT)
From: David Rientjes <rientjes@google.com>
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Christoph Lameter,
    Pekka Enberg, Joonsoo Kim, Andrew Morton, Vlastimil Babka
Subject: Re: [PATCH] mm, slub: Use prefetchw instead of prefetch
In-Reply-To: <20211008133602.4963-1-42.hyeyoo@gmail.com>
Message-ID: <30a76d87-e0af-3eec-d095-d87e898b31cf@google.com>
References: <20211008133602.4963-1-42.hyeyoo@gmail.com>

On Fri, 8 Oct 2021, Hyeonggon Yoo wrote:

> It's certain that an object will be not only read, but also
> written after allocation.
>

Why is it certain?  I think perhaps what you meant to say is that if we
are doing any prefetching here, then access will benefit from prefetchw
instead of prefetch.  But it's not "certain" that allocated memory will
be accessed at all.

> Use prefetchw instead of prefetchw. On supported architecture

If we're using prefetchw instead of prefetchw, I think the diff would be
0 lines changed :)

> like x86, it helps to invalidate cache line when the object exists
> in other processors' cache.
>
> Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> ---
>  mm/slub.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 3d2025f7163b..2aca7523165e 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -352,9 +352,9 @@ static inline void *get_freepointer(struct kmem_cache *s, void *object)
>  	return freelist_dereference(s, object + s->offset);
>  }
>
> -static void prefetch_freepointer(const struct kmem_cache *s, void *object)
> +static void prefetchw_freepointer(const struct kmem_cache *s, void *object)
>  {
> -	prefetch(object + s->offset);
> +	prefetchw(object + s->offset);
>  }
>
>  static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
> @@ -3195,10 +3195,9 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
>  			note_cmpxchg_failure("slab_alloc", s, tid);
>  			goto redo;
>  		}
> -		prefetch_freepointer(s, next_object);
> +		prefetchw_freepointer(s, next_object);
>  		stat(s, ALLOC_FASTPATH);
>  	}
> -
>  	maybe_wipe_obj_freeptr(s, object);
>
>  	init = slab_want_init_on_alloc(gfpflags, s);
>
> --
> 2.27.0
>
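As an aside, the read-versus-write prefetch distinction being discussed
can be sketched in plain C with GCC's __builtin_prefetch(), which is
roughly what the kernel's prefetch()/prefetchw() helpers fall back to on
architectures that do not provide their own implementation.  The struct
layout and function names below are made up for illustration; this is a
user-space sketch of the idea, not the kernel code being patched.

/*
 * Illustrative user-space sketch only -- not kernel code.
 * __builtin_prefetch(addr, rw, locality) takes rw = 0 for a read
 * prefetch and rw = 1 for a write prefetch.  On x86, a write prefetch
 * can pull the cache line in an exclusive (writable) state, so a later
 * store does not need a second coherence transaction to invalidate
 * copies held by other CPUs.
 */
#include <stddef.h>

struct object {
	void *freeptr;          /* stand-in for the SLUB free pointer */
	char payload[56];
};

static inline void prefetch_for_read(const void *p)
{
	__builtin_prefetch(p, 0, 3);    /* rw = 0: expect only to read */
}

static inline void prefetch_for_write(const void *p)
{
	__builtin_prefetch(p, 1, 3);    /* rw = 1: expect to write soon */
}

void *alloc_next(struct object *obj, struct object *next)
{
	/*
	 * Analogous to the fast path in slab_alloc_node(): while handing
	 * out 'obj', warm up the line holding 'next's free pointer.  If
	 * the caller will initialize the object after allocation (the
	 * common case), the write prefetch is the better hint.
	 */
	prefetch_for_write(&next->freeptr);
	return obj;
}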