Date: Sun, 12 Jun 2022 23:44:47 +0800
From: Muchun Song
To: Miaohe Lin
Cc: akpm@linux-foundation.org, joao.m.martins@oracle.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] mm/page_alloc: minor clean up for memmap_init_compound()
Message-ID:
References: <20220611021352.13529-1-linmiaohe@huawei.com>
In-Reply-To: <20220611021352.13529-1-linmiaohe@huawei.com>

On Sat, Jun 11, 2022 at 10:13:52AM +0800, Miaohe Lin wrote:
> Since commit 5232c63f46fd ("mm: Make compound_pincount always available"),
> compound_pincount_ptr is stored at first tail page now. So we should call
> prep_compound_head() after the first tail page is initialized to take
> advantage of the likelihood of that tail struct page being cached given
> that we will read them right after in prep_compound_head().
>
> Signed-off-by: Miaohe Lin
> Cc: Joao Martins
> ---
> v2:
>   Don't move prep_compound_head() outside loop per Joao.
> ---
>  mm/page_alloc.c | 17 +++++++++++------
>  1 file changed, 11 insertions(+), 6 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4c7d99ee58b4..048df5d78add 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6771,13 +6771,18 @@ static void __ref memmap_init_compound(struct page *head,
>  		set_page_count(page, 0);
>  
>  		/*
> -		 * The first tail page stores compound_mapcount_ptr() and
> -		 * compound_order() and the second tail page stores
> -		 * compound_pincount_ptr(). Call prep_compound_head() after
> -		 * the first and second tail pages have been initialized to
> -		 * not have the data overwritten.
> +		 * The first tail page stores compound_mapcount_ptr(),
> +		 * compound_order() and compound_pincount_ptr(). Call
> +		 * prep_compound_head() after the first tail page have
> +		 * been initialized to not have the data overwritten.
> +		 *
> +		 * Note the idea to make this right after we initialize
> +		 * the offending tail pages is trying to take advantage
> +		 * of the likelihood of those tail struct pages being
> +		 * cached given that we will read them right after in
> +		 * prep_compound_head().
> +		 */
> -		if (pfn == head_pfn + 2)
> +		if (unlikely(pfn == head_pfn + 1))
>  			prep_compound_head(head, order);

For me it is weird not to move this out of the loop. I see the reason is
the caching argument suggested by Joao, but this is not a hot path, and
putting the call outside the loop would be more intuitive, at least to me
(see the rough sketch at the end of this mail). Maybe this optimization is
unnecessary (I may be wrong). Dropping it would also keep this consistent
with prep_compound_page(), which does not do a similar optimization.

Hi Joao, I am wondering whether this is a significant optimization for
zone device memory? This code has existed since the first version you
introduced it, so I suspect you may have some numbers. Would you like to
share them with us?

Thanks.
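Just to make the suggestion concrete, here is a rough, untested sketch of
the out-of-loop variant I have in mind. I am reconstructing
memmap_init_compound() from memory around the hunk quoted above, so details
such as the __init_zone_device_page()/prep_compound_tail() calls and the
use of pgmap->vmemmap_shift may not match the current tree exactly:

static void __ref memmap_init_compound(struct page *head,
				       unsigned long head_pfn,
				       unsigned long zone_idx, int nid,
				       struct dev_pagemap *pgmap,
				       unsigned long nr_pages)
{
	unsigned long pfn, end_pfn = head_pfn + nr_pages;
	unsigned int order = pgmap->vmemmap_shift;

	__SetPageHead(head);
	for (pfn = head_pfn + 1; pfn < end_pfn; pfn++) {
		struct page *page = pfn_to_page(pfn);

		/* Plain per-tail-page initialization, no special cases. */
		__init_zone_device_page(page, pfn, zone_idx, nid, pgmap);
		prep_compound_tail(head, pfn - head_pfn);
		set_page_count(page, 0);
	}

	/*
	 * All tail pages are initialized by now, so nothing can overwrite
	 * the fields written by prep_compound_head().  The only cost is
	 * losing the chance that the first tail struct page is still
	 * cache-hot when prep_compound_head() reads it.
	 */
	prep_compound_head(head, order);
}

The loop body stays branch-free and the ordering requirement is obvious
from the structure, at the cost of the cache-locality argument made in the
comment above.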