From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 14 Aug 2020 17:30:23 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, david@redhat.com,
 kirill.shutemov@linux.intel.com, linux-mm@kvack.org, mike.kravetz@oracle.com,
 mm-commits@vger.kernel.org, torvalds@linux-foundation.org,
 william.kucharski@oracle.com, willy@infradead.org, ziy@nvidia.com
Subject: [patch 07/39] mm: store compound_nr as well as compound_order
Message-ID: <20200815003023.62xy4KMOu%akpm@linux-foundation.org>
In-Reply-To: <20200814172939.55d6d80b6e21e4241f1ee1f3@linux-foundation.org>
User-Agent: s-nail v14.8.16

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: mm: store compound_nr as well as compound_order

Patch series "THP prep patches".

These are some generic cleanups and improvements, which I would like
merged into mmotm soon.  The first one should be a performance improvement
for all users of compound pages, and the others are aimed at getting code
to compile away when CONFIG_TRANSPARENT_HUGEPAGE is disabled (ie small
systems).  Also better documented / less confusing than the current prefix
mixture of compound, hpage and thp.

This patch (of 7):

This removes a few instructions from functions which need to know how many
pages are in a compound page.  The storage used is either page->mapping on
64-bit or page->index on 32-bit.  Both of these are fine to overlay on
tail pages.

Link: http://lkml.kernel.org/r/20200629151959.15779-1-willy@infradead.org
Link: http://lkml.kernel.org/r/20200629151959.15779-2-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/mm.h       |    5 ++++-
 include/linux/mm_types.h |    1 +
 mm/page_alloc.c          |    5 +++--
 3 files changed, 8 insertions(+), 3 deletions(-)

--- a/include/linux/mm.h~mm-store-compound_nr-as-well-as-compound_order
+++ a/include/linux/mm.h
@@ -922,12 +922,15 @@ static inline int compound_pincount(stru
 static inline void set_compound_order(struct page *page, unsigned int order)
 {
 	page[1].compound_order = order;
+	page[1].compound_nr = 1U << order;
 }
 
 /* Returns the number of pages in this potentially compound page. */
 static inline unsigned long compound_nr(struct page *page)
 {
-	return 1UL << compound_order(page);
+	if (!PageHead(page))
+		return 1;
+	return page[1].compound_nr;
 }
 
 /* Returns the number of bytes in this potentially compound page. */
--- a/include/linux/mm_types.h~mm-store-compound_nr-as-well-as-compound_order
+++ a/include/linux/mm_types.h
@@ -134,6 +134,7 @@ struct page {
 			unsigned char compound_dtor;
 			unsigned char compound_order;
 			atomic_t compound_mapcount;
+			unsigned int compound_nr;	/* 1 << compound_order */
 		};
 		struct {	/* Second tail page of compound page */
 			unsigned long _compound_pad_1;	/* compound_head */
--- a/mm/page_alloc.c~mm-store-compound_nr-as-well-as-compound_order
+++ a/mm/page_alloc.c
@@ -666,8 +666,6 @@ void prep_compound_page(struct page *pag
 	int i;
 	int nr_pages = 1 << order;
 
-	set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
-	set_compound_order(page, order);
 	__SetPageHead(page);
 	for (i = 1; i < nr_pages; i++) {
 		struct page *p = page + i;
@@ -675,6 +673,9 @@ void prep_compound_page(struct page *pag
 		p->mapping = TAIL_MAPPING;
 		set_compound_head(p, page);
 	}
+
+	set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
+	set_compound_order(page, order);
 	atomic_set(compound_mapcount_ptr(page), -1);
 	if (hpage_pincount_available(page))
 		atomic_set(compound_pincount_ptr(page), 0);
_
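
As a stand-alone illustration of the idea above (plain userspace C, not
kernel code: struct demo_page and the demo_* functions are invented
stand-ins for struct page, set_compound_order() and compound_nr(), and the
cached count lives in the head struct here rather than in the first tail
page as it does in the kernel):

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for the compound-page metadata fields. */
struct demo_page {
	bool is_head;			/* stands in for PageHead()    */
	unsigned char compound_order;	/* log2 of the number of pages */
	unsigned int compound_nr;	/* cached 1 << compound_order  */
};

/* Mirrors the patched set_compound_order(): store both representations. */
static void demo_set_compound_order(struct demo_page *head, unsigned int order)
{
	head->compound_order = order;
	head->compound_nr = 1U << order;	/* cache the page count */
}

/* Mirrors the patched compound_nr(): a load instead of a shift. */
static unsigned long demo_compound_nr(const struct demo_page *page)
{
	if (!page->is_head)
		return 1;			/* plain, non-compound page */
	return page->compound_nr;
}

int main(void)
{
	struct demo_page head = { .is_head = true };

	demo_set_compound_order(&head, 9);	/* e.g. a 2MB THP on x86-64 */
	printf("pages in compound page: %lu\n", demo_compound_nr(&head));
	return 0;
}

With the count cached at prep time, reading it back is a single load plus
the PageHead() test, which is where the "few instructions" saved per caller
come from; an unsigned int comfortably holds 1 << order for any order the
kernel uses.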