Date: Wed, 29 Jul 2020 16:00:25 +0300
From: Mike Rapoport
To: David Hildenbrand
Cc: Justin He, Dan Williams, Vishal Verma, Catalin Marinas, Will Deacon,
 Greg Kroah-Hartman, "Rafael J. Wysocki", Dave Jiang, Andrew Morton,
 Steve Capper, Mark Rutland, Logan Gunthorpe, Anshuman Khandual,
 Hsin-Yi Wang, Jason Gunthorpe, Dave Hansen, Kees Cook,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-nvdimm@lists.01.org, linux-mm@kvack.org, Wei Yang, Pankaj Gupta,
 Ira Weiny, Kaly Xin
Subject: Re: [RFC PATCH 0/6] decrease unnecessary gap due to pmem kmem alignment
Message-ID: <20200729130025.GD3672596@linux.ibm.com>
References: <20200729033424.2629-1-justin.he@arm.com>
 <20200729093150.GC3672596@linux.ibm.com>

On Wed, Jul 29, 2020 at 11:35:20AM +0200, David Hildenbrand wrote:
> On 29.07.20 11:31, Mike Rapoport wrote:
> > Hi Justin,
> >
> > On Wed, Jul 29, 2020 at 08:27:58AM +0000, Justin He wrote:
> >> Hi David
> >>>>
> >>>> Without this series, if qemu creates a 4G bytes nvdimm device, we can
> >>>> only use 2G bytes for dax pmem(kmem) in the worst case.
> >>>> e.g.
> >>>> 240000000-33fdfffff : Persistent Memory
> >>>> We can only use the memblock between [240000000, 2ffffffff] due to the
> >>>> hard limitation. It wastes too much memory space.
> >>>>
> >>>> Decreasing the SECTION_SIZE_BITS on arm64 might be an alternative, but
> >>>> there are too many concerns from other constraints, e.g. PAGE_SIZE,
> >>>> hugetlb, SPARSEMEM_VMEMMAP, page bits in struct page ...
> >>>>
> >>>> Beside decreasing the SECTION_SIZE_BITS, we can also relax the kmem
> >>>> alignment with memory_block_size_bytes().
> >>>>
> >>>> Tested on arm64 guest and x86 guest, qemu creates a 4G pmem device.
> >>>> dax pmem can be used as ram with smaller gap. Also the kmem hotplug
> >>>> add/remove are both tested on arm64/x86 guest.
> >>>>
> >>>
> >>> Hi,
> >>>
> >>> I am not convinced this use case is worth such hacks (that’s what it is)
> >>> for now. On real machines pmem is big - your example (losing 50% is
> >>> extreme).
> >>>
> >>> I would much rather want to see the section size on arm64 reduced. I
> >>> remember there were patches and that at least with a base page size of
> >>> 4k it can be reduced drastically (64k base pages are more problematic
> >>> due to the ridiculous THP size of 512M). But could be a section size
> >>> of 512 is possible on all configs right now.
> >>
> >> Yes, I once investigated how to reduce section size on arm64 thoroughly.
> >> There are many constraints for reducing SECTION_SIZE_BITS:
> >> 1. Given that page->flags bits are limited, SECTION_SIZE_BITS can't be
> >>    reduced too much.
> >> 2. Once CONFIG_SPARSEMEM_VMEMMAP is enabled, the section id will not be
> >>    counted into page->flags.
> >> 3. MAX_ORDER depends on SECTION_SIZE_BITS
> >>  - 3.1 mmzone.h
> >>    #if (MAX_ORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS
> >>    #error Allocator MAX_ORDER exceeds SECTION_SIZE
> >>    #endif
> >>  - 3.2 hugepage_init()
> >>    MAYBE_BUILD_BUG_ON(HPAGE_PMD_ORDER >= MAX_ORDER);
> >>
> >> Hence when ARM64_4K_PAGES && CONFIG_SPARSEMEM_VMEMMAP are enabled,
> >> SECTION_SIZE_BITS can be reduced to 27.
> >> But with ARM64_64K_PAGES, given 3.2, MAX_ORDER > 29-16 = 13.
> >> Given 3.1, SECTION_SIZE_BITS >= MAX_ORDER+15 > 28. So SECTION_SIZE_BITS
> >> cannot be reduced to 27.
> >>
> >> In one word, if we considered reducing SECTION_SIZE_BITS on arm64, the
> >> Kconfig might be very complicated, e.g. we still need to consider the
> >> case for ARM64_16K_PAGES.
> >
> > It is not necessary to pollute Kconfig with that.
> > arch/arm64/include/asm/sparsemem.h can have something like
> >
> > #ifdef CONFIG_ARM64_64K_PAGES
> > #define SPARSE_SECTION_SIZE 29
> > #elif defined(CONFIG_ARM64_16K_PAGES)
> > #define SPARSE_SECTION_SIZE 28
> > #elif defined(CONFIG_ARM64_4K_PAGES)
> > #define SPARSE_SECTION_SIZE 27
> > #else
> > #error
> > #endif
>
> ack
>
> >
> > There is still a large gap with ARM64_64K_PAGES, though.
> >
> > As for SPARSEMEM without VMEMMAP, are there actual benefits to use it?
>
> I was asking myself the same question a while ago and didn't really find
> a compelling one.

Memory overhead for VMEMMAP is larger, especially for arm64 that knows
how to free empty parts of the memory map with "classic" SPARSEMEM.

> I think it's always enabled as default (SPARSEMEM_VMEMMAP_ENABLE) and
> would require config tweaks to even disable it.

Nope, it's right there in menuconfig,
"Memory Management options" -> "Sparse Memory virtual memmap"

> --
> Thanks,
>
> David / dhildenb
>

--
Sincerely yours,
Mike.