From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 25 Jun 2025 15:06:52 +0530
From: Donet Tom <donettom@linux.ibm.com>
To: Dev Jain
Cc: Lorenzo Stoakes, Aboorva Devarajan, akpm@linux-foundation.org,
 Liam.Howlett@oracle.com, shuah@kernel.org, pfalcato@suse.de,
 david@redhat.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com,
 npache@redhat.com, ryan.roberts@arm.com, baohua@kernel.org,
 linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
 linux-kernel@vger.kernel.org, ritesh.list@gmail.com
Subject: Re: [PATCH 1/6] mm/selftests: Fix virtual_address_range test issues.
In-Reply-To: <673c9442-7d69-408b-a2c4-2baa696a7e86@arm.com>
References: <815793f1-6800-4b9a-852e-f13d6308f50f@arm.com>
 <2756fa2b-e8bf-4c66-bf9b-c85dc63dfc33@lucifer.local>
 <41d9a70d-9791-4212-af23-5b13d8e4a47d@arm.com>
 <546d7aa5-9ea3-4fce-a604-b1676a61d6cd@arm.com>
 <2fc32719-1e38-4bf0-8ec5-5bcb452d939f@arm.com>
 <673c9442-7d69-408b-a2c4-2baa696a7e86@arm.com>
Content-Type: text/plain; charset=utf-8
On Tue, Jun 24, 2025 at 11:45:09AM +0530, Dev Jain wrote:
>
> On 23/06/25 11:02 pm, Donet Tom wrote:
> > On Mon, Jun 23, 2025 at 10:23:02AM +0530, Dev Jain wrote:
> > > On 21/06/25 11:25 pm, Donet Tom wrote:
> > > > On Fri, Jun 20, 2025 at 08:15:25PM +0530, Dev Jain wrote:
> > > > > On 19/06/25 1:53 pm, Donet Tom wrote:
> > > > > > On Wed, Jun 18, 2025 at 08:13:54PM +0530, Dev Jain wrote:
> > > > > > > On 18/06/25 8:05 pm, Lorenzo Stoakes wrote:
> > > > > > > > On Wed, Jun 18, 2025 at 07:47:18PM +0530, Dev Jain wrote:
> > > > > > > > > On 18/06/25 7:37 pm, Lorenzo Stoakes wrote:
> > > > > > > > > > On Wed, Jun 18, 2025 at 07:28:16PM +0530, Dev Jain wrote:
> > > > > > > > > > > On 18/06/25 5:27 pm, Lorenzo Stoakes wrote:
> > > > > > > > > > > > On Wed, Jun 18, 2025 at 05:15:50PM +0530, Dev Jain wrote:
> > > > > > > > > > > > Are you accounting for sys.max_map_count? If not, then you'll be hitting that
> > > > > > > > > > > > first.
> > > > > > > > > > > run_vmtests.sh will run the test in overcommit mode, so that won't be an issue.
> > > > > > > > > > Umm, what? You mean overcommit all mode, and that has no bearing on the max
> > > > > > > > > > mapping count check.
> > > > > > > > > >
> > > > > > > > > > In do_mmap():
> > > > > > > > > >
> > > > > > > > > > 	/* Too many mappings? */
> > > > > > > > > > 	if (mm->map_count > sysctl_max_map_count)
> > > > > > > > > > 		return -ENOMEM;
> > > > > > > > > >
> > > > > > > > > > As well as numerous other checks in mm/vma.c.
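Dev's advice further down in the thread amounts to lifting both of these limits before the run. A minimal standalone sketch of doing that programmatically, assuming root and the standard /proc/sys/vm sysctl files (an illustration only, not part of the selftest or of any patch in this thread):

#include <stdio.h>

static int write_sysctl(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	/* "Highest value possible" for vm.max_map_count; INT_MAX works. */
	write_sysctl("/proc/sys/vm/max_map_count", "2147483647");
	/* 1 selects "always overcommit", which run_vmtests.sh relies on. */
	write_sysctl("/proc/sys/vm/overcommit_memory", "1");
	return 0;
}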
> > > > > > > > > Ah sorry, didn't look at the code properly, just assumed that overcommit_always meant overriding
> > > > > > > > > this.
> > > > > > > > No problem! It's hard to be aware of everything in mm :)
> > > > > > > >
> > > > > > > > > > I'm not sure why an overcommit toggle is even necessary when you could use
> > > > > > > > > > MAP_NORESERVE or simply map PROT_NONE to avoid the OVERCOMMIT_GUESS limits?
> > > > > > > > > >
> > > > > > > > > > I'm pretty confused as to what this test is really achieving honestly. This
> > > > > > > > > > isn't a useful way of asserting mmap() behaviour as far as I can tell.
> > > > > > > > > Well, seems like a useful way to me at least : ) Not sure if you are in the mood
> > > > > > > > > to discuss that, but if you'd like me to explain from start to end what the test
> > > > > > > > > is doing, I can do that : )
> > > > > > > > I just don't have time right now, I guess I'll have to come back to it
> > > > > > > > later... it's not the end of the world for it to be iffy in my view as long as
> > > > > > > > it passes, but it might just not be of great value.
> > > > > > > >
> > > > > > > > Philosophically I'd rather we didn't assert internal implementation details like
> > > > > > > > where we place mappings in userland memory. At no point do we promise to not
> > > > > > > > leave larger gaps if we feel like it :)
> > > > > > > You have a fair point. Anyhow, a debate for another day.
> > > > > > >
> > > > > > > > I'm guessing, reading more, the _real_ test here is some mathematical assertion
> > > > > > > > about layout from HIGH_ADDR_SHIFT -> end of address space when using hints.
> > > > > > > >
> > > > > > > > But again I'm not sure that achieves much, and again it is also asserting internal
> > > > > > > > implementation details.
> > > > > > > >
> > > > > > > > Correct behaviour of this kind of thing probably better belongs to tests in the
> > > > > > > > userland VMA testing, I'd say.
> > > > > > > >
> > > > > > > > Sorry, I don't mean to do down work you've done before, just giving an honest
> > > > > > > > technical appraisal!
> > > > > > > Nah, it will be rather hilarious to see it all go down the drain xD
> > > > > > >
> > > > > > > > Anyway, don't let this block work to fix the test if it's failing. We can revisit
> > > > > > > > this later.
> > > > > > > Sure. @Aboorva and Donet, I still believe that the correct approach is to elide
> > > > > > > the gap check at the crossing boundary. What do you think?
> > > > > >
> > > > > > One problem I am seeing with this approach is that, since the hint address
> > > > > > is generated randomly, the VMAs are also being created randomly, based on
> > > > > > the hint address. So, for the VMAs created at high addresses, we cannot guarantee
> > > > > > that the gaps between them will be aligned to MAP_CHUNK_SIZE.
> > > > > >
> > > > > > High address VMAs
> > > > > > -----------------
> > > > > > 1000000000000-1000040000000 r--p 00000000 00:00 0
> > > > > > 2000000000000-2000040000000 r--p 00000000 00:00 0
> > > > > > 4000000000000-4000040000000 r--p 00000000 00:00 0
> > > > > > 8000000000000-8000040000000 r--p 00000000 00:00 0
> > > > > > e80009d260000-fffff9d260000 r--p 00000000 00:00 0
> > > > > >
> > > > > > I have a different approach to solve this issue.
> > > > > It is really weird that such a large amount of VA space
> > > > > is left between the two VMAs yet mmap is failing.
> > > > >
> > > > > Can you please do the following:
> > > > > set /proc/sys/vm/max_map_count to the highest value possible.
> > > > > If running without run_vmtests.sh, set /proc/sys/vm/overcommit_memory to 1.
> > > > > In validate_complete_va_space:
> > > > >
> > > > > 	if (start_addr >= HIGH_ADDR_MARK && found == false) {
> > > > > 		found = true;
> > > > > 		continue;
> > > > > 	}
> > > > Thanks, Dev, for the suggestion. I set max_map_count, set overcommit_memory
> > > > to 1, added this code change as well, and then tried. Still, the
> > > > test is failing.
> > > >
> > > > > where found is initialized to false. This will skip the check
> > > > > for the boundary.
> > > > >
> > > > > After this, can you tell whether the test is still failing.
> > > > >
> > > > > Also, can you give me the complete output of /proc/pid/maps
> > > > > after putting a sleep at the end of the test.
> > > >
> > > > On powerpc, the DEFAULT_MAP_WINDOW is 128TB and the total address
> > > > space size is 4PB. With a hint, it can map up to 4PB. Since the hint
> > > > address is random in this test, random high VMAs are getting created.
> > > > IIUC this is expected.
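The power-of-two start addresses in the maps dump just below fall straight out of the test's current hint generation. For reference, this is the existing hint_addr() (quoted from the diff at the end of this mail), which picks a random single-bit address, so successive chunks land at 1UL << 48, 1UL << 49, ... with huge gaps that are not MAP_CHUNK_SIZE-aligned:

static char *hint_addr(void)
{
	/* Random bit position in [HIGH_ADDR_SHIFT, 63): the hint is a pure
	 * power of two, so consecutive mappings cannot be contiguous. */
	int bits = HIGH_ADDR_SHIFT + rand() % (63 - HIGH_ADDR_SHIFT);

	return (char *) (1UL << bits);
}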
> > > >
> > > > 10000000-10010000 r-xp 00000000 fd:05 134226638 /home/donet/linux/tools/testing/selftests/mm/virtual_address_range
> > > > 10010000-10020000 r--p 00000000 fd:05 134226638 /home/donet/linux/tools/testing/selftests/mm/virtual_address_range
> > > > 10020000-10030000 rw-p 00010000 fd:05 134226638 /home/donet/linux/tools/testing/selftests/mm/virtual_address_range
> > > > 30000000-10030000000 r--p 00000000 00:00 0 [anon:virtual_address_range]
> > > > 10030770000-100307a0000 rw-p 00000000 00:00 0 [heap]
> > > > 1004f000000-7fff8f000000 r--p 00000000 00:00 0 [anon:virtual_address_range]
> > > > 7fff8faf0000-7fff8fe00000 rw-p 00000000 00:00 0
> > > > 7fff8fe00000-7fff90030000 r-xp 00000000 fd:00 792355 /usr/lib64/libc.so.6
> > > > 7fff90030000-7fff90040000 r--p 00230000 fd:00 792355 /usr/lib64/libc.so.6
> > > > 7fff90040000-7fff90050000 rw-p 00240000 fd:00 792355 /usr/lib64/libc.so.6
> > > > 7fff90050000-7fff90130000 r-xp 00000000 fd:00 792358 /usr/lib64/libm.so.6
> > > > 7fff90130000-7fff90140000 r--p 000d0000 fd:00 792358 /usr/lib64/libm.so.6
> > > > 7fff90140000-7fff90150000 rw-p 000e0000 fd:00 792358 /usr/lib64/libm.so.6
> > > > 7fff90160000-7fff901a0000 r--p 00000000 00:00 0 [vvar]
> > > > 7fff901a0000-7fff901b0000 r-xp 00000000 00:00 0 [vdso]
> > > > 7fff901b0000-7fff90200000 r-xp 00000000 fd:00 792351 /usr/lib64/ld64.so.2
> > > > 7fff90200000-7fff90210000 r--p 00040000 fd:00 792351 /usr/lib64/ld64.so.2
> > > > 7fff90210000-7fff90220000 rw-p 00050000 fd:00 792351 /usr/lib64/ld64.so.2
> > > > 7fffc9770000-7fffc9880000 rw-p 00000000 00:00 0 [stack]
> > > > 1000000000000-1000040000000 r--p 00000000 00:00 0 [anon:virtual_address_range]
> > > > 2000000000000-2000040000000 r--p 00000000 00:00 0 [anon:virtual_address_range]
> > > > 4000000000000-4000040000000 r--p 00000000 00:00 0 [anon:virtual_address_range]
> > > > 8000000000000-8000040000000 r--p 00000000 00:00 0 [anon:virtual_address_range]
> > > > eb95410220000-fffff90220000 r--p 00000000 00:00 0 [anon:virtual_address_range]
> > > >
> > > > If I give the hint address serially from 128TB, then the address space
> > > > is contiguous, the gap is also MAP_CHUNK_SIZE, and the test is passing.
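A minimal sketch of that sequential-hint loop, assuming the selftest's NR_CHUNKS_HIGH, MAP_CHUNK_SIZE and hptr[] (the 1UL << 47 start encodes the 128TB powerpc DEFAULT_MAP_WINDOW; the actual change is the hint_addr() diff quoted at the end of this mail):

/* Hand out hints one MAP_CHUNK_SIZE apart starting at 128TB, instead of
 * random power-of-two addresses, so the resulting VMAs are adjacent and
 * can be merged into one contiguous high-address mapping. */
static void map_high_sequential(char **hptr)
{
	unsigned long i;

	for (i = 0; i < NR_CHUNKS_HIGH; i++) {
		char *hint = (char *)((1UL << 47) + i * MAP_CHUNK_SIZE);

		hptr[i] = mmap(hint, MAP_CHUNK_SIZE, PROT_READ,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	}
}

The maps dump below shows the outcome: the whole high range collapses into one merged VMA (800000000000-2aab000000000 in this run).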
> > > >
> > > > 10000000-10010000 r-xp 00000000 fd:05 134226638 /home/donet/linux/tools/testing/selftests/mm/virtual_address_range
> > > > 10010000-10020000 r--p 00000000 fd:05 134226638 /home/donet/linux/tools/testing/selftests/mm/virtual_address_range
> > > > 10020000-10030000 rw-p 00010000 fd:05 134226638 /home/donet/linux/tools/testing/selftests/mm/virtual_address_range
> > > > 33000000-10033000000 r--p 00000000 00:00 0 [anon:virtual_address_range]
> > > > 10033380000-100333b0000 rw-p 00000000 00:00 0 [heap]
> > > > 1006f0f0000-10071000000 rw-p 00000000 00:00 0
> > > > 10071000000-7fffb1000000 r--p 00000000 00:00 0 [anon:virtual_address_range]
> > > > 7fffb15d0000-7fffb1800000 r-xp 00000000 fd:00 792355 /usr/lib64/libc.so.6
> > > > 7fffb1800000-7fffb1810000 r--p 00230000 fd:00 792355 /usr/lib64/libc.so.6
> > > > 7fffb1810000-7fffb1820000 rw-p 00240000 fd:00 792355 /usr/lib64/libc.so.6
> > > > 7fffb1820000-7fffb1900000 r-xp 00000000 fd:00 792358 /usr/lib64/libm.so.6
> > > > 7fffb1900000-7fffb1910000 r--p 000d0000 fd:00 792358 /usr/lib64/libm.so.6
> > > > 7fffb1910000-7fffb1920000 rw-p 000e0000 fd:00 792358 /usr/lib64/libm.so.6
> > > > 7fffb1930000-7fffb1970000 r--p 00000000 00:00 0 [vvar]
> > > > 7fffb1970000-7fffb1980000 r-xp 00000000 00:00 0 [vdso]
> > > > 7fffb1980000-7fffb19d0000 r-xp 00000000 fd:00 792351 /usr/lib64/ld64.so.2
> > > > 7fffb19d0000-7fffb19e0000 r--p 00040000 fd:00 792351 /usr/lib64/ld64.so.2
> > > > 7fffb19e0000-7fffb19f0000 rw-p 00050000 fd:00 792351 /usr/lib64/ld64.so.2
> > > > 7fffc5470000-7fffc5580000 rw-p 00000000 00:00 0 [stack]
> > > > 800000000000-2aab000000000 r--p 00000000 00:00 0 [anon:virtual_address_range]
> > > >
> > > Thank you for this output. I can't wrap my head around why this behaviour changes
> > > when you generate the hint sequentially. The mmap() syscall is supposed to do the
> > > following (irrespective of high VA space or not) - if the allocation at the hint
> > Yes, it is working as expected. On PowerPC, the DEFAULT_MAP_WINDOW is
> > 128TB, and the system can map up to 4PB.
> >
> > In the test, the first mmap call maps memory up to 128TB without any
> > hint, so the VMAs are created below the 128TB boundary.
> >
> > In the second mmap call, we provide a hint starting from 256TB, and
> > the hint address is generated randomly above 256TB. The mappings are
> > correctly created at these hint addresses. Since the hint addresses
> > are random, the resulting VMAs are also created at random locations.
> >
> > So, what I tried is: mapping from 0 to 128TB without any hint, and
> > then for the second mmap, instead of starting the hint from 256TB, I
> > started from 128TB. Instead of using random hint addresses, I used
> > sequential hint addresses from 128TB up to 512TB. With this change,
> > the VMAs are created in order, and the test passes.
> >
> > 800000000000-2aab000000000 r--p 00000000 00:00 0    <- 128TB to 512TB VMA
> >
> > I think we will see the same behaviour on x86 with X86_FEATURE_LA57.
> >
> > I will send the updated patch in V2.
> Since you say it fails on both radix and hash, it means that the generic
> code path is failing. I see that on my system, when I run the test with
> the LPA2 config, write() fails with errno set to -ENOMEM. Can you apply
> the following diff and check whether the test still fails. Doing this
> fixed it for arm64.
>
> > diff --git a/tools/testing/selftests/mm/virtual_address_range.c b/tools/testing/selftests/mm/virtual_address_range.c
> > index b380e102b22f..3032902d01f2 100644
> > --- a/tools/testing/selftests/mm/virtual_address_range.c
> > +++ b/tools/testing/selftests/mm/virtual_address_range.c
> > @@ -173,10 +173,6 @@ static int validate_complete_va_space(void)
> >  	 */
> >  	hop = 0;
> >  	while (start_addr + hop < end_addr) {
> > -		if (write(fd, (void *)(start_addr + hop), 1) != 1)
> > -			return 1;
> > -		lseek(fd, 0, SEEK_SET);
> > -
> >  		if (is_marked_vma(vma_name))
> >  			munmap((char *)(start_addr + hop), MAP_CHUNK_SIZE);
>
Even with this change, the test is still failing. In this case, we are
allocating physical memory and writing into it, but our issue seems to be
with the gap between VMAs, so I believe this might not be directly related.
I will send the next revision, where the test passes and no issues are
observed.

Just curious: with LPA2, is the second mmap() call successful? And are the
VMAs being created at the hint address as expected?
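For context on what the removed write() is doing: validate_complete_va_space() probes each chunk by write()-ing one byte from the mapped address to a scratch file, which makes the kernel read (and hence fault in) that page. A trimmed sketch of the loop, assuming the surrounding selftest variables (fd, start_addr, end_addr, hop, vma_name); the perror() is added here purely to surface the -ENOMEM Dev mentions and is not in the test:

	hop = 0;
	while (start_addr + hop < end_addr) {
		if (write(fd, (void *)(start_addr + hop), 1) != 1) {
			perror("write");	/* ENOMEM under LPA2 per the report */
			return 1;
		}
		lseek(fd, 0, SEEK_SET);

		if (is_marked_vma(vma_name))
			munmap((char *)(start_addr + hop), MAP_CHUNK_SIZE);

		hop += MAP_CHUNK_SIZE;
	}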
> >
> > > addr succeeds, then all is well, otherwise, do a top-down search for a large
> > > enough gap. I am not aware of the nuances in powerpc, but I really am suspecting
> > > a bug in the powerpc mmap code. Can you try to do some tracing - which function
> > > eventually fails to find the empty gap?
> > >
> > > Through my limited code tracing - we should end up in slice_find_area_topdown,
> > > then we ask the generic code to find the gap using vm_unmapped_area. So I
> > > suspect something is happening between this, probably slice_scan_available().
> > >
> > > > > > From 0 to 128TB, we map memory directly without using any hint. For the range above
> > > > > > 256TB up to 512TB, we perform the mapping using hint addresses. In the current test,
> > > > > > we use random hint addresses, but I have modified it to generate hint addresses linearly
> > > > > > starting from 128TB.
> > > > > >
> > > > > > With this change:
> > > > > >
> > > > > > The 0-128TB range is mapped without hints and verified accordingly.
> > > > > >
> > > > > > The 128TB-512TB range is mapped using linear hint addresses and then verified.
> > > > > >
> > > > > > Below are the VMAs obtained with this approach:
> > > > > >
> > > > > > 10000000-10010000 r-xp 00000000 fd:05 135019531
> > > > > > 10010000-10020000 r--p 00000000 fd:05 135019531
> > > > > > 10020000-10030000 rw-p 00010000 fd:05 135019531
> > > > > > 20000000-10020000000 r--p 00000000 00:00 0
> > > > > > 10020800000-10020830000 rw-p 00000000 00:00 0
> > > > > > 1004bcf0000-1004c000000 rw-p 00000000 00:00 0
> > > > > > 1004c000000-7fff8c000000 r--p 00000000 00:00 0
> > > > > > 7fff8c130000-7fff8c360000 r-xp 00000000 fd:00 792355
> > > > > > 7fff8c360000-7fff8c370000 r--p 00230000 fd:00 792355
> > > > > > 7fff8c370000-7fff8c380000 rw-p 00240000 fd:00 792355
> > > > > > 7fff8c380000-7fff8c460000 r-xp 00000000 fd:00 792358
> > > > > > 7fff8c460000-7fff8c470000 r--p 000d0000 fd:00 792358
> > > > > > 7fff8c470000-7fff8c480000 rw-p 000e0000 fd:00 792358
> > > > > > 7fff8c490000-7fff8c4d0000 r--p 00000000 00:00 0
> > > > > > 7fff8c4d0000-7fff8c4e0000 r-xp 00000000 00:00 0
> > > > > > 7fff8c4e0000-7fff8c530000 r-xp 00000000 fd:00 792351
> > > > > > 7fff8c530000-7fff8c540000 r--p 00040000 fd:00 792351
> > > > > > 7fff8c540000-7fff8c550000 rw-p 00050000 fd:00 792351
> > > > > > 7fff8d000000-7fffcd000000 r--p 00000000 00:00 0
> > > > > > 7fffe9c80000-7fffe9d90000 rw-p 00000000 00:00 0
> > > > > > 800000000000-2000000000000 r--p 00000000 00:00 0 -> High Address (128TB to 512TB)
> > > > > >
> > > > > > diff --git a/tools/testing/selftests/mm/virtual_address_range.c b/tools/testing/selftests/mm/virtual_address_range.c
> > > > > > index 4c4c35eac15e..0be008cba4b0 100644
> > > > > > --- a/tools/testing/selftests/mm/virtual_address_range.c
> > > > > > +++ b/tools/testing/selftests/mm/virtual_address_range.c
> > > > > > @@ -56,21 +56,21 @@
> > > > > >  #ifdef __aarch64__
> > > > > >  #define HIGH_ADDR_MARK ADDR_MARK_256TB
> > > > > > -#define HIGH_ADDR_SHIFT 49
> > > > > > +#define HIGH_ADDR_SHIFT 48
> > > > > >  #define NR_CHUNKS_LOW NR_CHUNKS_256TB
> > > > > >  #define NR_CHUNKS_HIGH NR_CHUNKS_3840TB
> > > > > >  #else
> > > > > >  #define HIGH_ADDR_MARK ADDR_MARK_128TB
> > > > > > -#define HIGH_ADDR_SHIFT 48
> > > > > > +#define HIGH_ADDR_SHIFT 47
> > > > > >  #define NR_CHUNKS_LOW NR_CHUNKS_128TB
> > > > > >  #define NR_CHUNKS_HIGH NR_CHUNKS_384TB
> > > > > >  #endif
> > > > > >
> > > > > > -static char *hint_addr(void)
> > > > > > +static char *hint_addr(int hint)
> > > > > >  {
> > > > > > -	int bits = HIGH_ADDR_SHIFT + rand() % (63 - HIGH_ADDR_SHIFT);
> > > > > > +	unsigned long addr = ((1UL << HIGH_ADDR_SHIFT) + (hint * MAP_CHUNK_SIZE));
> > > > > >
> > > > > > -	return (char *) (1UL << bits);
> > > > > > +	return (char *) (addr);
> > > > > >  }
> > > > > >
> > > > > >  static void validate_addr(char *ptr, int high_addr)
> > > > > > @@ -217,7 +217,7 @@ int main(int argc, char *argv[])
> > > > > >  	}
> > > > > >
> > > > > >  	for (i = 0; i < NR_CHUNKS_HIGH; i++) {
> > > > > > -		hint = hint_addr();
> > > > > > +		hint = hint_addr(i);
> > > > > >  		hptr[i] = mmap(hint, MAP_CHUNK_SIZE, PROT_READ,
> > > > > >  			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> > > > > >
> > > > > > Can we fix it this way?
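As a sanity check of the arithmetic in that diff, assuming the selftest's 16GB MAP_CHUNK_SIZE (17179869184UL):

/*
 * first hint:  1UL << 47                    = 0x800000000000  (128TB)
 * chunk count: 384TB / 16GB                 = 24576 (NR_CHUNKS_384TB)
 * last chunk ends at 128TB + 24576 * 16GB   = 0x2000000000000 (512TB)
 *
 * which matches the merged high VMA in the maps dump above:
 * 800000000000-2000000000000 r--p 00000000 00:00 0
 */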