Date: Thu, 26 Jun 2025 12:22:09 +0530
From: Donet Tom <donettom@linux.ibm.com>
To: Dev Jain
Cc: Lorenzo Stoakes, Aboorva Devarajan, akpm@linux-foundation.org,
    Liam.Howlett@oracle.com, shuah@kernel.org, pfalcato@suse.de,
    david@redhat.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com,
    npache@redhat.com, ryan.roberts@arm.com, baohua@kernel.org,
    linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
    linux-kernel@vger.kernel.org, ritesh.list@gmail.com
Subject: Re: [PATCH 1/6] mm/selftests: Fix virtual_address_range test issues.
Message-ID:
References: <815793f1-6800-4b9a-852e-f13d6308f50f@arm.com>
 <2756fa2b-e8bf-4c66-bf9b-c85dc63dfc33@lucifer.local>
 <41d9a70d-9791-4212-af23-5b13d8e4a47d@arm.com>
 <16fff6e9-98f5-4004-9906-feac49f0bbb4@arm.com>
 <3bc08930-06f3-443e-a267-ff02c2c053f6@arm.com>

On Thu, Jun 26, 2025 at 12:05:11PM +0530, Dev Jain wrote:
> 
> On 26/06/25 11:12 am, Donet Tom wrote:
> > On Thu, Jun 26, 2025 at 09:27:30AM +0530, Dev Jain wrote:
> > > On 25/06/25 10:47 pm, Donet Tom wrote:
> > > > On Wed, Jun 25, 2025 at 06:22:53PM +0530, Dev Jain wrote:
> > > > > On 19/06/25 1:53 pm, Donet Tom wrote:
> > > > > > On Wed, Jun 18, 2025 at 08:13:54PM +0530, Dev Jain wrote:
> > > > > > > On 18/06/25 8:05 pm, Lorenzo Stoakes wrote:
> > > > > > > > On Wed, Jun 18, 2025 at 07:47:18PM +0530, Dev Jain wrote:
> > > > > > > > > On 18/06/25 7:37 pm, Lorenzo Stoakes wrote:
> > > > > > > > > > On Wed, Jun 18, 2025 at 07:28:16PM +0530, Dev Jain wrote:
> > > > > > > > > > > On 18/06/25 5:27 pm, Lorenzo Stoakes wrote:
> > > > > > > > > > > > On Wed, Jun 18, 2025 at 05:15:50PM +0530, Dev Jain wrote:
> > > > > > > > > > > > Are you accounting for sys.max_map_count? If not, then you'll be
> > > > > > > > > > > > hitting that first.
> > > > > > > > > > > run_vmtests.sh will run the test in overcommit mode so that won't be an issue.
> > > > > > > > > > Umm, what? You mean overcommit all mode, and that has no bearing on the
> > > > > > > > > > max mapping count check.
> > > > > > > > > > 
> > > > > > > > > > In do_mmap():
> > > > > > > > > > 
> > > > > > > > > > 	/* Too many mappings? */
> > > > > > > > > > 	if (mm->map_count > sysctl_max_map_count)
> > > > > > > > > > 		return -ENOMEM;
> > > > > > > > > > 
> > > > > > > > > > As well as numerous other checks in mm/vma.c.
> > > > > > > > > Ah sorry, didn't look at the code properly, just assumed that
> > > > > > > > > overcommit_always meant overriding this.
> > > > > > > > No problem! It's hard to be aware of everything in mm :)
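
[A minimal standalone sketch of the ceiling discussed above: read
vm.max_map_count from procfs and compare it against the number of chunks a
test plans to map. The planned chunk count below is illustrative, not the
selftest's actual value.]

#include <stdio.h>

#define NR_CHUNKS_PLANNED 20480UL	/* hypothetical low + high chunk total */

int main(void)
{
	unsigned long max_map_count = 0;
	FILE *fp = fopen("/proc/sys/vm/max_map_count", "r");

	if (fp) {
		if (fscanf(fp, "%lu", &max_map_count) != 1)
			max_map_count = 0;
		fclose(fp);
	}

	/* Each non-mergeable mapping consumes one map_count slot. */
	if (max_map_count && NR_CHUNKS_PLANNED > max_map_count)
		printf("would trip do_mmap()'s map_count check: %lu > %lu\n",
		       NR_CHUNKS_PLANNED, max_map_count);
	return 0;
}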
> > > > > > > > > > 
> > > > > > > > > > I'm not sure why an overcommit toggle is even necessary when you could use
> > > > > > > > > > MAP_NORESERVE or simply map PROT_NONE to avoid the OVERCOMMIT_GUESS limits?
> > > > > > > > > > 
> > > > > > > > > > I'm pretty confused as to what this test is really achieving honestly. This
> > > > > > > > > > isn't a useful way of asserting mmap() behaviour as far as I can tell.
> > > > > > > > > Well, seems like a useful way to me at least : ) Not sure if you are in the
> > > > > > > > > mood to discuss that but if you'd like me to explain from start to end what
> > > > > > > > > the test is doing, I can do that : )
> > > > > > > > I just don't have time right now, I guess I'll have to come back to it
> > > > > > > > later... it's not the end of the world for it to be iffy in my view as long
> > > > > > > > as it passes, but it might just not be of great value.
> > > > > > > > 
> > > > > > > > Philosophically I'd rather we didn't assert internal implementation details
> > > > > > > > like where we place mappings in userland memory. At no point do we promise
> > > > > > > > to not leave larger gaps if we feel like it :)
> > > > > > > You have a fair point. Anyhow a debate for another day.
> > > > > > > > I'm guessing, reading more, the _real_ test here is some mathematical
> > > > > > > > assertion about layout from HIGH_ADDR_SHIFT -> end of address space when
> > > > > > > > using hints.
> > > > > > > > 
> > > > > > > > But again I'm not sure that achieves much and again also is asserting
> > > > > > > > internal implementation details.
> > > > > > > > 
> > > > > > > > Correct behaviour of this kind of thing probably better belongs to tests in
> > > > > > > > the userland VMA testing I'd say.
> > > > > > > > 
> > > > > > > > Sorry I don't mean to do down work you've done before, just giving an
> > > > > > > > honest technical appraisal!
> > > > > > > Nah, it will be rather hilarious to see it all go down the drain xD
> > > > > > > > Anyway don't let this block work to fix the test if it's failing. We can
> > > > > > > > revisit this later.
> > > > > > > Sure. @Aboorva and Donet, I still believe that the correct approach is to
> > > > > > > elide the gap check at the crossing boundary. What do you think?
> > > > > > > 
> > > > > > One problem I am seeing with this approach is that, since the hint address
> > > > > > is generated randomly, the VMAs are also being created at random locations
> > > > > > based on the hint address. So, for the VMAs created at high addresses, we
> > > > > > cannot guarantee that the gaps between them will be aligned to
> > > > > > MAP_CHUNK_SIZE.
> > > > > > 
> > > > > > High address VMAs
> > > > > > -----------------
> > > > > > 1000000000000-1000040000000 r--p 00000000 00:00 0
> > > > > > 2000000000000-2000040000000 r--p 00000000 00:00 0
> > > > > > 4000000000000-4000040000000 r--p 00000000 00:00 0
> > > > > > 8000000000000-8000040000000 r--p 00000000 00:00 0
> > > > > > e80009d260000-fffff9d260000 r--p 00000000 00:00 0
> > > > > > 
> > > > > > I have a different approach to solve this issue.
> > > > > > 
> > > > > > From 0 to 128TB, we map memory directly without using any hint. For the
> > > > > > range above 256TB up to 512TB, we perform the mapping using hint
> > > > > > addresses. In the current test, we use random hint addresses, but I have
> > > > > > modified it to generate hint addresses linearly starting from 128TB.
> > > > > > 
> > > > > > With this change:
> > > > > > 
> > > > > > The 0–128TB range is mapped without hints and verified accordingly.
> > > > > > 
> > > > > > The 128TB–512TB range is mapped using linear hint addresses and then
> > > > > > verified.
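
[The two-phase validation described above amounts to a range check along the
lines of the sketch below; the bounds are illustrative, and arch-specific
upper limits such as PPC's 4PB are ignored here.]

#include <stdio.h>

#define ADDR_MARK_128TB (1UL << 47)	/* first address beyond 128TB */
#define ADDR_MARK_512TB (1UL << 49)	/* first address beyond 512TB */

/*
 * hinted == 0: phase one, mapped without a hint, must stay below 128TB.
 * hinted == 1: phase two, linear hints, expected in [128TB, 512TB).
 */
static int validate_chunk(void *ptr, int hinted)
{
	unsigned long addr = (unsigned long)ptr;

	if (!hinted)
		return addr < ADDR_MARK_128TB ? 0 : -1;
	return (addr >= ADDR_MARK_128TB && addr < ADDR_MARK_512TB) ? 0 : -1;
}

int main(void)
{
	/* 256TB lies inside the hinted window, so this prints 0. */
	printf("%d\n", validate_chunk((void *)(1UL << 48), 1));
	return 0;
}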
> > > > > > 
> > > > > > Below are the VMAs obtained with this approach:
> > > > > > 
> > > > > > 10000000-10010000 r-xp 00000000 fd:05 135019531
> > > > > > 10010000-10020000 r--p 00000000 fd:05 135019531
> > > > > > 10020000-10030000 rw-p 00010000 fd:05 135019531
> > > > > > 20000000-10020000000 r--p 00000000 00:00 0
> > > > > > 10020800000-10020830000 rw-p 00000000 00:00 0
> > > > > > 1004bcf0000-1004c000000 rw-p 00000000 00:00 0
> > > > > > 1004c000000-7fff8c000000 r--p 00000000 00:00 0
> > > > > > 7fff8c130000-7fff8c360000 r-xp 00000000 fd:00 792355
> > > > > > 7fff8c360000-7fff8c370000 r--p 00230000 fd:00 792355
> > > > > > 7fff8c370000-7fff8c380000 rw-p 00240000 fd:00 792355
> > > > > > 7fff8c380000-7fff8c460000 r-xp 00000000 fd:00 792358
> > > > > > 7fff8c460000-7fff8c470000 r--p 000d0000 fd:00 792358
> > > > > > 7fff8c470000-7fff8c480000 rw-p 000e0000 fd:00 792358
> > > > > > 7fff8c490000-7fff8c4d0000 r--p 00000000 00:00 0
> > > > > > 7fff8c4d0000-7fff8c4e0000 r-xp 00000000 00:00 0
> > > > > > 7fff8c4e0000-7fff8c530000 r-xp 00000000 fd:00 792351
> > > > > > 7fff8c530000-7fff8c540000 r--p 00040000 fd:00 792351
> > > > > > 7fff8c540000-7fff8c550000 rw-p 00050000 fd:00 792351
> > > > > > 7fff8d000000-7fffcd000000 r--p 00000000 00:00 0
> > > > > > 7fffe9c80000-7fffe9d90000 rw-p 00000000 00:00 0
> > > > > > 800000000000-2000000000000 r--p 00000000 00:00 0 -> High Address (128TB to 512TB)
> > > > > > 
> > > > > > diff --git a/tools/testing/selftests/mm/virtual_address_range.c b/tools/testing/selftests/mm/virtual_address_range.c
> > > > > > index 4c4c35eac15e..0be008cba4b0 100644
> > > > > > --- a/tools/testing/selftests/mm/virtual_address_range.c
> > > > > > +++ b/tools/testing/selftests/mm/virtual_address_range.c
> > > > > > @@ -56,21 +56,21 @@
> > > > > >  #ifdef __aarch64__
> > > > > >  #define HIGH_ADDR_MARK ADDR_MARK_256TB
> > > > > > -#define HIGH_ADDR_SHIFT 49
> > > > > > +#define HIGH_ADDR_SHIFT 48
> > > > > >  #define NR_CHUNKS_LOW NR_CHUNKS_256TB
> > > > > >  #define NR_CHUNKS_HIGH NR_CHUNKS_3840TB
> > > > > >  #else
> > > > > >  #define HIGH_ADDR_MARK ADDR_MARK_128TB
> > > > > > -#define HIGH_ADDR_SHIFT 48
> > > > > > +#define HIGH_ADDR_SHIFT 47
> > > > > >  #define NR_CHUNKS_LOW NR_CHUNKS_128TB
> > > > > >  #define NR_CHUNKS_HIGH NR_CHUNKS_384TB
> > > > > >  #endif
> > > > > > 
> > > > > > -static char *hint_addr(void)
> > > > > > +static char *hint_addr(int hint)
> > > > > >  {
> > > > > > -	int bits = HIGH_ADDR_SHIFT + rand() % (63 - HIGH_ADDR_SHIFT);
> > > > > > +	unsigned long addr = ((1UL << HIGH_ADDR_SHIFT) + (hint * MAP_CHUNK_SIZE));
> > > > > > 
> > > > > > -	return (char *) (1UL << bits);
> > > > > > +	return (char *) (addr);
> > > > > >  }
> > > > > > 
> > > > > >  static void validate_addr(char *ptr, int high_addr)
> > > > > > @@ -217,7 +217,7 @@ int main(int argc, char *argv[])
> > > > > >  	}
> > > > > > 
> > > > > >  	for (i = 0; i < NR_CHUNKS_HIGH; i++) {
> > > > > > -		hint = hint_addr();
> > > > > > +		hint = hint_addr(i);
> > > > > >  		hptr[i] = mmap(hint, MAP_CHUNK_SIZE, PROT_READ,
> > > > > >  			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
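
[For a standalone feel of what the modified hint_addr() does, the sketch
below maps a few chunks at 128TB + i * chunk size and prints where the
kernel actually placed them. The 1GB chunk size is a demo-only value; the
selftest's MAP_CHUNK_SIZE is much larger.]

#include <stdio.h>
#include <sys/mman.h>

#define HIGH_ADDR_SHIFT 47		/* 128TB, as in the diff */
#define DEMO_CHUNK_SIZE (1UL << 30)	/* 1GB, demo-only value */

int main(void)
{
	unsigned long i;

	for (i = 0; i < 4; i++) {
		void *hint = (void *)((1UL << HIGH_ADDR_SHIFT) +
				      i * DEMO_CHUNK_SIZE);
		void *ptr = mmap(hint, DEMO_CHUNK_SIZE, PROT_READ,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		/* The kernel may honour, round, or ignore the hint. */
		printf("hint %p -> got %p\n", hint, ptr);
		if (ptr != MAP_FAILED)
			munmap(ptr, DEMO_CHUNK_SIZE);
	}
	return 0;
}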
> > > > > Ah you sent it here, thanks. This is fine really, but the mystery is
> > > > > something else.
> > > > 
> > > > Thanks Dev
> > > > 
> > > > I can send out v2 with this patch included, right?
> > > Sorry not yet :) this patch will just hide the real problem, which
> > > is, after the hint addresses get exhausted, why on ppc the kernel
> > > cannot find a VMA to install despite having such large gaps between
> > > VMAs.
> > 
> > I think there is some confusion here, so let me clarify.
> > 
> > On PowerPC, mmap is able to find VMAs both with and without a hint.
> > There is no issue there. If you look at the test, from 0 to 128TB we
> > are mapping without any hint, and the VMAs are getting created as
> > expected.
> > 
> > Above 256TB, we are mapping with random hint addresses, and with
> > those hints, all VMAs are being created above 256TB. No mmap call
> > is failing in this case.
> > 
> > The problem is with the test itself: since we are providing random
> > hint addresses, the VMAs are also being created at random locations.
> > 
> > Below are the VMAs created with hint addresses:
> > 
> > 1. 256TB hint address
> > 
> > 1000000000000-1000040000000 r--p 00000000 00:00 0 [anon:virtual_address_range]
> > 
> > 2. 512TB hint address
> > 
> > 2000000000000-2000040000000 r--p 00000000 00:00 0 [anon:virtual_address_range]
> > 
> > 3. 1024TB hint address
> > 
> > 4000000000000-4000040000000 r--p 00000000 00:00 0 [anon:virtual_address_range]
> > 
> > 4. 2048TB hint address
> > 
> > 8000000000000-8000040000000 r--p 00000000 00:00 0 [anon:virtual_address_range]
> > 
> > 5. Above 3096TB hint address
> > 
> > eb95410220000-fffff90220000 r--p 00000000 00:00 0 [anon:virtual_address_range]
> > 
> > We support up to 4PB, and since the hint addresses are random,
> > the VMAs are created at random locations.
> > 
> > With sequential hint addresses from 128TB to 512TB, we provide the
> > hint addresses in order, and the VMAs are created at the hinted
> > addresses.
> > 
> > Within 512TB, we were able to test both high and low addresses, so
> > I thought sequential hinting would be a good approach. Since there
> > has been a lot of confusion, I'm considering adding a complete 4PB
> > allocation test: from 0 to 128TB we allocate without any hint, and
> > from 128TB onward we use sequential hint addresses.
> > 
> > diff --git a/tools/testing/selftests/mm/virtual_address_range.c b/tools/testing/selftests/mm/virtual_address_range.c
> > index e24c36a39f22..f2009d23f8b2 100644
> > --- a/tools/testing/selftests/mm/virtual_address_range.c
> > +++ b/tools/testing/selftests/mm/virtual_address_range.c
> > @@ -50,6 +50,7 @@
> >  #define NR_CHUNKS_256TB (NR_CHUNKS_128TB * 2UL)
> >  #define NR_CHUNKS_384TB (NR_CHUNKS_128TB * 3UL)
> >  #define NR_CHUNKS_3840TB (NR_CHUNKS_128TB * 30UL)
> > +#define NR_CHUNKS_3968TB (NR_CHUNKS_128TB * 31UL)
> > 
> >  #define ADDR_MARK_128TB (1UL << 47) /* First address beyond 128TB */
> >  #define ADDR_MARK_256TB (1UL << 48) /* First address beyond 256TB */
> > @@ -59,6 +60,11 @@
> >  #define HIGH_ADDR_SHIFT 49
> >  #define NR_CHUNKS_LOW NR_CHUNKS_256TB
> >  #define NR_CHUNKS_HIGH NR_CHUNKS_3840TB
> > +#elif defined(__PPC64__)
> > +#define HIGH_ADDR_MARK ADDR_MARK_128TB
> > +#define HIGH_ADDR_SHIFT 47
> > +#define NR_CHUNKS_LOW NR_CHUNKS_128TB
> > +#define NR_CHUNKS_HIGH NR_CHUNKS_3968TB
> >  #else
> >  #define HIGH_ADDR_MARK ADDR_MARK_128TB
> >  #define HIGH_ADDR_SHIFT 48
> > 
> > With this the test is passing.
> Ah okay this was the problem, PPC got extended for 52 bits and the
> test was not updated. This is the correct fix, you can go ahead
> with this one.

Thanks Dev

> > 
> > > It should be quite easy to trace which function is failing. Can you
> > > please do some debugging for me? Otherwise I will have to go ahead
> > > with setting up a PPC VM and testing myself :)
> > 
> > Can we fix it this way?
> 
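
[For completeness, a back-of-the-envelope check of the new NR_CHUNKS_3968TB
constant in the diff above: with a 52-bit address space (4096TB) and the
low 128TB exercised without hints, the hinted region is 4096TB - 128TB =
3968TB = 31 x 128TB. The 16GB chunk size is assumed from the selftest's
MAP_CHUNK_SIZE.]

#include <stdio.h>

#define TB (1UL << 40)
#define MAP_CHUNK_SIZE (1UL << 34)	/* assumed 16GB, as in the selftest */

int main(void)
{
	unsigned long total = 1UL << 52;	/* 52-bit VA: 4096TB */
	unsigned long low = 128 * TB;		/* unhinted low range */
	unsigned long high = total - low;	/* hinted high range */

	printf("high region: %luTB (= %lu x 128TB)\n",
	       high / TB, high / (128 * TB));
	printf("high chunks: %lu\n", high / MAP_CHUNK_SIZE);
	return 0;
}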