Date: Mon, 23 Jun 2025 23:02:06 +0530
From: Donet Tom <donettom@linux.ibm.com>
To: Dev Jain
Cc: Lorenzo Stoakes, Aboorva Devarajan, akpm@linux-foundation.org,
	Liam.Howlett@oracle.com, shuah@kernel.org, pfalcato@suse.de,
	david@redhat.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com,
	npache@redhat.com, ryan.roberts@arm.com, baohua@kernel.org,
	linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
	linux-kernel@vger.kernel.org, ritesh.list@gmail.com
Subject: Re: [PATCH 1/6] mm/selftests: Fix virtual_address_range test issues.
In-Reply-To: <2fc32719-1e38-4bf0-8ec5-5bcb452d939f@arm.com>
On Mon, Jun 23, 2025 at 10:23:02AM +0530, Dev Jain wrote:
>
> On 21/06/25 11:25 pm, Donet Tom wrote:
> > On Fri, Jun 20, 2025 at 08:15:25PM +0530, Dev Jain wrote:
> > > On 19/06/25 1:53 pm, Donet Tom wrote:
> > > > On Wed, Jun 18, 2025 at 08:13:54PM +0530, Dev Jain wrote:
> > > > > On 18/06/25 8:05 pm, Lorenzo Stoakes wrote:
> > > > > > On Wed, Jun 18, 2025 at 07:47:18PM +0530, Dev Jain wrote:
> > > > > > > On 18/06/25 7:37 pm, Lorenzo Stoakes wrote:
> > > > > > > > On Wed, Jun 18, 2025 at 07:28:16PM +0530, Dev Jain wrote:
> > > > > > > > > On 18/06/25 5:27 pm, Lorenzo Stoakes wrote:
> > > > > > > > > > On Wed, Jun 18, 2025 at 05:15:50PM +0530, Dev Jain wrote:
> > > > > > > > > > Are you accounting for sys.max_map_count? If not, then you'll be hitting that
> > > > > > > > > > first.
> > > > > > > > > run_vmtests.sh will run the test in overcommit mode so that won't be an issue.
> > > > > > > > Umm, what? You mean overcommit all mode, and that has no bearing on the max
> > > > > > > > mapping count check.
> > > > > > > >
> > > > > > > > In do_mmap():
> > > > > > > >
> > > > > > > > 	/* Too many mappings? */
> > > > > > > > 	if (mm->map_count > sysctl_max_map_count)
> > > > > > > > 		return -ENOMEM;
> > > > > > > >
> > > > > > > > As well as numerous other checks in mm/vma.c.
> > > > > > > Ah sorry, didn't look at the code properly, just assumed that overcommit_always
> > > > > > > meant overriding this.
> > > > > > No problem! It's hard to be aware of everything in mm :)
> > > > > >
> > > > > > > > I'm not sure why an overcommit toggle is even necessary when you could use
> > > > > > > > MAP_NORESERVE or simply map PROT_NONE to avoid the OVERCOMMIT_GUESS limits?
> > > > > > > >
> > > > > > > > I'm pretty confused as to what this test is really achieving honestly. This
> > > > > > > > isn't a useful way of asserting mmap() behaviour as far as I can tell.
> > > > > > > Well, seems like a useful way to me at least :) Not sure if you are in the mood
> > > > > > > to discuss that but if you'd like me to explain from start to end what the test
> > > > > > > is doing, I can do that :)
> > > > > >
> > > > > > I just don't have time right now, I guess I'll have to come back to it
> > > > > > later... it's not the end of the world for it to be iffy in my view as long as
> > > > > > it passes, but it might just not be of great value.
> > > > > >
> > > > > > Philosophically I'd rather we didn't assert internal implementation details like
> > > > > > where we place mappings in userland memory. At no point do we promise to not
> > > > > > leave larger gaps if we feel like it :)
> > > > > You have a fair point. Anyhow a debate for another day.
> > > > >
> > > > > > I'm guessing, reading more, the _real_ test here is some mathematical assertion
> > > > > > about layout from HIGH_ADDR_SHIFT -> end of address space when using hints.
> > > > > >
> > > > > > But again I'm not sure that achieves much and again also is asserting internal
> > > > > > implementation details.
> > > > > >
> > > > > > Correct behaviour of this kind of thing probably better belongs to tests in the
> > > > > > userland VMA testing I'd say.
> > > > > >
> > > > > > Sorry I don't mean to do down work you've done before, just giving an honest
> > > > > > technical appraisal!
> > > > > Nah, it will be rather hilarious to see it all go down the drain xD
> > > > >
> > > > > > Anyway don't let this block work to fix the test if it's failing. We can revisit
> > > > > > this later.
> > > > > Sure. @Aboorva and Donet, I still believe that the correct approach is to elide
> > > > > the gap check at the crossing boundary. What do you think?
> > > >
> > > > One problem I am seeing with this approach is that, since the hint address
> > > > is generated randomly, the VMAs are also being created randomly based on
> > > > the hint address. So, for the VMAs created at high addresses, we cannot
> > > > guarantee that the gaps between them will be aligned to MAP_CHUNK_SIZE.
> > > >
> > > > High address VMAs
> > > > -----------------
> > > > 1000000000000-1000040000000 r--p 00000000 00:00 0
> > > > 2000000000000-2000040000000 r--p 00000000 00:00 0
> > > > 4000000000000-4000040000000 r--p 00000000 00:00 0
> > > > 8000000000000-8000040000000 r--p 00000000 00:00 0
> > > > e80009d260000-fffff9d260000 r--p 00000000 00:00 0
> > > >
> > > > I have a different approach to solve this issue.
> > > It is really weird that such a large amount of VA space
> > > is left between the two VMAs yet mmap is failing.
> > >
> > > Can you please do the following:
> > > set /proc/sys/vm/max_map_count to the highest value possible.
> > > If running without run_vmtests.sh, set /proc/sys/vm/overcommit_memory to 1.
> > > In validate_complete_va_space:
> > >
> > > 	if (start_addr >= HIGH_ADDR_MARK && found == false) {
> > > 		found = true;
> > > 		continue;
> > > 	}
> >
> > Thanks Dev for the suggestion. I set max_map_count to the highest value,
> > set overcommit_memory to 1, added this code change as well, and then
> > tried. The test is still failing.
> >
> > > where found is initialized to false. This will skip the check
> > > for the boundary.
> > >
> > > After this can you tell whether the test is still failing.
> > >
> > > Also can you give me the complete output of /proc/pid/maps
> > > after putting a sleep at the end of the test.
> >
> > On powerpc the DEFAULT_MAP_WINDOW is 128TB and the total address space
> > size is 4PB, so with a hint the test can map up to 4PB. Since the hint
> > address is random in this test, the VMAs are created at random high
> > addresses. IIUC this is expected.
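(For reference, a minimal, self-contained sketch of where Dev's boundary-skip
would sit inside a /proc/self/maps walk. This is not the selftest source: the
check_gaps() helper, the loop structure, and the gap-alignment check are
illustrative reconstructions, and the constants only mirror what the test
appears to use. The maps output from the failing random-hint run follows.)

	#include <stdbool.h>
	#include <stdio.h>

	#define HIGH_ADDR_MARK	(1UL << 47)	/* 128TB boundary (powerpc) */
	#define MAP_CHUNK_SIZE	(1UL << 34)	/* assumed 16GB chunk size */

	static int check_gaps(void)
	{
		FILE *maps = fopen("/proc/self/maps", "r");
		unsigned long start_addr, end_addr, prev_end = 0;
		bool found = false;	/* skipped the crossing yet? */

		if (!maps)
			return -1;

		while (fscanf(maps, "%lx-%lx%*[^\n]", &start_addr, &end_addr) == 2) {
			/* Elide the gap check once, where the mappings
			 * cross HIGH_ADDR_MARK. */
			if (start_addr >= HIGH_ADDR_MARK && !found) {
				found = true;
				prev_end = end_addr;
				continue;
			}
			/* Everywhere else, require MAP_CHUNK_SIZE-aligned gaps. */
			if (prev_end && (start_addr - prev_end) % MAP_CHUNK_SIZE) {
				fclose(maps);
				return 1;	/* unexpected gap */
			}
			prev_end = end_addr;
		}
		fclose(maps);
		return 0;
	}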
> >
> > 10000000-10010000 r-xp 00000000 fd:05 134226638  /home/donet/linux/tools/testing/selftests/mm/virtual_address_range
> > 10010000-10020000 r--p 00000000 fd:05 134226638  /home/donet/linux/tools/testing/selftests/mm/virtual_address_range
> > 10020000-10030000 rw-p 00010000 fd:05 134226638  /home/donet/linux/tools/testing/selftests/mm/virtual_address_range
> > 30000000-10030000000 r--p 00000000 00:00 0       [anon:virtual_address_range]
> > 10030770000-100307a0000 rw-p 00000000 00:00 0    [heap]
> > 1004f000000-7fff8f000000 r--p 00000000 00:00 0   [anon:virtual_address_range]
> > 7fff8faf0000-7fff8fe00000 rw-p 00000000 00:00 0
> > 7fff8fe00000-7fff90030000 r-xp 00000000 fd:00 792355  /usr/lib64/libc.so.6
> > 7fff90030000-7fff90040000 r--p 00230000 fd:00 792355  /usr/lib64/libc.so.6
> > 7fff90040000-7fff90050000 rw-p 00240000 fd:00 792355  /usr/lib64/libc.so.6
> > 7fff90050000-7fff90130000 r-xp 00000000 fd:00 792358  /usr/lib64/libm.so.6
> > 7fff90130000-7fff90140000 r--p 000d0000 fd:00 792358  /usr/lib64/libm.so.6
> > 7fff90140000-7fff90150000 rw-p 000e0000 fd:00 792358  /usr/lib64/libm.so.6
> > 7fff90160000-7fff901a0000 r--p 00000000 00:00 0  [vvar]
> > 7fff901a0000-7fff901b0000 r-xp 00000000 00:00 0  [vdso]
> > 7fff901b0000-7fff90200000 r-xp 00000000 fd:00 792351  /usr/lib64/ld64.so.2
> > 7fff90200000-7fff90210000 r--p 00040000 fd:00 792351  /usr/lib64/ld64.so.2
> > 7fff90210000-7fff90220000 rw-p 00050000 fd:00 792351  /usr/lib64/ld64.so.2
> > 7fffc9770000-7fffc9880000 rw-p 00000000 00:00 0  [stack]
> > 1000000000000-1000040000000 r--p 00000000 00:00 0  [anon:virtual_address_range]
> > 2000000000000-2000040000000 r--p 00000000 00:00 0  [anon:virtual_address_range]
> > 4000000000000-4000040000000 r--p 00000000 00:00 0  [anon:virtual_address_range]
> > 8000000000000-8000040000000 r--p 00000000 00:00 0  [anon:virtual_address_range]
> > eb95410220000-fffff90220000 r--p 00000000 00:00 0  [anon:virtual_address_range]
> >
> > If I give the hint addresses serially from 128TB, then the address space
> > is contiguous, the gaps are exactly MAP_CHUNK_SIZE, and the test passes.
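(In code, the serial hint generation boils down to the snippet below -- a
condensed, standalone version of the hint_addr() change in the diff at the
end of this mail. HIGH_ADDR_SHIFT = 47 is the non-arm64 value from that diff;
the 16GB MAP_CHUNK_SIZE is an assumption for illustration. The maps output
from this passing run follows.)

	#include <stdio.h>

	#define HIGH_ADDR_SHIFT	47		/* 128TB boundary, per the diff below */
	#define MAP_CHUNK_SIZE	(1UL << 34)	/* assumed 16GB chunks */

	/* Sequential hints: 128TB, 128TB + 16GB, 128TB + 32GB, ... */
	static char *hint_addr(int hint)
	{
		unsigned long addr = (1UL << HIGH_ADDR_SHIFT) + hint * MAP_CHUNK_SIZE;

		return (char *)addr;
	}

	int main(void)
	{
		/* Print the first few hints, for illustration. */
		for (int i = 0; i < 4; i++)
			printf("hint %d -> %p\n", i, hint_addr(i));
		return 0;
	}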
> >
> > 10000000-10010000 r-xp 00000000 fd:05 134226638  /home/donet/linux/tools/testing/selftests/mm/virtual_address_range
> > 10010000-10020000 r--p 00000000 fd:05 134226638  /home/donet/linux/tools/testing/selftests/mm/virtual_address_range
> > 10020000-10030000 rw-p 00010000 fd:05 134226638  /home/donet/linux/tools/testing/selftests/mm/virtual_address_range
> > 33000000-10033000000 r--p 00000000 00:00 0       [anon:virtual_address_range]
> > 10033380000-100333b0000 rw-p 00000000 00:00 0    [heap]
> > 1006f0f0000-10071000000 rw-p 00000000 00:00 0
> > 10071000000-7fffb1000000 r--p 00000000 00:00 0   [anon:virtual_address_range]
> > 7fffb15d0000-7fffb1800000 r-xp 00000000 fd:00 792355  /usr/lib64/libc.so.6
> > 7fffb1800000-7fffb1810000 r--p 00230000 fd:00 792355  /usr/lib64/libc.so.6
> > 7fffb1810000-7fffb1820000 rw-p 00240000 fd:00 792355  /usr/lib64/libc.so.6
> > 7fffb1820000-7fffb1900000 r-xp 00000000 fd:00 792358  /usr/lib64/libm.so.6
> > 7fffb1900000-7fffb1910000 r--p 000d0000 fd:00 792358  /usr/lib64/libm.so.6
> > 7fffb1910000-7fffb1920000 rw-p 000e0000 fd:00 792358  /usr/lib64/libm.so.6
> > 7fffb1930000-7fffb1970000 r--p 00000000 00:00 0  [vvar]
> > 7fffb1970000-7fffb1980000 r-xp 00000000 00:00 0  [vdso]
> > 7fffb1980000-7fffb19d0000 r-xp 00000000 fd:00 792351  /usr/lib64/ld64.so.2
> > 7fffb19d0000-7fffb19e0000 r--p 00040000 fd:00 792351  /usr/lib64/ld64.so.2
> > 7fffb19e0000-7fffb19f0000 rw-p 00050000 fd:00 792351  /usr/lib64/ld64.so.2
> > 7fffc5470000-7fffc5580000 rw-p 00000000 00:00 0  [stack]
> > 800000000000-2aab000000000 r--p 00000000 00:00 0  [anon:virtual_address_range]
>
> Thank you for this output. I can't wrap my head around why this behaviour changes
> when you generate the hint sequentially. The mmap() syscall is supposed to do the
> following (irrespective of high VA space or not) - if the allocation at the hint

Yes, it is working as expected. On PowerPC, the DEFAULT_MAP_WINDOW is 128TB,
and the system can map up to 4PB. In the test, the first mmap() call maps
memory up to 128TB without any hint, so those VMAs are created below the
128TB boundary.

In the second mmap() call, we provide a hint starting from 256TB, and the
hint address is generated randomly above 256TB. The mappings are correctly
created at these hint addresses. Since the hint addresses are random, the
resulting VMAs are also created at random locations.

So, what I tried is: map from 0 to 128TB without any hint, and then for the
second mmap(), instead of starting the hint from 256TB, start from 128TB.
Instead of using random hint addresses, I used sequential hint addresses
from 128TB up to 512TB. With this change, the VMAs are created in order,
and the test passes.

800000000000-2aab000000000 r--p 00000000 00:00 0   -> 128TB to 512TB VMA

I think we will see the same behaviour on x86 with X86_FEATURE_LA57. I will
send the updated patch in V2.

> addr succeeds, then all is well, otherwise, do a top-down search for a large
> enough gap. I am not aware of the nuances in powerpc but I really am suspecting
> a bug in powerpc mmap code. Can you try to do some tracing - which function
> eventually fails to find the empty gap?
>
> Through my limited code tracing - we should end up in slice_find_area_topdown,
> then we ask the generic code to find the gap using vm_unmapped_area. So I
> suspect something is happening between this, probably slice_scan_available().
>
> >
> > > > From 0 to 128TB, we map memory directly without using any hint. For the
> > > > range above 256TB up to 512TB, we perform the mapping using hint addresses.
> > > > In the current test, we use random hint addresses, but I have modified it
> > > > to generate hint addresses linearly starting from 128TB.
> > > >
> > > > With this change:
> > > >
> > > > The 0–128TB range is mapped without hints and verified accordingly.
> > > >
> > > > The 128TB–512TB range is mapped using linear hint addresses and then verified.
> > > >
> > > > Below are the VMAs obtained with this approach:
> > > >
> > > > 10000000-10010000 r-xp 00000000 fd:05 135019531
> > > > 10010000-10020000 r--p 00000000 fd:05 135019531
> > > > 10020000-10030000 rw-p 00010000 fd:05 135019531
> > > > 20000000-10020000000 r--p 00000000 00:00 0
> > > > 10020800000-10020830000 rw-p 00000000 00:00 0
> > > > 1004bcf0000-1004c000000 rw-p 00000000 00:00 0
> > > > 1004c000000-7fff8c000000 r--p 00000000 00:00 0
> > > > 7fff8c130000-7fff8c360000 r-xp 00000000 fd:00 792355
> > > > 7fff8c360000-7fff8c370000 r--p 00230000 fd:00 792355
> > > > 7fff8c370000-7fff8c380000 rw-p 00240000 fd:00 792355
> > > > 7fff8c380000-7fff8c460000 r-xp 00000000 fd:00 792358
> > > > 7fff8c460000-7fff8c470000 r--p 000d0000 fd:00 792358
> > > > 7fff8c470000-7fff8c480000 rw-p 000e0000 fd:00 792358
> > > > 7fff8c490000-7fff8c4d0000 r--p 00000000 00:00 0
> > > > 7fff8c4d0000-7fff8c4e0000 r-xp 00000000 00:00 0
> > > > 7fff8c4e0000-7fff8c530000 r-xp 00000000 fd:00 792351
> > > > 7fff8c530000-7fff8c540000 r--p 00040000 fd:00 792351
> > > > 7fff8c540000-7fff8c550000 rw-p 00050000 fd:00 792351
> > > > 7fff8d000000-7fffcd000000 r--p 00000000 00:00 0
> > > > 7fffe9c80000-7fffe9d90000 rw-p 00000000 00:00 0
> > > > 800000000000-2000000000000 r--p 00000000 00:00 0  -> High Address (128TB to 512TB)
> > > >
> > > > diff --git a/tools/testing/selftests/mm/virtual_address_range.c b/tools/testing/selftests/mm/virtual_address_range.c
> > > > index 4c4c35eac15e..0be008cba4b0 100644
> > > > --- a/tools/testing/selftests/mm/virtual_address_range.c
> > > > +++ b/tools/testing/selftests/mm/virtual_address_range.c
> > > > @@ -56,21 +56,21 @@
> > > >  #ifdef __aarch64__
> > > >  #define HIGH_ADDR_MARK  ADDR_MARK_256TB
> > > > -#define HIGH_ADDR_SHIFT 49
> > > > +#define HIGH_ADDR_SHIFT 48
> > > >  #define NR_CHUNKS_LOW   NR_CHUNKS_256TB
> > > >  #define NR_CHUNKS_HIGH  NR_CHUNKS_3840TB
> > > >  #else
> > > >  #define HIGH_ADDR_MARK  ADDR_MARK_128TB
> > > > -#define HIGH_ADDR_SHIFT 48
> > > > +#define HIGH_ADDR_SHIFT 47
> > > >  #define NR_CHUNKS_LOW   NR_CHUNKS_128TB
> > > >  #define NR_CHUNKS_HIGH  NR_CHUNKS_384TB
> > > >  #endif
> > > >
> > > > -static char *hint_addr(void)
> > > > +static char *hint_addr(int hint)
> > > >  {
> > > > -	int bits = HIGH_ADDR_SHIFT + rand() % (63 - HIGH_ADDR_SHIFT);
> > > > +	unsigned long addr = ((1UL << HIGH_ADDR_SHIFT) + (hint * MAP_CHUNK_SIZE));
> > > >
> > > > -	return (char *) (1UL << bits);
> > > > +	return (char *) (addr);
> > > >  }
> > > >
> > > >  static void validate_addr(char *ptr, int high_addr)
> > > > @@ -217,7 +217,7 @@ int main(int argc, char *argv[])
> > > >  	}
> > > >
> > > >  	for (i = 0; i < NR_CHUNKS_HIGH; i++) {
> > > > -		hint = hint_addr();
> > > > +		hint = hint_addr(i);
> > > >  		hptr[i] = mmap(hint, MAP_CHUNK_SIZE, PROT_READ,
> > > >  			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> > > >
> > > >
> > > > Can we fix it this way?
>
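(For anyone who wants to poke at this outside the selftest harness, a minimal
standalone sketch of the modified high-address loop is below. It is
illustrative only: NR_CHUNKS is shrunk so it runs quickly, where the real
test uses NR_CHUNKS_HIGH, the 16GB MAP_CHUNK_SIZE is assumed, and the
low-address pass, validation, and cleanup are all omitted.)

	#include <stdio.h>
	#include <sys/mman.h>

	#define HIGH_ADDR_SHIFT	47		/* 128TB boundary (non-arm64) */
	#define MAP_CHUNK_SIZE	(1UL << 34)	/* assumed 16GB chunks */
	#define NR_CHUNKS	8		/* tiny, for illustration */

	int main(void)
	{
		for (int i = 0; i < NR_CHUNKS; i++) {
			char *hint = (char *)((1UL << HIGH_ADDR_SHIFT) +
					      (unsigned long)i * MAP_CHUNK_SIZE);
			char *ptr = mmap(hint, MAP_CHUNK_SIZE, PROT_READ,
					 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

			if (ptr == MAP_FAILED) {
				perror("mmap");
				return 1;
			}
			/* With sequential hints the kernel should be able to
			 * honour each hint, so ptr == hint is expected. */
			printf("chunk %d: hint %p -> got %p\n", i, hint, ptr);
		}
		return 0;
	}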