From: Ard Biesheuvel
Date: Thu, 6 Apr 2023 09:31:07 +0200
Subject: Re: [PATCH] arm64/mm: don't WARN when alloc/free-ing device private pages
To: John Hubbard
Cc: Andrew Morton, Catalin Marinas, Will Deacon, Anshuman Khandual, Mark Rutland, Kefeng Wang, Feiyang Chen, Alistair Popple, Ralph Campbell, linux-arm-kernel@lists.infradead.org, LKML, linux-mm@kvack.org, stable@vger.kernel.org
References: <20230406040515.383238-1-jhubbard@nvidia.com>
In-Reply-To: <20230406040515.383238-1-jhubbard@nvidia.com>

Hello John,

On Thu, 6 Apr 2023 at 06:05, John Hubbard wrote:
>
> Although CONFIG_DEVICE_PRIVATE and hmm_range_fault() and related
> functionality was first developed on x86, it also works on arm64.
> However, when trying this out on an arm64 system, it turns out that
> there is a massive slowdown during the setup and teardown phases.
>
> This slowdown is due to lots of calls to WARN_ON()'s that are checking
> for pages that are out of the physical range for the CPU. However,
> that's a design feature of device private pages: they are specifically
> chosen in order to be outside of the range of the CPU's true physical
> pages.
>

Currently, the vmemmap region is dimensioned to only cover the PFN range
that backs the linear map. So the WARN() seems appropriate here: you are
mapping struct page[] ranges outside of the allocated window, and afaict,
you might actually wrap around and corrupt the linear map at the start of
the kernel VA space like this.

> x86 doesn't have this warning. It only checks that pages are properly
> aligned. I've shown a comparison below between x86 (which works well)
> and arm64 (which has these warnings).
>
> memunmap_pages()
>   pageunmap_range()
>     if (pgmap->type == MEMORY_DEVICE_PRIVATE)
>       __remove_pages()
>         __remove_section()
>           sparse_remove_section()
>             section_deactivate()
>               depopulate_section_memmap()
>                 /* arch/arm64/mm/mmu.c */
>                 vmemmap_free()
>                 {
>                         WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
>                         ...
>                 }
>
>                 /* arch/x86/mm/init_64.c */
>                 vmemmap_free()
>                 {
>                         VM_BUG_ON(!PAGE_ALIGNED(start));
>                         VM_BUG_ON(!PAGE_ALIGNED(end));
>                         ...
>                 }
>
> So, the warning is a false positive for this case. Therefore, skip the
> warning if CONFIG_DEVICE_PRIVATE is set.
>

I don't think this is a false positive. We'll need to adjust VMEMMAP_SIZE
to account for this.

> Signed-off-by: John Hubbard
> cc:
> ---
>  arch/arm64/mm/mmu.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 6f9d8898a025..d5c9b611a8d1 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1157,8 +1157,10 @@ int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
>  int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>                 struct vmem_altmap *altmap)
>  {
> +/* Device private pages are outside of the CPU's physical page range. */
> +#ifndef CONFIG_DEVICE_PRIVATE
>         WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
> -
> +#endif
>         if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
>                 return vmemmap_populate_basepages(start, end, node, altmap);
>         else
> @@ -1169,8 +1171,10 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>  void vmemmap_free(unsigned long start, unsigned long end,
>                 struct vmem_altmap *altmap)
>  {
> +/* Device private pages are outside of the CPU's physical page range. */
> +#ifndef CONFIG_DEVICE_PRIVATE
>         WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
> -
> +#endif
>         unmap_hotplug_range(start, end, true, altmap);
>         free_empty_tables(start, end, VMEMMAP_START, VMEMMAP_END);
>  }
> --
> 2.40.0
>
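
For context, the vmemmap window discussed above is sized from the PFN range
of the linear map. A rough sketch of the relevant arm64 definitions
(paraphrased from arch/arm64/include/asm/memory.h of that era, quoted from
memory, so treat the exact expressions as approximate rather than
authoritative):

  /* Sketch only: roughly how the vmemmap window is dimensioned on arm64. */
  #define VMEMMAP_SHIFT  (PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT)
  /* Sized to cover struct pages for the linear map's PFN range only. */
  #define VMEMMAP_SIZE   ((_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET) >> VMEMMAP_SHIFT)
  #define VMEMMAP_START  (-(UL(1) << (VA_BITS - VMEMMAP_SHIFT)))
  #define VMEMMAP_END    (VMEMMAP_START + VMEMMAP_SIZE)

Since pfn_to_page() with SPARSEMEM_VMEMMAP is essentially vmemmap + pfn, a
device private PFN chosen above the top of RAM yields a struct page address
past VMEMMAP_END, which is what the WARN_ON() in vmemmap_populate() and
vmemmap_free() flags, and why the suggestion is to enlarge the window
(adjust VMEMMAP_SIZE) rather than compile out the warning.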